Package ‘stockfish’ (May 4, 2021)

Title: Analyze Chess Games with the 'Stockfish' Engine
Version: 0.1.2
Description: An implementation of the UCI open communication protocol that ships with 'Stockfish 11' <https://stockfishchess.org/>, a very popular, open source, powerful chess engine written in C++.
License: GPL (>= 3)
URL: https://github.com/curso-r/stockfish
BugReports: https://github.com/curso-r/stockfish/issues
Imports: processx, R6
Suggests: covr, testthat (>= 3.0.0)
Config/testthat/edition: 3
Encoding: UTF-8
RoxygenNote: 7.1.1
SystemRequirements: C++11, and GCC or clang
NeedsCompilation: yes
Author: Caio Lente [aut, cre], Tord Romstad [ctb], Marco Costalba [ctb], Joona Kiiski [ctb], Gary Linscott [ctb]
Maintainer: Caio Lente <clente@curso-r.com>
Repository: CRAN
Date/Publication: 2021-05-04 07:30:02 UTC

R topics documented: fish, fish_find

fish

Description

This class represents a Stockfish engine, allowing the user to send commands and receive outputs according to the UCI protocol. In short, a `fish` object, when created, spawns a detached Stockfish process and pipes into its stdin and stdout. For more information, see the full documentation by running `?fish`.

Bundled Stockfish

This package comes bundled with Stockfish, a very popular, open source, powerful chess engine written in C++. It can achieve an Elo rating of 3516, runs on Windows, macOS, Linux, iOS and Android, and can be compiled in less than a minute. When installing `{stockfish}` (lower case), Stockfish's (upper case) source code is compiled and the resulting executable is stored with your R packages. This is not a system-wide installation! You don't have to give it administrative privileges to run or even worry about any additional software. The only downside is that the bundled version of the engine is Stockfish 11, not the most recent Stockfish 12.
This is because the 12th version needs additional downloads, which would dramatically increase the installation time. If you want to, you can download the version of your choosing and pass its executable as an argument to `fish$new()`.

UCI Protocol

UCI (Universal Chess Interface) is an open communication protocol that enables chess engines to communicate with user interfaces. Strictly speaking, this class implements the UCI protocol as publicized by Stefan-Meyer Kahlen, just with a focus on the Stockfish engine. This means that some methods are not implemented (see Common Gotchas) and that all tests are run on Stockfish, but everything else should work fine with other engines.

The quoted text at the end of the documentation of each method was extracted directly from the official UCI protocol, so you can see exactly what each command can do. In general, the commands are pretty self-explanatory, except for long algebraic notation (LAN), the move notation used by UCI. In this notation, moves are recorded using the starting and ending positions of each piece, e.g. e2e4, e7e5, e1g1 (white short castling), e7e8q (promotion to queen), 0000 (nullmove).

Implementation

All the heavy lifting of this class is done by the `{processx}` package. The Stockfish process is created with `processx::process$new()` and IO is done with `write_input()` and `read_output()`. An important aspect of the communication protocol of any UCI engine is waiting for replies, and here this is done with a loop that queries the process with `poll_io()` and stops once the output comes back empty.

Before implementing the UCI protocol manually, this package used `{bigchess}`. It is a great package created by @rosawojciech, but it has some dependencies that are beyond the scope of this package, and ultimately I wanted more control over the API (e.g. using `{R6}`).

Common Gotchas

This class has some specifics that the user should keep in mind when trying to communicate with Stockfish.
Some of them are due to implementation choices, but others are related to the UCI protocol itself. This is by no means a comprehensive list (and you should probably read UCI's documentation), but here are a few things to look out for:

- Not every UCI method is implemented: since `{stockfish}` was made with Stockfish (and mainly Stockfish 11) in mind, a couple of UCI methods that don't work with the engine were not implemented. They are `debug()` and `register()`.
- Most methods return silently: since most UCI commands don't output anything or output boilerplate text, most methods of this class return silently. The exceptions are `run()`, `isready()`, `go()` and `stop()`: you can see exactly what they return by reading their documentation.
- Not every Stockfish option will work: at least when using the bundled version of Stockfish, not every documented option will work with `setoption()`. This happens because, as described above, this package comes with Stockfish 11, which is not the most recent version. Options that will not work are labeled with an asterisk.
- Times are in milliseconds: unlike most R functions, every method that takes a time interval expects it in milliseconds, not seconds.

Public fields

- `process`: Connection to the `{processx}` process running the engine
- `output`: String vector with the output of the last command
- `log`: String vector with all outputs from the engine

Methods

Public methods:

- `fish$new()`
- `fish$run()`
- `fish$uci()`
- `fish$isready()`
- `fish$setoption()`
- `fish$ucinewgame()`
- `fish$position()`
- `fish$go()`
- `fish$stop()`
- `fish$ponderhit()`
- `fish$quit()`
- `fish$print()`
- `fish$clone()`

**Method new()**: Start the Stockfish engine. By default, this function uses the bundled version of Stockfish. If you want to run a more recent version, you can pass its executable as an argument. For more information, see the Bundled Stockfish section of this documentation.
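As a minimal sketch of the engine lifecycle (an illustrative example only; it assumes the `{stockfish}` package is installed and its bundled engine compiled successfully):

```r
# Minimal lifecycle sketch; assumes the {stockfish} package is installed.
library(stockfish)

# Spawn the bundled Stockfish 11 process (pass a path to use another binary)
engine <- fish$new()

# Synchronize with the engine; returns TRUE once it replies "readyok"
engine$isready()

# Kill the detached engine process when done
engine$quit()
```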
Usage:

```r
fish$new(path = NULL)
```

Arguments:

- `path`: Path to a Stockfish executable (defaults to the bundled version)

**Method run()**: Send a command to be run on the engine. Every supported command is documented in the UCI protocol as publicized by Stefan-Meyer Kahlen. Please refrain from sending more than one command per call, as the engine can get confused! Also note that commands automatically get a newline ('\n') at the end, so there is no need to append that to the string itself.

Usage:

```r
fish$run(command, infinite = FALSE)
```

Arguments:

- `command`: A string with the command to run
- `infinite`: Whether the command involves `go infinite` (makes the function return instantly, as output should only be collected when a `stop()` command is issued)

Returns: A string vector with the output of the command, or `NULL`.

**Method uci()**: Tell the engine to use the UCI.

"Tell engine to use the uci (universal chess interface), this will be send once as a first command after program boot to tell the engine to switch to uci mode. After receiving the uci command the engine must identify itself with the 'id' command and sent the 'option' commands to tell the GUI which engine settings the engine supports if any. After that the engine should sent 'uciok' to acknowledge the uci mode. If no uciok is sent within a certain time period, the engine task will be killed by the GUI."

Usage:

```r
fish$uci()
```

**Method isready()**: Ask if the engine is ready for more commands.

"This is used to synchronize the engine with the GUI. When the GUI has sent a command or multiple commands that can take some time to complete, this command can be used to wait for the engine to be ready again or to ping the engine to find out if it is still alive. E.g. this should be sent after setting the path to the tablebases as this can take some time. This command is also required once before the engine is asked to do any search to wait for the engine to finish initializing.
This command must always be answered with 'readyok' and can be sent also when the engine is calculating in which case the engine should also immediately answer with 'readyok' without stopping the search."

Usage:

```r
fish$isready()
```

Returns: Boolean indicating whether the output is exactly "readyok".

**Method setoption()**: Change the internal parameters of the engine. All currently supported options (according to Stockfish's documentation) are listed below; note that those marked with an * require Stockfish 12 to work:

- **Threads**: the number of CPU threads used for searching a position. For best performance, set this equal to the number of CPU cores available.
- **Hash**: the size of the hash table in MB. It is recommended to set Hash after setting Threads.
- **Ponder**: let Stockfish ponder its next move while the opponent is thinking.
- **MultiPV**: output the N best lines (principal variations, PVs) when searching. Leave at 1 for best performance.
- **Use NNUE**: toggle between the NNUE and classical evaluation functions. If set to "true", the network parameters must be available to load from file (see also EvalFile), if they are not embedded in the binary.
- **EvalFile**: the name of the file of the NNUE evaluation parameters. Depending on the GUI the filename might have to include the full path to the folder/directory that contains the file. Other locations, such as the directory that contains the binary and the working directory, are also searched.
- **UCI_AnalyseMode**: an option handled by your GUI.
- **UCI_Chess960**: an option handled by your GUI. If true, Stockfish will play Chess960.
- **UCI_ShowWDL**: if enabled, show approximate WDL statistics as part of the engine output. These WDL numbers model expected game outcomes for a given evaluation and game ply for engine self-play at fishtest LTC conditions (60+0.6s per game).
- **UCI_LimitStrength**: enable weaker play aiming for an Elo rating as set by UCI_Elo.
This option overrides Skill Level.

- **UCI_Elo**: if enabled by UCI_LimitStrength, aim for an engine strength of the given Elo. This Elo rating has been calibrated at a time control of 60s+0.6s and anchored to CCRL 40/4.
- **Skill Level**: lower the Skill Level in order to make Stockfish play weaker (see also UCI_LimitStrength). Internally, MultiPV is enabled, and with a certain probability depending on the Skill Level a weaker move will be played.
- **SyzygyPath**: path to the folders/directories storing the Syzygy tablebase files. Multiple directories are to be separated by ";" on Windows and by ":" on Unix-based operating systems. Do not use spaces around the ";" or ":". Example: C:\tablebases\wdl345;C:\tablebases\wdl6;D:\tablebases\dtz345;D:\tablebases\dtz6. It is recommended to store .rtbw files on an SSD. There is no loss in storing the .rtbz files on a regular HD. It is recommended to verify all md5 checksums of the downloaded tablebase files (md5sum -c checksum.md5), as corruption will lead to engine crashes.
- **SyzygyProbeDepth**: minimum remaining search depth for which a position is probed. Set this option to a higher value to probe less aggressively if you experience too much slowdown (in terms of nps) due to TB probing.
- **Syzygy50MoveRule**: disable to let fifty-move rule draws detected by Syzygy tablebase probes count as wins or losses. This is useful for ICCF correspondence games.
- **SyzygyProbeLimit**: limit Syzygy tablebase probing to positions with at most this many pieces left (including kings and pawns).
- **Contempt**: a positive value for contempt favors middle game positions and avoids draws, effective for the classical evaluation only.
- **Analysis Contempt**: by default, contempt is set to prefer the side to move. Set this option to "White" or "Black" to analyse with contempt for that side, or "Off" to disable contempt.
- **Move Overhead**: assume a time delay of x ms due to network and GUI overheads.
This is useful to avoid losses on time in those cases.

- **Slow Mover**: lower values will make Stockfish take less time in games, higher values will make it think longer.
- **nodestime**: tells the engine to use nodes searched instead of wall time to account for elapsed time. Useful for engine testing.
- **Clear Hash**: clear the hash table.
- **Debug Log File**: write all communication to and from the engine into a text file.

"This is sent to the engine when the user wants to change the internal parameters of the engine. For the 'button' type no value is needed. One string will be sent for each parameter and this will only be sent when the engine is waiting. The name of the option in <id> should not be case sensitive and can includes spaces like also the value. The substrings 'value' and 'name' should be avoided in <id> and <x> to allow unambiguous parsing, for example do not use <name> = 'draw value'. Here are some strings for the example below:

- setoption name Nullmove value true
- setoption name Selectivity value 3
- setoption name Style value Risky
- setoption name Clear Hash
- setoption name NalimovPath value c:\chess\tb\4;c:\chess\tb\5"

Usage:

```r
fish$setoption(name, value = NULL)
```

Arguments:

- `name`: Name of the option
- `value`: Value to set (or `NULL` if the option doesn't need a value)

**Method ucinewgame()**: Tell the engine that the next search will be from a different game.

"This is sent to the engine when the next search (started with 'position' and 'go') will be from a different game. This can be a new game the engine should play or a new game it should analyse but also the next position from a testsuite with positions only. If the GUI hasn't sent a 'ucinewgame' before the first 'position' command, the engine shouldn't expect any further ucinewgame commands as the GUI is probably not supporting the ucinewgame command. So the engine should not rely on this command even though all new GUIs should support it.
As the engine's reaction to 'ucinewgame' can take some time the GUI should always send 'isready' after ucinewgame to wait for the engine to finish its operation."

Usage:

```r
fish$ucinewgame()
```

**Method position()**: Set up the position on the internal board. When passing a sequence of moves, use long algebraic notation (LAN) as described in the UCI Protocol section of this documentation.

"Set up the position described in fenstring on the internal board and play the moves on the internal chess board. if the game was played from the start position the string 'startpos' will be sent. Note: no 'new' command is needed. However, if this position is from a different game than the last position sent to the engine, the GUI should have sent a 'ucinewgame' inbetween."

Usage:

```r
fish$position(position = NULL, type = c("fen", "startpos"))
```

Arguments:

- `position`: String with the position (either a FEN or a sequence of moves separated by spaces)
- `type`: Either "fen" or "startpos", matching the content of `position`

**Method go()**: Start calculating on the current position.

"Start calculating on the current position set up with the 'position' command. There are a number of commands that can follow this command, all will be sent in the same string. If one command is not send its value should be interpreted as it would not influence the search.

- searchmoves: restrict search to this moves only. Example: after 'position startpos' and 'go infinite searchmoves e2e4 d2d4' the engine should only search the two moves e2e4 and d2d4 in the initial position.
- ponder: start searching in pondering mode. Do not exit the search in ponder mode, even if it's mate! This means that the last move sent in the position string is the ponder move. The engine can do what it wants to do, but after a 'ponderhit' command it should execute the suggested move to ponder on. This means that the ponder move sent by the GUI can be interpreted as a recommendation about which move to ponder.
However, if the engine decides to ponder on a different move, it should not display any mainlines as they are likely to be misinterpreted by the GUI because the GUI expects the engine to ponder on the suggested move.
- wtime: white has x msec left on the clock.
- btime: black has x msec left on the clock.
- winc: white increment per move in mseconds if x > 0.
- binc: black increment per move in mseconds if x > 0.
- movestogo: there are x moves to the next time control, this will only be sent if x > 0, if you don't get this and get the wtime and btime it's sudden death.
- depth: search x plies only.
- nodes: search x nodes only.
- mate: search for a mate in x moves.
- movetime: search exactly x mseconds.
- infinite: search until the 'stop' command. Do not exit the search without being told so in this mode!"

Usage:

```r
fish$go(
  searchmoves = NULL,
  ponder = NULL,
  wtime = NULL,
  btime = NULL,
  winc = NULL,
  binc = NULL,
  movestogo = NULL,
  depth = NULL,
  nodes = NULL,
  mate = NULL,
  movetime = NULL,
  infinite = FALSE
)
```

Arguments:

- `searchmoves`: A string with the only moves (separated by spaces) that should be searched
- `ponder`: Pondering move (see the UCI documentation for more information)
- `wtime`: Time (in ms) white has left on the clock (if `movestogo` is not set, it's sudden death)
- `btime`: Time (in ms) black has left on the clock (if `movestogo` is not set, it's sudden death)
- `winc`: White increment (in ms) per move if `wtime > 0`
- `binc`: Black increment (in ms) per move if `btime > 0`
- `movestogo`: Number of moves to the next time control
- `depth`: Maximum number of plies to search
- `nodes`: Maximum number of nodes to search
- `mate`: Search for a mate in this number of moves
- `movetime`: Time (in ms) allowed for searching
- `infinite`: Whether to only stop searching when a `stop()` command is issued (makes the function return instantly without any output)

Returns: A string with the result of the search, or `NULL` if `infinite == TRUE`.

**Method stop()**: Stop calculating as soon as possible.
"Stop calculating as soon as possible, don't forget the 'bestmove' and possibly the 'ponder' token when finishing the search."

Usage:

```r
fish$stop()
```

Returns: A string with the result of the search, or `NULL` if there was no search underway.

**Method ponderhit()**: Tell the engine that the user has played the expected move.

"The user has played the expected move. This will be sent if the engine was told to ponder on the same move the user has played. The engine should continue searching but switch from pondering to normal search."

Usage:

```r
fish$ponderhit()
```

**Method quit()**: Kill the engine.

"Quit the program as soon as possible."

Usage:

```r
fish$quit()
```

**Method print()**: Print information about the engine process.

Usage:

```r
fish$print(...)
```

Arguments:

- `...`: Arguments passed on to `print()`

**Method clone()**: The objects of this class are cloneable with this method.

Usage:

```r
fish$clone(deep = FALSE)
```

Arguments:

- `deep`: Whether to make a deep clone.

fish_find

Description

This package comes bundled with Stockfish 11, an open source, powerful UCI chess engine. For more information about what Stockfish is, see the full documentation of this package by running `?fish`.

Usage

```r
fish_find()
```

Index

- fish
- fish_find
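Tying the methods above together, a hedged end-to-end sketch (assuming `{stockfish}` is installed with its bundled engine; the engine's exact output varies between versions, so none is shown):

```r
# End-to-end sketch of the documented API; assumes {stockfish} is installed.
library(stockfish)

engine <- fish$new()      # spawn the bundled engine
engine$uci()              # switch it to UCI mode
engine$ucinewgame()       # next search is from a fresh game
engine$isready()          # wait until the engine replies "readyok"

# Start position plus two moves in long algebraic notation
engine$position("e2e4 e7e5", type = "startpos")

# Search 10 plies; remember that time arguments are milliseconds, not seconds
engine$go(depth = 10)     # returns a string such as "bestmove ..."

engine$quit()             # kill the engine process
```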
Continuous Integration for Commitfests
Testing all the patches all the time

Thomas Munro, PGCon 2018, Ottawa

$ whoami

- PostgreSQL hacker at EnterpriseDB (~3 years)
- Some things I've worked on: Parallel Hash Join, various parallel query infrastructure, transition tables for triggers (sous-chef), remote_apply, replay_lag, SKIP LOCKED, various portability stuff

<table> <thead> <tr> <th>ID</th> <th>Description</th> <th>Author(s)</th> <th>Status</th> <th>Windows Passing</th> <th>Linux Passing</th> </tr> </thead> <tbody>
<tr> <td>18/1500</td> <td>csv output format for psql</td> <td>Daniel Vérité</td> <td>patch</td> <td>✓</td> <td>✗</td> </tr>
<tr> <td>18/1619</td> <td>de-deduplicate code in DML execution hooks in postgres_fdw</td> <td>Ashutosh Bapat</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1591</td> <td>fix constraint exclusion failure for certain partition key types</td> <td>Amit Langote</td> <td>patch</td> <td>✓</td> <td>✗</td> </tr>
<tr> <td>18/1620</td> <td>generalized expression syntax for partition bounds</td> <td>Tom Lane, Kyotaro Horiguchi, Amit Langote</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1640</td> <td>generate bootstrap entries for array types</td> <td>John Naylor</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1536</td> <td>get rid of StdRdOptions, use individual binary reoptions representation for each...</td> <td>Nikolay Shaplov</td> <td>patch</td> <td>✓</td> <td>✗</td> </tr>
<tr> <td>18/1631</td> <td>hostorder and failover_timeout for libpq</td> <td>Ildar Musin</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1587</td> <td>kNN for SP-GIST</td> <td>Nikita Glukhov</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1518</td> <td>lc_messages parameter doesn't work on Windows</td> <td>Aleksandr Parfenov</td> <td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1635</td> <td>libpq compression</td> <td>Konstantin Knizhnik</td>
<td>patch</td> <td>✓ ✓ ✓</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1490</td> <td>macOS Secure Transport SSL Support</td> <td>Daniel Gustafsson</td> <td>patch</td> <td>✓</td> <td>✗</td> </tr>
</tbody> </table>

- List of current proposed patches
- Does the patch apply, do the tests pass on Windows, do the tests pass on Linux?
- Recent changes highlighted

<table> <thead> <tr> <th>Issue</th> <th>Description</th> <th>Author(s)</th> <th>Status</th> </tr> </thead> <tbody>
<tr> <td>18/1491</td> <td>Handling better supported channel binding types for SSL implementations</td> <td>Michael Paquier</td> <td></td> </tr>
<tr> <td>18/1204</td> <td>Implement NULL-related checks in object address functions to prevent cache lookups...</td> <td>Michael Paquier</td> <td></td> </tr>
<tr> <td>18/1642</td> <td>Fix some error handling for read() and errno</td> <td>Michael Paquier</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/528</td> <td>Fix the optimization to skip WAL-logging on table created in same transaction</td> <td>Heikki Linnakangas, Michael Paquier, Kyotaro Horiguchi</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1633</td> <td>Make description of heap records more talkative for flags</td> <td>Michael Paquier</td> <td>✓ ✓ ✓</td> </tr>
<tr> <td>18/1609</td> <td>Simplify final sync in pg_rewind's target folder and add --no-sync</td> <td>Michael Paquier</td> <td></td> </tr>
<tr> <td>18/1630</td> <td>Temporary WAL segments files not cleaned up after an instance crash</td> <td>Michael Paquier</td> <td></td> </tr>
</tbody> </table>

Per author view

<table> <thead> <tr> <th>Pull Request</th> <th>Title</th> <th>Author</th> <th>Status</th> </tr> </thead> <tbody>
<tr> <td>18/669</td> <td>pgbench - allow to store query results into variables</td> <td>Fabien Coelho</td> <td>✗</td> </tr>
<tr> <td>18/1388</td> <td>pgbench - option to build using ppoll() for larger connection counts</td> <td>doug rady</td> <td>✔️</td> </tr>
<tr> <td>18/1389</td> <td>verify ALTER TABLE SET NOT NULL by valid constraints</td> <td>Sergei
Kornilov</td> <td>✔️</td> </tr>
</tbody> </table>

```
5839 Build FAILED.
5840 "C:\projects\postgresql\pgsql.sln" (default target) (1) ->
5841 "C:\projects\postgresql\pgbench.vcxproj" (default target) (72) ->
5842 (ClCompile target) ->
5843 src/bin/pgbench/pgbench.c(6002): warning C4033: 'ignore_socket' must return a value
5844 [C:\projects\postgresql\pgbench.vcxproj]
5845 src/bin/pgbench/pgbench.c(6120): warning C4013: 'ppoll' undefined; assuming extern returning int
5846 [C:\projects\postgresql\pgbench.vcxproj]
5847 [C:\projects\postgresql\pgbench.vcxproj]
5848 "C:\projects\postgresql\pgsql.sln" (default target) (1) ->
5849 "C:\projects\postgresql\pgbench.vcxproj" (default target) (72) ->
5850 (ClCompile target) ->
5851 src/bin/pgbench/pgbench.c(6004): error C2027: use of undefined type 'pollfd'
5852 src/bin/pgbench/pgbench.c(6000): error C2036: 'socket_set:*' unknown size
5853 [C:\projects\postgresql\pgbench.vcxproj]
5854 [C:\projects\postgresql\pgbench.vcxproj]
5855 src/bin/pgbench/pgbench.c(6000): error C2036: 'socket_set:*' unknown size
5856 [C:\projects\postgresql\pgbench.vcxproj]
5857 src/bin/pgbench/pgbench.c(6000): error C2037: 'fd' specifies undefined object 'union pollfd'
5858 src/bin/pgbench/pgbench.c(6000): error C2037: left of 'revents' specifies undefined object 'union pollfd'
5859 src/bin/pgbench/pgbench.c(6000): error C2065: 'POLLRDHUP': undeclared identifier
5860 src/bin/pgbench/pgbench.c(6000): error C2065: 'POLLRDHUP': undeclared identifier

6605 t/090_reindexdb.pl ........ ok
6606 t/091_reindexdb_all.pl .... ok
6607 t/100_vacuumdb.pl ......... ok
6608 t/101_vacuumdb_all.pl ..... ok
6609 t/102_vacuumdb_stages.pl .. ok
6610 t/290_connstr.pl .......... ok
6611 All tests successful.
```

[Image: Linux and Windows logos with Tux]
```
Command exited with code 1
perl dumprepr.pl
== $path ==
C:/projects/postgresql/src/test/regress/expected/psql.out  Thu May 17 05:10:50 2018
C:/projects/postgresql/src/test/regress/results/psql.out   Thu May 17 05:21:38 2018
**********
*** 261,267 ****
  border          1
  columns         0
  expanded        off
  fieldsep        ''
  fieldsep_zero   on
  footer          on
  format          aligned
--- 261,267 ----
  border          1
  columns         0
  expanded        off
  fieldsep        not used
  fieldsep_zero   off
  footer          on
  format          aligned
```

Motivation

pgsql-hackers@postgresql.org

- ~140 people contributing code
- ~500 people contributing to discussions
- Up to ~250 proposed patches in consideration at a time

4 times a year, patches are reviewed and committed in a month-long 'commitfest'. Patch submission and review is done entirely through the pgsql-hackers, pgsql-bugs and pgsql-committers mailing lists. Patches are tracked through the commitfest.postgresql.org web app; registering a thread in the CF app is approximately like making a 'pull request' in many other projects.

[Chart: Patch inflation (moved / committed / returned / rejected)]
[Chart: Welcome, new contributors (distinct patch authors)]
[Chart: How long do patches live? Age (no. of commitfests) of patches that reached final state in CF 2018-03]

Reviewer & committer bandwidth is precious

Automatically discoverable problems

- Bitrot: please rebase!
- Other compilers are pickier than yours
- Tests fail (maybe with obscure build options or full TAP tests)
- Portability bugs (endianness, word size, OS, libraries)
- Uninitialised data, race conditions, …
- Documentation is broken

Build farm

- The build farm will find some of these problems automatically
- ... but that happens after commit, and consumes committer time and energy
- People will shout at you — ask me how I know
- Let's apply some of that sort of automation to proposals, during the review phase

Implementation

This time last year

- Daily cronjob to check for bitrot in time for morning coffee
- Various experiments with executing tests, but … how safe is that?
```
From: Cron Daemon <munro@asterix>
Subject: Cron <munro@asterix> /home/munro/patches/patchmon.sh

7 out of 8 hunks failed while patching src/backend/libpq/auth.c
Failed to apply /home/munro/patches/ldap-diagnostic-message-v3.patch
1 out of 2 hunks failed while patching configure
1 out of 2 hunks failed while patching configure.in
Failed to apply /home/munro/patches/kqueue-v7.patch
```

Let's execute random code from the internet… What could possibly go wrong?

`patch -p1 < foo.patch`:

- CVE-2018-1000156
- CVE-2016-10713
- CVE-2015-1418
- CVE-2015-1416
- CVE-2015-1395
- CVE-2015-1196
- CVE-2014-9637
- CVE-2010-4651
- patch: runs arbitrary shell commands
- patch: writes to files outside the target source tree
- patch: denial of service

Step 1: Quarantine and apply

1. Pristine source tree, patch tools → cloned ZFS filesystem
2. Patches
3. Apply patches in jail
4. Push branch to GitHub as commitfest/18/1234
5. Destroy jail, filesystem

github.com/postgresql-cfbot/postgresql

Default branch: master, updated 5 months ago by tglfsfdc

Active branches:

- commitfest/16/1394: updated 5 minutes ago by Commitfest B…
- commitfest/16/1389: updated 10 minutes ago by Commitfest …
- commitfest/16/1388: updated 15 minutes ago by Commitfest …
- commitfest/16/1386: updated 25 minutes ago by Commitfest …

[CF 16/1386] Atomic pgrename on Windows

This commit was
automatically generated by cfbot at commitfest.cputube.org. It is based on patches submitted to the PostgreSQL mailing lists and registered in the PostgreSQL Commitfest application. This branch will be overwritten each time a new patch version is posted to the email thread or the master branch changes.

Commitfest entry: https://commitfest.postgresql.org/16/1386
Patch(es): https://www.postgresql.org/message-id/CAPpHfds7duGZt%2BPf2GL9qSSVv00ZnjNwqiCPjN7mirDw882tA%40mail.gmail.com
Author(s): Alexander Korotkov

Commitfest Bot committed 26 minutes ago
1 parent 1cc4f53, commit 5c2270755ec14258ba7209a53dc177ddc91552a5
Showing 3 changed files with 77 additions and 3 deletions:

- .travis.yml
- src/backend/postmaster/pgstat.c
- src/port/dirmod.c

## Step 2: Build and test

- Many wonderful, generous, free-for-open-source build-bot providers
- Running untrusted code in throw-away virtual machine images is their core business
- travis-ci.org for Ubuntu and macOS, appveyor.com for Windows… there are many more
- Friendly result pages and APIs

## How to

- Tell travis-ci.org, appveyor.com, … to watch your github.com, bitbucket.com, … public source repository and build any branch with a control file in it
- Add the control file to your branch (.travis.yml, appveyor.yml etc. as appropriate): `script: ./configure ... && make -j4 && make check`
- This is a nice way to test your branches before you submit patches, and it can send you emails, provide 'badges' for your web page, tell your IRC channel, release homing pigeons etc.
- This talk is about plugging an old-school mailing list workflow into this technology!
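The "quarantine and apply" idea from Step 1 can be sketched in a few lines of shell. This is a minimal stand-in, not cfbot's actual script: the real pipeline applies patches inside a FreeBSD jail on a cloned ZFS filesystem, while this demo uses a throw-away temp directory, and the tiny tree and patch below are invented for illustration.

```shell
#!/bin/sh
# Sketch: never run `patch` against a tree you care about. Apply into a
# disposable copy, record only the verdict, then destroy the whole area.
set -e

work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

# A stand-in for the pristine source tree.
mkdir "$work/src"
printf 'hello\n' > "$work/src/README"

# A stand-in for a patch posted to the mailing list.
cat > "$work/fix.patch" <<'EOF'
--- a/README
+++ b/README
@@ -1 +1 @@
-hello
+hello, world
EOF

# Apply inside a disposable clone; --batch stops GNU patch from prompting.
cp -r "$work/src" "$work/clone"
if (cd "$work/clone" && patch -p1 --batch < ../fix.patch) >/dev/null; then
    echo clean > patch_status    # here cfbot would push a branch to GitHub
else
    echo bitrot > patch_status   # here cfbot would report the failure
fi
# The trap destroys the whole quarantine area either way.
```

The verdict lands in `patch_status`; nothing from the untrusted patch survives outside the quarantine directory, which is the point of the exercise.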
## cfbot information flow

(diagram: data flows between git.postgresql.org, commitfest.postgresql.org and archives.postgresql.org on one side, and GitHub, Travis CI and AppVeyor CI on the other)

(screenshot: Travis CI branch list, e.g. build #28281 for commitfest/17/1474 passed 4 minutes ago; build #28280 for commitfest/17/1470 passed 8 minutes ago; builds #28282-#28284 for commitfest/17/1476, commitfest/17/1478 and commitfest/16/1479 still running)

## Step 3: Collect results

- CI providers have APIs where you can collect the results
- Collecting them in a small database allows consolidated reporting in one place
- You can also browse results directly at CI websites

## Active battles

Windows:

- Currently able to run `make check` on appveyor.com CI, but the tablespace test fails so I just exclude it
- Not yet attempting to run `check-world`
- If you know how to fix this, please see me after; I will pay you in beer

Rare transient false negatives:

- Coverage .gcda files getting trampled on by multiple backends (later GCC will fix that)
- Failure to fetch "winflexbison" from sf.net
- Failure to fetch XSL files from oasis-open.org, sf.net
- Timeout of crash-restart TAP test: undiagnosed!
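Step 3's "small database" can be as modest as a SQLite file fed from the CI providers' result APIs. A sketch, with an invented schema and made-up sample rows (the talk does not show cfbot's real schema; in practice each row would come from a Travis or AppVeyor API call):

```shell
#!/bin/sh
# Sketch: consolidate CI results in a small SQLite database so that one
# query answers "which branches are fully green?". Schema and rows are
# illustrative only.
set -e
rm -f results.db

sqlite3 results.db <<'SQL'
CREATE TABLE build_result (
  branch   TEXT,   -- e.g. commitfest/17/1474
  provider TEXT,   -- travis or appveyor
  status   TEXT    -- passed, failed, started, ...
);
INSERT INTO build_result VALUES
  ('commitfest/17/1474', 'travis',   'passed'),
  ('commitfest/17/1474', 'appveyor', 'passed'),
  ('commitfest/17/1470', 'travis',   'passed'),
  ('commitfest/17/1470', 'appveyor', 'failed');
SQL

# Consolidated report: branches where every provider reported "passed".
sqlite3 results.db "SELECT branch FROM build_result
                    GROUP BY branch
                    HAVING sum(status <> 'passed') = 0;"
```

With the sample rows above, the final query reports only `commitfest/17/1474`, since `commitfest/17/1470` has a failing AppVeyor build.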
## Plans for the future

### Commitfest 2018-09: Active patches

<table>
<thead>
<tr>
<th>Patch</th>
<th>Status</th>
<th>Author</th>
<th>Reviewers</th>
<th>Committer</th>
<th>Latest activity</th>
<th>Latest mail</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fix the optimization to skip WAL-logging on table created in same transaction</td>
<td>Needs review</td>
<td>Heikki Linnakangas (heikki), Michael Paquier (michael-kun), Kyotaro Horiguchi (horiguti)</td>
<td>Michael Paquier (michael-kun), Kyotaro Horiguchi (horiguti), satyanarayana Nariapuram (snarlap)</td>
<td></td>
<td>2018-04-10 18:17</td>
<td>2018-03-30 01:06</td>
</tr>
<tr>
<td>Fix a bug that can prevent standby from restarting</td>
<td>Needs review</td>
<td>Kyotaro Horiguchi (horiguti)</td>
<td>Michael Paquier (michael-kun)</td>
<td></td>
<td>2018-04-10 18:17</td>
<td>2018-05-14 06:59</td>
</tr>
<tr>
<td>Replication status in logical replication</td>
<td>Ready for Committer</td>
<td>Masahiko Sawada (masahikosawada)</td>
<td>Simon Riggs (simon), Petr Jelinek (pjmodos), vaishnavi prabakaran (vaishnavip)</td>
<td>simon</td>
<td>2018-04-10 18:18</td>
<td></td>
</tr>
</tbody>
</table>

- Run Coverity and other static analysis tools?
- Run Valgrind, Clang asan etc. to look for bugs?
- Add a big-endian, 32-bit, non-Linux system for maximum portability bug detection with one stone?
- Display built documentation for review?
- Make Travis/AppVeyor fetch and apply patches themselves?
- Put .travis.yml, .appveyor.yml files in the tree?
- Andreas Seltenreich's SQLsmith?
- Code coverage report? (that is, reinstate it)
- Automated performance testing…?

## Questions, ideas?

- Thanks to Andres Freund, Dagfinn Ilmari Mannsåker, Andrew Dunstan, Peter van Hardenberg, Oli Bridgman for ideas and scripting improvements
- Thanks to Travis CI and AppVeyor CI for supporting open source
- Thanks to pgsql-hackers for all the patches
Comparative research about high failure rate of IT projects and opportunities to improve

Giuseppe Arcidiacono, PMP, CGEIT, CISM, CISA, CRISC

ABSTRACT

This paper studies the causes that lead to failure in IT projects by analyzing some of the most interesting statistics retrieved from the current literature around the world. The analysis aims to highlight weaknesses and opportunities to improve, with the objective of giving practitioners suggestions and tips to raise projects' success rate. For every source, the present document reports in a table the weaknesses identified by the original researcher, together with considerations and related opportunities that emerged from his/her work.

INTRODUCTION

In a publication issued in 2009, International Data Corporation states that Project Management is the main cause (with a weight of 54%) leading to the following disaster scenario: almost 25% of IT projects experience outright failure, 50% of projects require material rework and 20-25% of them do not provide Return On Investment (ROI). The factors that influence the success or failure of a project are:

- Project Management (54%): the activities defining and controlling the IT project
- Business (21%): the aspects of the project dealing with project funding, internal rate of return and business data
- People (14%): the team that carries out the IT project
- Method (8%): the dimension involving approach, procedures and tools
- Technical (3%)

Despite its long tradition in the IT field, Project Management has not achieved an appropriate degree of maturity, so most challenges are directly associated with how a project is managed through all phases of its lifecycle. The present paper analyzes some scientific articles, papers and books on this issue in order to obtain a big picture and give practitioners some tips and suggestions.
The seven reasons of Joseph Gulla

The IT leader of Alazar Press and former executive IT specialist at IBM, Joseph Gulla, highlights on http://www.ibmsystemsmag.com/ seven reasons involved in project failures, tied to the same number of opportunities to improve. His opinion is reported in the following table:

<table>
<thead>
<tr>
<th>WEAKNESS</th>
<th>CONSIDERATIONS AND OPPORTUNITIES</th>
</tr>
</thead>
<tbody>
<tr>
<td>Poor Project Planning and Direction</td>
<td>Improving project planning and direction is one of the key factors in IT project success. This requires a method made up of rules, processes and tools for project planning and management, supported by a software tool. It's important to remember the Four Ps (pilot, phase, parallel and plunge) and, certainly, don't plunge under any circumstance. A vital part of planning is to assign the right people to the right task and make clear assignments to team members, with defined goals and responsibilities. When assignments don't work out, adjust roles as necessary.</td>
</tr>
<tr>
<td>Insufficient Communication</td>
<td>Objective status reports, frequent contact with sponsors and business users, and involvement of such external parties as the hardware vendor are crucial to avoiding the communication breakdowns that can derail IT projects. Simple actions matter, such as organized agendas, minutes, action items and information-push emails. Agendas force the project manager running the meeting to organize the time and supply preliminary materials. The thinking and preparation that goes into creating the agenda are more important than the agenda itself. Also, mix up the way the message is delivered, especially for executive reviews. Using the same status presentation repeatedly might be an efficient method, but it could also be missing the diversity necessary to keep executives interested in the story behind the status.</td>
</tr>
<tr>
<td>Ineffective Management</td>
<td>Sidestep this pitfall by proactively managing changing objectives, goals and risks, coordinating efforts between the technology and finance departments, and measuring performance. Implement a straightforward change-management process with estimating and approval steps. This should be a lightweight process, but one that also allows management to understand the impact of changing requirements on the project. Utilize a risk-management assessment tool to uncover risks that must be addressed during and after the project. Enlist a finance representative on the team and formalize a business case. Lastly, identify discrete performance measurements, like planned and actual task starts and completes, and include them in the status reporting.</td>
</tr>
<tr>
<td>Failure to Align With Constituents and Stakeholders</td>
<td>Building understanding and trust with constituents and stakeholders is essential to a successful outcome, particularly when these groups are in different organizations and might have varied measurements and motivations. For greater alignment, target specific initiatives to ensure interlock and communication with stakeholders. This can be done through input-gathering meetings, communication to push information and activities to get sign-offs on work products. Early in the IT project, it's useful to have at least one face-to-face meeting with key stakeholders and team members. A well-planned kickoff meeting, where relationships are developed, will support the project in later months.</td>
</tr>
<tr>
<td>Ineffective Involvement of Executive Management</td>
<td>The participation of an executive sponsor in key operational working sessions is crucial to establish priorities. Project kickoff is the best first meeting, but it doesn't end there. Executive involvement must be targeted for specific status meetings to monitor project progress, particularly in meetings where go/no-go decisions must be solicited.</td>
</tr>
<tr>
<td>Lack of Soft Skills or the Ability to Adapt</td>
<td>To prevent a situation in which team members lack the necessary skills for the project, utilize a mentoring approach for less-experienced employees. Also, include required education in the overall project schedule. Actively recruit skilled personnel through internal and external routes like jobs systems. A good outcome will not result without sufficiently skilled people.</td>
</tr>
<tr>
<td>Poor or Missing Methodology and Tools</td>
<td>Successful projects are based on a methodology or framework that includes project-management tools. This approach can increase accuracy and save time by automating activities like task tracking. Maintaining a simple, organic methodology can have significant payback on a project. It should include the following: set up an electronic project notebook; establish written objectives for the project; work with the technical lead to establish tasks within phases; ask team members to estimate the time and number of tasks required. Create a formal project plan and manage to it, including basic change control. Proactively solve problems that may arise.</td>
</tr>
</tbody>
</table>

FINAL CONSIDERATIONS

Improving the success rate of IT projects is possible by putting significantly more focus on general-management activities. It can be daunting at the onset of a project to know the odds indicate major retoolings or even outright failure. But with accurate planning, defined goals, clear assignments and effective communication, proactive managers can overcome those odds to master even the most challenging project. Simply knowing where potential pitfalls lie can help prevent backlogs and costly delays in the future.
The results of the "State of Project Management" survey

The "State of Project Management" annual survey, published by Wellingtone Project Management and the Association for Project Management (APM) Project Management Office (PMO) Specific Interest Group (SIG), reports that many IT project managers do not apply a consistent project management approach, which is a strong indicator of project management maturity. The survey aims to answer questions such as "are all projects of a similar size & scale run in a similar way, or is the approach driven by the selected Project Manager?". For this reason, participants were asked a series of questions about the application of consistent methods and their frequency of use. The most evident problems that emerged from participants' answers are the following:

<table>
<thead>
<tr>
<th>WEAKNESS</th>
<th>CONSIDERATIONS AND OPPORTUNITIES</th>
</tr>
</thead>
<tbody>
<tr>
<td>Over 34% of project schedules are not baselined</td>
<td>Baselining should occur at the end of the Planning Stage as part of approval to proceed. Actual progress can then be tracked against the baseline plan. PMs should not only know where they are in the plan, but also where they are compared to where they should be.</td>
</tr>
<tr>
<td>25% of PMs never or sometimes prepare a standard scoping document</td>
<td>Effective scope management requires good and clear communication, as this ensures that members on the team understand the scope of the project while agreeing on how the project goals will be met. Scope management helps avoid the challenges that a project might face with bloating scope and an unruly requirements list. Project scope clearly sets out what is or is not included in the project, and controls what gets added or removed as the project is executed. Scope management establishes control mechanisms to address factors that may result in changes during the project life-cycle. Without defining the project scope, the cost or time that the project will take up cannot be estimated. At times, due to a lack of communication, scope may need to change. This directly affects the cost and disturbs the schedule of the project, causing losses.</td>
</tr>
<tr>
<td>25% of PMs never or sometimes engage in project risk management</td>
<td>Risk management plans contribute to project success by establishing a list of internal and external risks. This plan typically includes the identified risks, probability of occurrence, potential impact and proposed actions. Low-risk events usually have little or no impact on cost, schedule or performance. Moderate risk causes some increase in cost, disruption of schedule or degradation of performance. High-risk events are likely to cause a significant increase in the budget, disruption of the schedule or performance problems.</td>
</tr>
</tbody>
</table>

FINAL CONSIDERATIONS

The significant numbers that emerged from the survey demonstrate that even across professionally qualified, experienced PMs there is significant room for improvement.

PMI's 2016 "Pulse of the Profession"

PMI's 2016 "Pulse of the Profession" shows that, compared to the previous year:

- Fewer projects are being completed within budget or meeting original goals and business intent;
- More projects are actually failing and creating significant monetary loss for their organizations.

<table>
<thead>
<tr>
<th>WEAKNESS</th>
<th>CONSIDERATIONS AND OPPORTUNITIES</th>
</tr>
</thead>
<tbody>
<tr>
<td>According to the cited work, for every $1 billion invested in the United States, $122 million was wasted due to lacking project performance</td>
<td>Organizations that place a high priority on creating a culture that recognizes its importance as a driver of better project performance report 71 percent of projects meeting original goals and business intent, versus 52 percent for those that place a low priority on it.</td>
</tr>
</tbody>
</table>

Doomed From the Start?

The study, entitled "Doomed From the Start?
Why a Majority of Business and IT Teams Anticipate Their Software Development Projects Will Fail", examines why teams continue to struggle to meet the business expectations for their projects. It surveys participants on such topics as requirements definition, accountability, and measuring project success. "There is no question that the overall survey results shows that our single biggest performance improvement opportunity is to have a more business-centric approach to requirements," states Geneca President & CEO Joel Basgall. "Unfortunately, poor requirements definition practices have become so common that they're almost tolerated. The gloomy results of this survey really drive this home." Interestingly, survey responses from IT professionals and their business counterparts are fairly similar, indicating that both groups have many of the same concerns with regard to their projects.

<table>
<thead>
<tr>
<th>WEAKNESS</th>
<th>CONSIDERATIONS AND OPPORTUNITIES</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lack of confidence in project success: 75% of respondents admit that their projects are either always or usually "doomed right from the start."</td>
<td>The perception is that challenges start at the beginning of a project and reflect difficulty in defining project success. This carries forward to IT and has impact throughout the rest of the project.</td>
</tr>
<tr>
<td>Rework wariness: 80% admit they spend at least half their time on rework.</td>
<td>Extensive rework usually occurs when an organization devalues the need for full and complete requirements capture. This mindset usually leads to increasingly expensive fixes throughout the project because requirements are not properly defined and accounted for up front. Other factors leading to rework include a lack of commitment by the business to the details of what they need. There may also be a lack of consensus on requirements from all the areas of the organization that touch the solution.</td>
</tr>
<tr>
<td>Business involvement is inconsistent or results in confusion: 78% feel the business is usually or always out of sync with project requirements, and business stakeholders need to be more involved and engaged in the requirements process.</td>
<td>Unfortunately, in some organizations, requirements definition is considered exclusively an IT responsibility, rather than a joint responsibility with the business. Sometimes confusion results when the business stakeholders believe they are talking about the same thing and do not realize they're actually in disagreement. When this disparate information is given to IT, the outcomes are almost always disappointing. This ambiguity contributes to the problem of confusing business input.</td>
</tr>
<tr>
<td>Fuzzy business objectives: Only 55% feel that the business objectives of their projects are clear to them.</td>
<td>Too often business objectives are too broad (such as "I want to launch this product in Q4" or "I expect this kind of return") and do not always take into account the "on the ground" realities of how business actually works. As a result, commitments get made without knowing how feasible it is to actually get the job done.</td>
</tr>
<tr>
<td>Confusion around roles and accountabilities: The greatest barriers to success are seen as confusion around team roles and responsibilities (by 38%) and as a lack of common vision on project success criteria (by 31%).</td>
<td>Most IT organizations can find room to improve practices in this area. While most of the project team knows who is providing requirements and writing specs, a bigger challenge is to identify who is accountable to clarify objectives and create alignment. On most projects, it is the role of the business to own the bigger picture and address the business problems. It is IT's responsibility to implement and execute on the problems. Opportunities realized or opportunities lost are often linked directly to the level of commitment to clarity around these roles.</td>
</tr>
<tr>
<td>Requirements definition processes do not reflect business need: Less than 20% describe the requirements process as the articulation of business need.</td>
<td>Project success necessitates an upfront requirements definition process that reflects a common vision of what the business needs built, the process activities and business scenarios to make that vision a reality, and specifications that translate what business users need to do their job into something that can be built by the team.</td>
</tr>
<tr>
<td>Lack of complete agreement when projects are done: Only 23% state they are always in agreement when a project is truly done.</td>
<td>The root cause is ambiguity and confusion around project expectations. It also reflects a lack of metrics that communicate progress and realized value in terms the business understands and appreciates. This can be resolved by consistently measuring progress based on delivered business value rather than technical components.</td>
</tr>
</tbody>
</table>

FINAL CONSIDERATIONS

"Although most software projects begin with high expectations, this research reminds us that problems usually lurk below the surface right from the start," states Basgall. "The key is to understand what we are seeing and what to do about it."

Disturbing Reality of Today's PMOs

In the webinar on the Disturbing Reality of Today's PMOs, KeyedIn Solutions' team ties project failure to the Program Management Offices' difficulties in today's enterprise.
In particular, the webinar reported the following shocking statistics and reasons (retrieved from several authoritative sources) why Program Management Offices fail:

<table>
<thead>
<tr>
<th>Problem</th>
<th>Source</th>
</tr>
</thead>
<tbody>
<tr>
<td>75% of project management offices close within 3 years</td>
<td>PMI 2015 Annual Report <a href="http://media.corporate-ir.net/media_files/IROL/14/146476/files/pdf/PMI_2015AR_CompleteAnnualReport-r.pdf">link</a></td>
</tr>
<tr>
<td>Since 2008, the correlated PMO implementation failure rate is over 50%</td>
<td>Gartner Project Manager 2014 <a href="http://www.gartner.com/imagesrv/summits/docs/na/program-management/PPM-2014-Trip-Report.pdf">link</a></td>
</tr>
<tr>
<td>Only a third of all projects were successfully completed on time and on budget over the past year</td>
<td>Standish Group's CHAOS report <a href="https://www.versionone.com/assets/img/files/CHAOSManifesto2013.pdf">link</a></td>
</tr>
<tr>
<td>68% of stakeholders perceive their PMOs to be bureaucratic</td>
<td>2013 Gartner PPM Summit <a href="http://www.gartner.com/imagesrv/summits/docs/emea/program-management/PPM-2013-Brochure.pdf">link</a></td>
</tr>
<tr>
<td>Only 40% of projects met schedule, budget and quality goals</td>
<td>MAKING CHANGE WORK - IBM Change Management Survey of 1500 execs <a href="https://www-07.ibm.com/au/pdf/making_change_work.pdf">link</a></td>
</tr>
</tbody>
</table>

FINAL CONSIDERATIONS

One of the overriding issues fueling these poor stats is the fact that in many organizations there is a wide gap between what the PMO is doing and what the business expects. This misalignment has serious consequences for both the PMO and the business. As pointed out by the first two rows, the failure rate is very high, with about half of all PMOs closed within 3 years or considered implementation failures.
And as Gartner points out, backed up by many other industry statistics, there is not a lot of good news when it comes to how the PMO and project teams are considered by the rest of the organization. An amazing 68% of stakeholders perceive their PMOs to be bureaucratic, and only 40% of projects met their goals when it comes to schedule, budget and quality. For many, this misalignment is caused by a lack of adequate tools, as well as by gaps in the 4 fundamental pillars that drive program success: resources, money, deliverables and benefits.

Conclusions

The IT project failure rate is still too high, and Project Management seems to be the root cause. This paper highlights that there is no consensus among practitioners about the reasons why Project Management leads to project failure, but that, at the same time, it is important to understand that failures are strictly tied to opportunities to learn and improve. Each failure should be investigated and lessons drawn for future project executions. Knowledge about failure will strengthen an organization's project management execution and practice.

References

- Adam Alami (2016), "Why Do Information Technology Projects Fail?", ScienceDirect
- Project Management Institute (2016), "The High Cost of Low Performance"
- Joseph Gulla (2012), "Seven Reasons IT Projects Fail"

About the Author

Giuseppe Arcidiacono
Italy

Giuseppe Arcidiacono, PMP, CISM, CISA, CGEIT, CRISC, is an Italian computer engineer and a member of PMI and ISACA. Graduated with honours from the University of Calabria, Giuseppe holds two post-graduate master's degrees, in "Public Management" and in "Governance, Audit and Control System for Public and Private Organizations".
He holds the following international certifications:

- PMP – Project Management Professional – PMI (Project Management Institute);
- CISA – Certified Information System Auditor – ISACA;
- CISM – Certified Information Security Manager – ISACA;
- CGEIT – Certified in Governance of Enterprise IT;
- CRISC – Certified in Risk and Information Systems Control.

Giuseppe has been working in Project Management since 2003 and is specialized in the European Commission Project Management Framework based on PCM (Project Cycle Management). Giuseppe is the author of several scientific articles about Governance of Enterprise IT (GEIT), Information Security Management and Audit, and Project Management, published in prestigious international journals and blogs.

Since 2008, Giuseppe has been the Head of the IT Department in ARCEA, Agenzia della Regione Calabria per le Erogazioni in Agricoltura (http://www.arcea.it), a European Commission accredited Paying Agency (pursuant to Commission Regulation (EC) No 885/2006). He has worked as an IT Auditor, IT Risk Practitioner and CEO consultant for many Italian enterprises. Previously, Giuseppe worked as an IT Project Manager in several international IT projects.

Giuseppe can be contacted at arcidiaconog@gmail.com
## Intelligent Agents (Inference, First Order Logic)

Ute Schmid, Applied Computer Science, Bamberg University
Last change: July 9, 2012

## The Idea of MAS

- An agent is a computer system that is capable of independent action on behalf of its owner or user.
- A multiagent system consists of a number of agents which interact, typically by exchanging messages via some computer network infrastructure.
- Different agents might represent users/owners with different goals/motivations.
- Therefore, to successfully interact, agents require the ability to cooperate, coordinate, and negotiate with each other (similar to the interaction of people in everyday life).

## Key Research Questions

- Micro/agent design: how to build agents capable of independent, autonomous action?
- Macro/society design: how to build agents capable of interacting with other agents, especially if they have different goals/interests?

Standard AI focuses on the intelligent individual; MAS focuses on social abilities:

- Emergence of cooperation in a society of self-interested agents
- Language to communicate beliefs and aspirations
- Conflict recognition and resolution
- Coordination of activities to reach common goals

## Example Scenarios

- NASA Deep Space 1 mission (1998): a space probe with an autonomous, agent-based control system which can make some decisions by itself (before, control decisions were made entirely by a 300-person ground crew)
- Autonomous air-traffic control systems: recognition of failure of other control systems and cooperation to track and deal with attended flights (e.g. DVMT, Durfee; OASIS)
- Last-minute holiday package via PDA, using a negotiating agent

## MAS is Interdisciplinary Research

- Software engineering: the agent paradigm (going beyond OO)
- Social sciences: using theories, gaining insights by simulation of artificial societies
- AI: use planning, reasoning, and learning technologies; study intelligent behavior in dynamic, interactive environments
- Game theory: use theories and techniques for negotiation

## Definition: Agent

An agent is a computer system that is situated in some **environment**, and that is capable of **autonomous action** in this environment in order to meet its design objectives.

Compare to:
- Control systems, e.g. a thermostat
- Software daemons, e.g. xbiff

## Environments

- Accessible vs. inaccessible: can the agent obtain complete, accurate, up-to-date information about the environment's state?
- Deterministic vs. non-deterministic: each action has a single, guaranteed effect, with no uncertainty about the result of an action; note that highly complex deterministic environments must be handled as non-deterministic
- Static vs. dynamic: the environment remains unchanged except through actions of the agent
- Discrete vs. continuous: fixed, finite number of actions and percepts

→ Open environment: inaccessible, non-deterministic, dynamic, continuous

## Reactive Systems

- Two sources of complexity in MAS: the characteristics of the environment and the nature of the interaction between agent and environment
- A reactive system maintains an ongoing interaction with its environment; it must be studied and described on a behavioral level (not a functional level, i.e. not in classical terms of pre- and postconditions)
- Example: reactive planning systems
- Local decisions have global consequences
- Example: a printer controller. The simple rule "first grant access to process $p_1$ and at some later time to process $p_2$" is unfair, because it might never grant access to $p_2$

## Intelligent Agents

An intelligent agent is a computer system with the ability to perform actions independently, autonomously, and flexibly (on behalf of a user or an owner). Flexibility means being:
- reactive
- pro-active
- social

## Demands and Examples

- Perform a useful activity on behalf of humans or organizations (cleaning robot)
- Coexist/interact with humans (cleaning robot)
- Be aware of social rules and norms (transportation robot)
- Coordinate activities (team of cleaning robots)
- Cooperate or compete (RoboCup)
- Entertain or educate people (games, tutoring systems)

## Acting Reactively

- If the environment of a program is static (known in advance), the program cannot fail (compile time, runtime)
- In the real world, changes occur and information is incomplete (dynamic system)
- A reactive system continuously interacts with its environment and reacts in time to changes
- Example: Java listeners; BUT here reactions do NOT take the current state of the environment into account, they are determined in advance
- Reactive systems can be modelled relatively straightforwardly, e.g. as stimulus-response rules

## Acting Pro-actively

- Means: generate goals autonomously and try to reach them
- Not only event-driven behavior, but acting on one's own initiative

## Acting Socially

- The real world is a multi-agent environment
- When trying to reach goals, others must be taken into account
- Some goals can only be reached through cooperation
- In some situations conflicts and competition exist (e.g. internet auctions)
- Social skills of agents: the ability to model the goals of other agents when trying to reach one's own (local) goals, and the ability to interact (i.e. cooperate and coordinate)

## Further Features

- Mobility: the ability to move in a computer network or in another environment
- Adaptivity/learning: improving performance over time
- Rationality: do not act in a way which hinders the fulfillment of one's own goals

## Example: Tileworld

- Dynamic environment: holes appear/disappear
- The agent must recognize changes and modify its behavior
- (Figure: left, push a tile up to hole H; middle, H disappears; right, it is better to go to the hole H on the right)

## Agents as Intentional Systems

- Endowing agents with "mental" states: beliefs, desires, wishes, hopes
- Folk psychology: attributing attitudes for predicting and explaining other people's behavior
- Intentional systems (Dennett):
  - First order: having beliefs, desires, etc.
  - Second order: having beliefs and desires about beliefs and desires of its own and of others
- Compare to physical systems: to predict that a stone will fall from my hand, I do not attribute beliefs and desires but mass or weight

## Abstract Architecture

- Environment: finite set of discrete states $E = \{e, e', \ldots\}$; assumption: a continuous environment can be modelled by a discrete one to any degree of accuracy
- Repertoire of possible actions of an agent: $Ac = \{\alpha, \alpha', \ldots\}$
- Interaction of agent and environment: a run $r$ is a sequence of interleaved environment states and actions
- $R$: set of all possible finite sequences over $E$ and $Ac$
- $R^{Ac}$: subset of $R$ ending with an action
- $R^E$: subset of $R$ ending with an environment state
- State transformer function: $\tau : R^{Ac} \to P(E)$
- Termination: $\tau(r) = \emptyset$
- Environment: $Env = (E, e_0, \tau)$
- Agent: $Ag : R^E \to Ac$
- Set of runs of an agent in an environment: $R(Ag, Env)$
- Behavioral equivalence: $R(Ag_1, Env) = R(Ag_2, Env)$

## Purely Reactive Agents

- No reference to their history; the next action depends only on the current state
- \( Ag : E \rightarrow Ac \)
- Example: thermostat

## Perception

- \textit{see}: \( E \rightarrow \text{Per} \)
- 
\textit{action}: \( \text{Per}^* \rightarrow \text{Ac} \)
- Agent: \( \text{Ag} = (\text{see}, \text{action}) \)
- Equivalence relation over environment states: \( e \sim e' \) if \( \text{see}(e) = \text{see}(e') \)
- If \( |\sim| = |E| \), the agent is omniscient
- If \( |\sim| = 1 \), the agent has no perceptual ability

## Agents with State

- Internal states \( I \)
- \( action : I \rightarrow Ac \)
- \( next : I \times Per \rightarrow I \)
- State-based agents as defined here are not more powerful than agents as defined above. They have identical expressive power: every state-based agent can be transformed into a standard agent that is behaviorally equivalent.

## Utility Functions

- Telling an agent *what* to do without telling it how to do it
- Indirectly, via some performance measure
- Associate utility with states of the environment; prefer actions leading to states with higher utilities
- Utility can be defined over states or over runs:
\[ u : E \rightarrow \mathbb{R} \text{ or } u : R \rightarrow \mathbb{R} \]

## Task Environments

- \( \Psi : R \rightarrow \{0, 1\} \) is 1 (true) if a run satisfies some specification and 0 (false) otherwise
- Task environment: \( \langle \text{Env}, \Psi \rangle \) specifies the properties of the environment and the criteria by which an agent will be judged to have succeeded in its task
- Definition of success:
  - Pessimistic: \( \Psi(r) \) must hold for all \( r \in R(Ag, \text{Env}) \)
  - Optimistic: there exists an \( r \in R(Ag, \text{Env}) \) for which \( \Psi(r) \) holds
- Two kinds of tasks:
  - Achievement (relation to planning)
  - Maintenance

## Deductive Reasoning Agents

- The usual problems of knowledge engineering:
  - Transduction problem: translating the world into an adequate symbolic description
  - Representation problem: providing a representation such that agents can reason with it and the results are timely and useful
- Agents as theorem provers
- Logic-based agents: "deliberate agents"
- Specialized languages, e.g. MetateM, based on temporal logic

## Deliberate Agents

- \( D \): a set of logical formulae; the internal state of an agent is a database \( \delta \in D \)
- \( \rho \): a set of deduction rules
- \( \delta \rightarrow_\rho \varphi \): formula \( \varphi \) can be proved from database \( \delta \) using deduction rules \( \rho \)
- Goal: derive a formula \( Do(\alpha) \), either for the best action or for an action which is not explicitly forbidden:

    function action($\delta : D$)
        for each $\alpha \in Ac$ do
            if $\delta \rightarrow_\rho Do(\alpha)$ then return $\alpha$
        for each $\alpha \in Ac$ do
            if $\delta \nrightarrow_\rho \neg Do(\alpha)$ then return $\alpha$
        return null

## Vacuum World

- Predicates: In(x,y), Dirt(x,y), Facing(d) with d ∈ \{north, east, south, west\}
- Navigation: In(0,0) ∧ Facing(north) ∧ ¬Dirt(0,0) → Do(forward)
- Cleaning: In(x,y) ∧ Dirt(x,y) → Do(suck)

## Concurrent MetateM

- Michael Fisher, 1994
- A language for the direct execution of logical formulae (executable temporal logic)
- Close to the "ideal" of agents as deductive theorem provers
- Concurrently executing agents; communication via asynchronous broadcast message passing
- Components of an agent:
  - Interface: defines interaction with other agents via an \textit{id}, a set of \textit{environment propositions} (accepted messages), and a set of \textit{component properties} (messages the agent can send), e.g. \texttt{stack(pop, push)[popped, full]}
  - Computational engine (executable temporal logic)
- Agent specification as a set of program rules of the form: antecedent about the past ⇒ consequent about the present and future
- "Declarative past and imperative future" paradigm

## Temporal Connectives

<table>
<thead>
<tr><th>Operator</th><th>Meaning</th></tr>
</thead>
<tbody>
<tr><td>◯φ</td><td>φ is true "tomorrow"</td></tr>
<tr><td>●φ</td><td>φ was true "yesterday"</td></tr>
<tr><td>♦φ</td><td>at some time in the future, φ</td></tr>
<tr><td>□φ</td><td>always in the future, φ</td></tr>
<tr><td>◆φ</td><td>at some time in the past, φ</td></tr>
<tr><td>■φ</td><td>always in the past, φ</td></tr>
<tr><td>φUψ</td><td>φ will be true until ψ</td></tr>
<tr><td>φSψ</td><td>φ has been true since ψ</td></tr>
<tr><td>φWψ</td><td>φ is true unless ψ (weak until)</td></tr>
<tr><td>φZψ</td><td>φ has been true since ψ (weak since)</td></tr>
<tr><td>start</td><td>nullary operator, true only at the beginning</td></tr>
</tbody>
</table>

## Example

rp(ask1, ask2)[give1, give2]:
\[ \bullet \text{ask1} \Rightarrow \lozenge \text{give1}; \]
\[ \bullet \text{ask2} \Rightarrow \lozenge \text{give2}; \]
\[ \text{start} \Rightarrow \square \neg (\text{give1} \land \text{give2}). \]

rc1(give1)[ask1]:
\[ \text{start} \Rightarrow \text{ask1}; \]
\[ \bullet \text{ask1} \Rightarrow \text{ask1}. \]

rc2(ask1, give2)[ask2]:
\[ \bullet (\text{ask1} \land \neg \text{ask2}) \Rightarrow \text{ask2}. \]

Example run:

<table>
<thead>
<tr><th>Time</th><th>Propositions true</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>ask1</td></tr>
<tr><td>1</td><td>ask1, ask2, give1</td></tr>
<tr><td>2</td><td>ask1, give2</td></tr>
<tr><td>3</td><td>ask1, give2</td></tr>
<tr><td>4</td><td>ask1, give2</td></tr>
</tbody>
</table>
...
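The resource example above can be animated with a small Python sketch. This is a toy interpretation of the rule set under stated assumptions, not Fisher's Concurrent MetateM engine: each step fires rules whose antecedents held "yesterday", records the producer's eventuality commitments (◇give1, ◇give2) in a FIFO queue, and serves at most one commitment per step so that give1 and give2 are never granted together.

```python
# Toy simulation of the Concurrent MetateM resource example.
# An illustrative sketch, not Fisher's actual interpreter.

def step(prev, pending):
    """Compute the propositions true 'today' from yesterday's state."""
    now = set()
    # rc1: start => ask1;  yesterday ask1 => ask1  (asks forever)
    if not prev or "ask1" in prev:
        now.add("ask1")
    # rc2: yesterday (ask1 and not ask2) => ask2
    if prev and "ask1" in prev and "ask2" not in prev:
        now.add("ask2")
    # rp: yesterday ask_i => eventually give_i (record a commitment)
    if prev:
        for i in ("1", "2"):
            if "ask" + i in prev and "give" + i not in pending:
                pending.append("give" + i)
    # rp: always not (give1 and give2) -> serve the oldest commitment only
    if pending:
        now.add(pending.pop(0))
    return now

prev, pending, trace = set(), [], []
for t in range(6):
    prev = step(prev, pending)
    trace.append(sorted(prev))
for state in trace:
    print(state)
```

The FIFO queue plays the role of MetateM's fairness requirement: every outstanding eventuality is eventually satisfied, while the mutual-exclusion invariant □¬(give1 ∧ give2) holds at every step.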
## Practical Reasoning Agents

- Deciding *what* state to achieve: deliberation
- Deciding *how* to achieve that state: means-end analysis

## The Procedural Reasoning System

- Georgeff and Lansky
- Belief-desire-intention (BDI) architecture
- (Diagram: data input feeds beliefs; beliefs, plans, desires and intentions interact via an interpreter, which produces the action output)

## Rational Actions

- Reasoning and planning agents: calculative/computational rationality
- First step: deliberation, forming an intention about **what** goal should be achieved
- Second step: means-end analysis, deciding **how** the goal should be achieved
- Intention: the agent commits itself to a task; inconsistent intentions can be blocked by an active intention
- Agents believe that their intentions can be fulfilled, and do not believe that they cannot fulfill their intentions (rationality)
- Model: belief, desire, intention (BDI) semantics

In the following, the focus is on multi-agent interactions (games).

## Rational Actions in MAS

- Assume that the environment can be influenced by both agents
- Utility function \( u(\omega) \), with outcome \( \omega \) represented as a tuple of actions:
\[ u_i(D, D) = 1; \quad u_i(D, C) = 1; \quad u_i(C, D) = 4; \quad u_i(C, C) = 4 \]
\[ u_j(D, D) = 1; \quad u_j(D, C) = 4; \quad u_j(C, D) = 1; \quad u_j(C, C) = 4 \]
- Preference of agent \( i \):
\[ (C, C) \geq_i (C, D) \geq_i (D, C) \geq_i (D, D) \]
- \( C \) is the rational decision for \( i \), because it prefers all outcomes in which it selects \( C \)
- This is the basic model of game theory.
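The preference claim above can be checked mechanically. A minimal Python sketch, with the utility values copied from the slide; the tuple order \((a_i, a_j)\) is an assumption consistent with the stated preference order:

```python
# Utilities over joint actions (a_i, a_j), copied from the example above.
u_i = {("C", "C"): 4, ("C", "D"): 4, ("D", "C"): 1, ("D", "D"): 1}

# Agent i's preference order over outcomes: sort by utility, best first.
prefs = sorted(u_i, key=lambda w: u_i[w], reverse=True)
print(prefs)

# C is the rational choice for i: even the worst outcome of playing C
# is at least as good as the best outcome of playing D.
worst_c = min(u_i[("C", a_j)] for a_j in ("C", "D"))
best_d = max(u_i[("D", a_j)] for a_j in ("C", "D"))
print(worst_c >= best_d)  # True
```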
- Representation: payoff matrices

## Payoff Matrix

<table>
<thead>
<tr><th></th><th>j: Cooperate</th><th>j: Defect</th></tr>
</thead>
<tbody>
<tr><td>i: Cooperate</td><td>u_i = 4, u_j = 4</td><td>u_i = 4, u_j = 1</td></tr>
<tr><td>i: Defect</td><td>u_i = 1, u_j = 4</td><td>u_i = 1, u_j = 1</td></tr>
</tbody>
</table>

## Dominant Strategies

- Strategy: a decision function by which an agent selects an action
- A strategy \( s_1 \) dominates another strategy \( s_2 \) if the agent prefers every possible outcome of \( s_1 \) over every possible outcome of \( s_2 \) (with respect to the possible actions of the other agents)

Payoffs for agent \( i \):

<table>
<thead>
<tr><th></th><th>i: s1</th><th>i: s2</th></tr>
</thead>
<tbody>
<tr><td>j: s1'</td><td>4</td><td>1</td></tr>
<tr><td>j: s2'</td><td>4</td><td>4</td></tr>
</tbody>
</table>

Every outcome of \( s_1 \) is at least as good as every outcome of \( s_2 \), both against \( s_1' \) and against \( s_2' \), so \( s_1 \) dominates \( s_2 \).

- A rational agent will never select a strategy which is dominated by another strategy (such strategies can be excluded when actions are selected).
- Unfortunately, a distinct dominant strategy does not always exist.
- Pareto-optimality: an outcome is Pareto-optimal if there is no other outcome that all agents would prefer.

## Nash Equilibrium

- Two strategies \( s_1 \) and \( s_2 \) are in Nash equilibrium if:
  - under the assumption that agent \( i \) plays \( s_1 \), \( s_2 \) is the rational choice for agent \( j \), and
  - under the assumption that agent \( j \) plays \( s_2 \), \( s_1 \) is the rational choice for agent \( i \)
- If two strategies are in Nash equilibrium, no agent has an incentive to change its strategy.
- Reaching a Nash equilibrium is desirable, because switching strategies takes effort, and strategy switches might endanger the stability of a system.
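For small games, Nash equilibria can be found by brute force: enumerate every joint action and keep those where neither player can gain by deviating alone. A sketch, with the payoffs taken from the running example above (`nash_equilibria` is a hypothetical helper, not a library function):

```python
from itertools import product

def nash_equilibria(u_i, u_j, actions=("C", "D")):
    """Return joint actions where neither player gains by deviating alone.

    u_i, u_j map joint actions (a_i, a_j) to each player's payoff.
    """
    eqs = []
    for a_i, a_j in product(actions, repeat=2):
        # a_i is a best response to a_j, and vice versa.
        best_i = all(u_i[(a_i, a_j)] >= u_i[(d, a_j)] for d in actions)
        best_j = all(u_j[(a_i, a_j)] >= u_j[(a_i, d)] for d in actions)
        if best_i and best_j:
            eqs.append((a_i, a_j))
    return eqs

# Payoffs from the running example above.
u_i = {("C", "C"): 4, ("C", "D"): 4, ("D", "C"): 1, ("D", "D"): 1}
u_j = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 1}
print(nash_equilibria(u_i, u_j))  # [('C', 'C')]
```

Here (C, C) is the only equilibrium, matching the analysis above: each agent's dominant strategy is C, and a dominant strategy equilibrium is always a Nash equilibrium.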
## Analysis of Game-Theoretic Scenarios

- The Nash equilibrium is an important tool for analyzing game-theoretic scenarios
- If each player has a dominant strategy, the combination of those strategies is called a dominant strategy equilibrium
- Solution of a game: the rational strategy for each agent
- Every dominant strategy equilibrium is a Nash equilibrium
- Nash's theorem: equilibrium strategies exist even if there is no dominant strategy
- A Nash equilibrium is a necessary condition for an optimal solution (but is it also sufficient?)

## Coordination Games

- Acme, a video game hardware manufacturer, must decide whether its next game machine will use CD or DVD.
- Best, a video software producer, must decide whether to produce its next game on CD or DVD.

<table>
<thead>
<tr><th></th><th>Acme: DVD</th><th>Acme: CD</th></tr>
</thead>
<tbody>
<tr><td>Best: DVD</td><td>Best 9, Acme 9</td><td>Best -4, Acme -1</td></tr>
<tr><td>Best: CD</td><td>Best -3, Acme -1</td><td>Best 5, Acme 5</td></tr>
</tbody>
</table>

- No dominant strategy
- Two Nash equilibria: (CD, CD) and (DVD, DVD); if one player unilaterally moves to a different strategy, he will be worse off
- One Pareto-optimal solution: (DVD, DVD)
- If the payoffs for CD and DVD were equal, there would be two Pareto-optimal solutions → guess or communicate → coordination game

## Zero-Sum Games

- If the preference orders of two agents are diametrically opposed, we have a strongly competitive scenario:
\[ \omega \geq_i \omega' \rightarrow \omega' \geq_j \omega \]
- An interaction is called a zero-sum interaction if
\[ u_i(\omega) + u_j(\omega) = 0 \text{ for all } \omega \in \Omega \]
- All zero-sum interactions are competitive!

## Prisoner's Dilemma

- Invented by Albert W. Tucker, further studied e.g. by Robert Axelrod
- Two people are accused of complicity in a criminal act
- They are in two different prison cells and cannot communicate
- The attorney guarantees:
  - If one confesses the crime and the other does not, the first goes free and the other goes to prison for 5 years
  - If both confess, both go to prison for 3 years
  - Both know that they go to prison for 2 years if neither of them confesses

Years in prison (lower is better):

<table>
<thead>
<tr><th></th><th>j: confess</th><th>j: deny</th></tr>
</thead>
<tbody>
<tr><td>i: confess</td><td>i: 3, j: 3</td><td>i: 0, j: 5</td></tr>
<tr><td>i: deny</td><td>i: 5, j: 0</td><td>i: 2, j: 2</td></tr>
</tbody>
</table>

- Global utility is maximal if both cooperate (deny)
- But for each single agent, the rational choice is not to cooperate but to confess
- Is cooperation feasible in a society of rational, egoistic agents?

## Generalized Form of the PD

<table>
<thead>
<tr><th></th><th>j: cooperate</th><th>j: defect</th></tr>
</thead>
<tbody>
<tr><td>i: cooperate</td><td>both win (R, R)</td><td>i loses much, j wins much (S, T)</td></tr>
<tr><td>i: defect</td><td>i wins much, j loses much (T, S)</td><td>both lose (P, P)</td></tr>
</tbody>
</table>

- \( T \): temptation to defect, \( R \): reward for mutual cooperation, \( P \): punishment for mutual defection, \( S \): sucker's payoff
- For PD scenarios it must hold that \( T > R > P > S \)
- For iterated versions, in addition: \( 2R > T + S \). If that condition does not hold, then full cooperation is not necessarily Pareto-optimal, as the players are collectively better off having each player alternate between cooperation and defection. (Douglas Hofstadter)

## Iterated PD

- Repeated play, with memory of earlier encounters
- Studied in social psychology; competitions between programmed strategies
- Greedy strategies tend to do very poorly in the long run; altruistic strategies do better
- Best deterministic strategy: "Tit for Tat", developed by Anatol Rapoport: the simplest of any program entered, containing only four lines of BASIC, won the contest!
- Cooperate on the first iteration of the game; after that, play what the opponent played on the previous move
- Best if mis-communication is introduced: "Tit for Tat with forgiveness". When the opponent defects, on the next move the player sometimes cooperates anyway, with a small probability (around 1%-5%). This allows occasional recovery from getting trapped in a cycle of defections.

## Analyzing Top-Scoring Strategies

- Nice, retaliating, forgiving, non-envious
- Utopian-sounding: nice guys can finish first!
- Example application: arms races

## Dilemma of the Commons

- "Allmende-Klemme" (William Forster Lloyd, 1833; Garrett Hardin, 1968)
- A social trap involving a conflict over resources between individual interests and the common good
- Free access and unrestricted demand for a finite resource ultimately dooms the resource through over-exploitation!
- This occurs because the benefits of exploitation accrue to individuals, each of whom is motivated to maximise his or her own use of the resource, while the costs of exploitation are distributed among all those to whom the resource is available
- Solutions?
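Returning to the iterated PD: Tit for Tat is short enough to reproduce in a few lines. A sketch using the concrete payoffs (T, R, P, S) = (5, 3, 1, 0), which satisfy both PD conditions; these numbers are an assumption for illustration, not taken from the slides:

```python
# Iterated Prisoner's Dilemma. PAYOFF[(my_move, their_move)] is my score,
# using (T, R, P, S) = (5, 3, 1, 0): T > R > P > S and 2R > T + S hold.
# These concrete numbers are an illustrative assumption.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy1, strategy2, rounds=10):
    """Play both strategies against each other; return their total scores."""
    moves1, moves2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = strategy1(moves2), strategy2(moves1)
        moves1.append(m1)
        moves2.append(m2)
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 14)
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
```

Tit for Tat loses exactly one round to Always Defect (the first), then retaliates forever; against itself it sustains full cooperation, which is why nice, retaliating strategies dominate round-robin tournaments.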
## Multiagent Communication

- Competitive settings: mechanisms for collective decision making
  - Voting
  - Auctions
  - Negotiation
  - Argumentation
- Cooperative settings: communication for distributed problem solving
  - Speech acts
  - Agent communication languages
  - Ontologies

## Collective Decision Mechanisms

Design criteria for a protocol:
- Guaranteed success: ensuring that eventually an agreement can be reached
- Maximizing social welfare: the total sum of utilities should be maximal
- Pareto efficiency: there is no other outcome where at least one agent is better off and none is worse off
- Individual rationality: following the protocol is in the best interest of the negotiation participants
- Stability: providing all agents with an incentive to behave in a particular way (e.g. a Nash equilibrium)
- Simplicity: a participant can easily determine the optimal strategy
- Distribution: designed such that there is no "single point of failure", and so as to minimize communication between agents

## Auctions

- Online auctions are very popular
- Simple interaction scenarios → easy to automate
- A good choice as a simple way for agents to reach agreements when allocating goods, tasks, and resources
- Participants: an auctioneer agent, bidder agents, and a good
- Private value vs. public/common value of goods; correlated value: the valuation depends on private factors as well as on other agents' valuations

## Dimensions of Auction Protocols

- Winner determination: first price (the highest bidder gets the good for the bid amount) or second price (the highest bidder gets the good, but for the price of the second-highest bid)
- Knowledge of bids: open cry vs. sealed bid
- Bidding mechanism: one shot, or ascending/descending bids in successive rounds

→ different types of auctions

## English Auctions

- The "mother of auctions" (Sotheby's)
- First-price, open-cry, ascending (starting with a reservation price)
- Dominant strategy: successively bid a small amount more than the current highest bid until the price reaches your current valuation, then withdraw
- Winner's curse: uncertainty about the true value (e.g. land speculation); the winner might have overvalued the good

## Dutch Auctions

- Open-cry, descending
- The auctioneer starts with an artificially high value
- The value is decreased until someone makes an offer
- No dominant strategy
- Also susceptible to the winner's curse

## First-Price Sealed-Bid Auctions

- First-price, sealed-bid, one-shot
- The simplest form of auction
- The difference between the second-highest and the highest bid is wasted money
- Best strategy: bid less than your true valuation; how much less depends on the other agents, and there is no general solution

## Vickrey Auctions

- Second-price, sealed-bid, one-shot
- Dominant strategy: bid your true valuation
- Because truth telling is the dominant strategy, this form is discussed much in the multiagent literature
- BUT: counterintuitive for human bidders
- Possibility of antisocial behavior: your own valuation is $90, you guess that another agent will pay $100, therefore you bid $99 so that the opponent has to pay more than necessary
- Commercial situations: a company that cannot compete directly may force another company into bankruptcy this way

## Issues for Auctions

- Expected revenue: strategies for the auctioneer to maximize revenue (for risk-neutral bidders, the expected revenue is provably identical in all auction types)
- Lies and collusion: coalitions between agents (bid small amounts, share the win afterwards), placing bogus bidders, ...
- Counterspeculation: costs time and money and is risky (compare with meta-level reasoning)

## Negotiation

- Auctions are only concerned with the allocation of goods
- When agents must reach agreements on matters of mutual interest, richer techniques are required
- Negotiation techniques for artificial agents (Rosenschein and Zlotkin, 1994)

## Object vs. Agent Communication

- Object \( o_2 \) invokes a public method \( m_1 \) of object \( o_1 \), passing argument \( arg \)
- \( o_2 \) communicates \( arg \) to \( o_1 \)
- BUT: the decision to execute \( m_1 \) lies only with \( o_2 \)
- An autonomous agent has control over both its state and its behavior
- There is no guarantee that another agent will actually perform a requested action
- An agent cannot force another agent to perform some action or to change its internal state
- An agent can try to influence another agent by communicating
- Communication can change the internal state (beliefs, desires, intentions) of another agent
- Communication as a special case of action: speech acts
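As a final worked example, the Vickrey auction discussed above is easy to sketch: the highest bidder wins but pays only the second-highest bid, which is what makes truthful bidding the dominant strategy (the bidder names and the `vickrey` helper are hypothetical):

```python
def vickrey(bids):
    """Second-price sealed-bid: the highest bidder wins and pays the
    second-highest bid. `bids` maps bidder name -> bid amount."""
    ranked = sorted(bids, key=lambda name: bids[name], reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else bids[winner]
    return winner, price

# The antisocial-bidding scenario from above: bob values the good at 100
# and bids truthfully; he wins, but pays only alice's bid of 90.
print(vickrey({"alice": 90, "bob": 100, "carol": 80}))  # ('bob', 90)
```

Because the winner's payment depends only on the other bids, overbidding risks paying more than one's valuation and underbidding only risks losing a profitable win, so bidding the true valuation is never worse than any alternative.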
Describable Visual Attributes for Face Verification and Image Search
Kumar, Berg, Belhumeur, Nayar. *PAMI*, 2011.
Ryan Lei, 2011/05/05, course aMMAI.

Outline
• Title explanation
• Introduction
• Related work
• Creating labeled image datasets
• Learning visual attributes
• Application to face verification
• Application to face search
• Possible improvements

Title Explanation
• Describable visual attributes: *labels* that can be given to an image to describe its appearance. Examples: gender, age, skin color, hair style, smiling.
• Goal 1 - face verification (authentication):
  a. Determine whether two faces are of the same individual.
  b. Knowing the identity, judge whether the query face image shows the same person.
• Goal 2 - image search: an attribute-based image search system. Sample query: "smiling Asian man with glasses".

Introduction (1)
• Traditional face recognition research uses low-level image features to *directly* train a classifier for the classification task. Examples: color (histogram, GCM), texture (Gabor), intensity (Haar, LBP), gradient (SIFT, HOG).
• These feature representations are often high-dimensional and live in an abstract space.
• Proposal - attribute-based representation:
  a. Describable visual attributes (*"attributes"* in these slides).
  b. *Similes*: similarities to reference faces.
• Recap: semantic concept detection (course MMAI). Concept space for faces: (a) attributes, (b) similes.

Introduction (2)
• Advantages of attribute-based representation:
  - A mid-level representation bridging the "semantic gap".
  - Dimensionality reduction and manifold discovery.
  - *Flexible*, *generalizable*, *efficient*.
• Flexible: various levels of specificity.
  - "white male" → a set of people.
  - … + "brown-hair green-eyes scar-on-forehead" → a specific person.
  - … + "smiling lit-from-above seen-from-left" → a particular image of that person.
Introduction (3)
• Generalizable: learn a set of attributes from a large image collection, then apply them in arbitrary combinations to the recognition of unseen images.
• Efficient: $k$ binary attributes can identify up to $2^k$ categories, and a much smaller labeled dataset suffices for comparable performance on recognition tasks.

Introduction (4)
• Contributions of this paper:
  1. Introduce attribute and simile classifiers, and a face representation built from their outputs.
  2. Apply them to face verification and face search.
  3. Release two large public datasets: *FaceTracer* and *PubFig*.
• Previous work:

Creating Labeled Datasets (1)
• Steps 1 & 2:
  - Download images from a variety of online resources, such as Yahoo! Images and Flickr.
  - Using a commercial face detector (Omron OKAO), detect faces, pose angles, and fiducial points.
  - Using the yaw angle, flip each face so it always faces left.
  - Align the faces using *linear least-squares regression*.

Review of linear least-squares regression
• Basics (1-dimensional-output case):
  - Given a vector $\mathbf{x}$, we want to predict its output $y$.
  - Prepare a training set of $N$ data points $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}$.
  - Introduce $M$ basis functions $\phi_j(\cdot)$, e.g., Gaussians, such that $y$ is a linear combination of the $M$ basis components:
$$ y = \sum_{j=1}^{M} w_j \, \phi_j(\mathbf{x}) = \mathbf{w}^T \phi(\mathbf{x}) $$
  - Goal: learn the weight vector $\mathbf{w}$.
  - Prediction: $y_{\mathrm{new}} = \mathbf{w}^T \phi(\mathbf{x}_{\mathrm{new}})$

Review of linear least-squares regression (cont.)
• Formulation: $N$ overdetermined equations, \( \sum_{j=1}^{M} \phi_j(\mathbf{x}_i) \, w_j = y_i, \quad i = 1, 2, \ldots, N \)
\[
X = \begin{pmatrix} \phi_1(\mathbf{x}_1) & \phi_2(\mathbf{x}_1) & \cdots & \phi_M(\mathbf{x}_1) \\ \phi_1(\mathbf{x}_2) & \phi_2(\mathbf{x}_2) & \cdots & \phi_M(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(\mathbf{x}_N) & \phi_2(\mathbf{x}_N) & \cdots & \phi_M(\mathbf{x}_N) \end{pmatrix}, \quad
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix} \quad \Rightarrow \quad X\mathbf{w} = y
\]
• Solve the normal equations for \( \mathbf{w} \):
\[
X^T X \mathbf{w} = X^T y \quad \Rightarrow \quad \mathbf{w} = (X^T X)^{-1} X^T y
\]
• Extension to the $D$-dimensional-output case:
\[
W = \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1D} \\ w_{21} & w_{22} & \cdots & w_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ w_{M1} & w_{M2} & \cdots & w_{MD} \end{pmatrix}, \quad
Y = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1D} \\ y_{21} & y_{22} & \cdots & y_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ y_{N1} & y_{N2} & \cdots & y_{ND} \end{pmatrix}
\]
\[
X^T X W = X^T Y \quad \Rightarrow \quad W = (X^T X)^{-1} X^T Y
\]
• Given 6 fiducial points, predict 6 new coordinates; here the matrix \( W \) encodes an *affine transformation*.

Creating Labeled Datasets (2)
• Step 3: the crowd-sourcing part.
  - The authors chose Amazon Mechanical Turk, a service that matches workers to online jobs created by requesters.
  - Quality control: require confirmation of results by several workers, a minimum of worker experience, etc.
  - After verification, they collected 145,000 attribute labels, at a cost of USD 6,000.

Creating Labeled Datasets (3)
• FaceTracer – attribute labels:
  - 15,000 faces with 5,000 labels (0.33 labels per image).
  - Researchers still need to train their own attribute classifiers to transform the faces into attribute space.
  - http://www.cs.columbia.edu/CAVE/databases/facetracer/
• PubFig – identity labels:
  - 58,797 images of 200 public figures.
  - Development set: 60 people (for training the simile classifiers).
  - Evaluation set: 140 people (same purpose as LFW).
  - http://www.cs.columbia.edu/CAVE/databases/pubfig/
• What if identities and attributes were labeled jointly?

[Figure: number of faces per person in (a) the PubFig development set (60 individuals); (c) all 170 images of Steve Martin.]

Attributes labeled in the FaceTracer dataset:

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Values</th>
</tr>
</thead>
<tbody>
<tr>
<td>gender</td>
<td>male, female</td>
</tr>
<tr>
<td>race</td>
<td>asian, white, black</td>
</tr>
<tr>
<td>age</td>
<td>baby, child, youth, middle_aged, senior</td>
</tr>
<tr>
<td>hair_color</td>
<td>blond, not_blond</td>
</tr>
<tr>
<td>eye_wear</td>
<td>none, eyeglasses, sunglasses</td>
</tr>
<tr>
<td>mustache</td>
<td>true, false</td>
</tr>
<tr>
<td>expression</td>
<td>smiling, not_smiling</td>
</tr>
<tr>
<td>blurry</td>
<td>true, false</td>
</tr>
<tr>
<td>lighting</td>
<td>harsh, flash</td>
</tr>
<tr>
<td>environment</td>
<td>outdoor, indoor</td>
</tr>
</tbody>
</table>

However, all attribute classifiers in this work are binary.

Learning Visual Attributes (1)
• Definitions:
  - Consider the attribute "gender" and the qualitative labels "male" and "female".
  - An attribute can be thought of as a function that maps an image $I$ to a real value $r_i$, visualized as an axis running from "female" through 0 to "male".
  - Large positive (negative) values of $r_i$ indicate the presence (absence) of the $i$-th attribute.
• Core part of this work: feature selection, though it remains an open problem in machine learning.

Learning Visual Attributes (2)
• Low-level feature options:
  a) Region of face (10 options).
  b) Type of pixel value (5).
  c) Normalization to apply (3).
  d) Level of aggregation to use (3).
• A total of 10 × 5 × 3 × 3 = 450 combinations.
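The size of this feature pool can be checked with a short enumeration. Only some option names appear on the slides; the rest below are placeholders, and the hue filter is just one plausible way to encode the constraint that normalizing hue values is invalid.

```python
# Enumerate the low-level feature pool: region x pixel value type x
# normalization x aggregation. Unnamed options are placeholders.
from itertools import product

regions = [f"region_{i}" for i in range(10)]                # 10 face regions
pixel_types = ["RGB", "HSV", "intensity", "edge_mag", "edge_ori"]
normalizations = ["none", "mean", "energy"]
aggregations = ["none", "agg_1", "agg_2"]                   # 3 levels (assumed)

pool = list(product(regions, pixel_types, normalizations, aggregations))
# 10 * 5 * 3 * 3 = 450 candidate feature options.

# Not every combination is valid, e.g. normalizing hue values:
valid = [c for c in pool if not (c[1] == "HSV" and c[2] != "none")]
```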
Learning Visual Attributes (3)
• Complete feature pool; however, not all combinations are valid, e.g., normalization of hues.
  - Mean normalization: $\hat{x} = x / \mu$. Energy normalization: $\hat{x} = (x - \mu) / \sigma$.

<table>
<thead>
<tr>
<th>b) Pixel Value Types</th>
<th>c) Normalizations</th>
<th>d) Aggregations</th>
</tr>
</thead>
<tbody>
<tr>
<td>RGB</td>
<td>None</td>
<td>None (concatenation)</td>
</tr>
<tr>
<td>HSV</td>
<td>Mean Normalization</td>
<td></td>
</tr>
<tr>
<td>Image Intensity</td>
<td>Energy Normalization</td>
<td></td>
</tr>
<tr>
<td>Edge Magnitude</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Edge Orientation</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>

• **Forward feature selection** – in each iteration:
  1. Train all possible classifiers by *concatenating* one feature option with the current feature set.
  2. Evaluate their performance by cross-validation.
  3. Add the feature(s) with the highest CV accuracy.
  4. Drop the lowest-scoring 70% of the candidate features.
  5. Keep adding until the accuracy stops improving.

Learning Visual Attributes (5)
• Classifiers used: SVMs with an RBF kernel (tool: libSVM); there seems to be only one final classifier per attribute. (The ECCV 2008 version used a boosting approach.)
• Thought: if features A and B work well individually, will they work well together?
• Accuracies typically range from 80% to 90%. Analysis shows that all regions and feature types are useful: the power of feature selection.
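The greedy loop above can be sketched as follows. The cross-validation scorer is stubbed out with a toy function; in the paper, each evaluation trains an SVM and measures cross-validation accuracy.

```python
# Minimal sketch of greedy forward feature selection with candidate pruning.
# cv_score is a stand-in for "train a classifier on this feature set and
# return its cross-validation accuracy".

def forward_select(features, cv_score, drop_ratio=0.7):
    """Grow a feature set until CV accuracy stops improving."""
    selected, best = [], 0.0
    pool = list(features)
    while pool:
        # Steps 1-2: evaluate one candidate classifier per remaining option.
        scored = sorted(pool, key=lambda f: cv_score(selected + [f]),
                        reverse=True)
        top = scored[0]
        top_score = cv_score(selected + [top])
        if top_score <= best:            # step 5: stop when no improvement
            break
        selected.append(top)             # step 3: add the best feature
        best = top_score
        keep = max(1, int(len(scored) * (1 - drop_ratio)))
        pool = scored[1:][:keep]         # step 4: drop the lowest-scoring 70%
    return selected, best

# Toy scorer: pretend features "a" and "b" are the informative ones.
weights = {"a": 0.5, "b": 0.3, "c": 0.05, "d": 0.01}
score = lambda fs: min(0.95, 0.5 + sum(weights.get(f, 0) for f in fs) / 2)
subset, acc = forward_select(["a", "b", "c", "d"], score)
```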
Learning Visual Attributes (6)
• Simile classifiers:

[Figure: positive and negative training examples for simile classifiers – eyebrows, eyes, nose, and mouth regions for two reference people.]

  - 60 reference people.
  - 8 manually selected face regions (similes are component-based by definition).
  - 6 manually selected combinations of [pixel value type, normalization, aggregation], *without automatic selection* (although it could have been used).
  - This yields a total of $60 \times 8 \times 6 = 2{,}880$ simile classifiers, each an SVM with an RBF kernel.
• We now have these attributes and similes as a mid-level representation. How do we apply them to high-level classification and retrieval problems?
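The structure of the simile bank can be sketched like this. Region and feature-setting names are placeholders (the slides only name four regions), and a stub decision function stands in for the trained RBF-SVMs.

```python
# Hypothetical sketch of the simile-classifier bank: one classifier per
# (reference person, face region, feature setting) triple.
from itertools import product

references = [f"person_{i}" for i in range(60)]         # 60 reference people
regions = ["eyebrows", "eyes", "nose", "mouth",         # 4 named regions,
           "region_5", "region_6", "region_7", "region_8"]  # 4 placeholders
settings = [f"setting_{i}" for i in range(6)]           # 6 feature settings

bank = list(product(references, regions, settings))     # 60 * 8 * 6 = 2880

def simile_vector(face, bank):
    # One decision value per simile classifier; the real system would run
    # an RBF-SVM here, so we stub it with a bounded hash-based score.
    return [hash((face, *clf)) % 100 / 50.0 - 1.0 for clf in bank]

vec = simile_vector("query_face.jpg", bank)
```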
Application to Face Verification (1)
• Problem: "Are these two faces of the same person?"
• Existing methods:
  - Very early work compared L2 distance in PCA-reduced space, later improved by supervised LDA.
  - Early work used well-known low-level features directly.
  - These often make *avoidable* mistakes: men confused for women, young people for old, Asians for Caucasians.
• From this observation, the authors claim that the attribute and simile classifiers can avoid such mistakes.

Application to Face Verification (2)
• Many steps have been explained in previous sections. The goal here is the verification classifier.

Application to Face Verification (3)
• Verification classifier:
  - Train a verification classifier \( V \) that compares attribute vectors \( C(I_1) \) and \( C(I_2) \) of two face images \( I_1 \) and \( I_2 \), and returns the decision \( v(I_1, I_2) \).
  - Define \( a_i = C_i(I_1) \) and \( b_i = C_i(I_2) \), where the subscript \( i \) denotes the \( i \)-th attribute (or simile) value, \( i = 1, 2, \ldots, n \).
  - Use both the absolute difference and the product:
    1) \( |a_i - b_i| \): \( a_i \) and \( b_i \) should be close if the images show the same individual.
    2) \( a_i \cdot b_i \): \( a_i \) and \( b_i \) should have the same sign if the images show the same individual.

Application to Face Verification (4)
• Verification classifier (cont.):
  - Concatenate them into a tuple \( p_i \), then concatenate over all \( i \):
\[ p_i = \langle |a_i - b_i|, \, a_i b_i \rangle, \qquad v(I_1, I_2) = V(\langle p_1, \ldots, p_n \rangle) \]
  - \( \langle p_1, \ldots, p_n \rangle \) is the training input to the classifier; again, an SVM with an RBF kernel.
• Experiment:
  - Performed on the LFW and PubFig datasets.
  - Even the *individuals* are disjoint between training and testing sets: the machine never sees the same people in its model and must rely solely on attributes / similes.
  - Fig. 10 shows the results.
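The pair representation \( p_i = \langle |a_i - b_i|, a_i b_i \rangle \) is easy to sketch; the attribute values below are invented, and the real system feeds the resulting vector to an RBF-SVM.

```python
# Build the verification input <p_1, ..., p_n> from two attribute vectors
# a = C(I1) and b = C(I2): per attribute, (|a_i - b_i|, a_i * b_i).
import numpy as np

def pair_features(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pairs = np.stack([np.abs(a - b), a * b], axis=1)  # shape (n, 2)
    return pairs.reshape(-1)                          # concatenated tuples

# Same person: values close and same-signed -> small |a-b|, positive a*b.
same = pair_features([1.2, -0.8], [1.0, -0.6])
# Different people: opposite signs -> negative products.
diff = pair_features([1.2, -0.8], [-0.9, 0.7])
```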
Face verification performance on LFW
• How well can attributes potentially do?
  - Attributes obtained by machine learning: 81.57%.
  - Attributes obtained by human labeling: 91.86%.
  - Face verification entirely by humans: 99.20%.

Application to Face Search
• FaceTracer search engine:
  - For details, refer to the ECCV 2008 work. Demo video on YouTube: http://www.youtube.com/watch?v=20UJ7JL7RNs
  - Text-based queries: remember that the mid-level classifiers have bridged the semantic gap.
• For each attribute, search results are ranked by confidence.
  - Confidences are converted into probabilities in $[0,1]$ by a Gaussian.
  - For multiple query terms, combine them by taking the product (AND) of the probabilities.
• Indexing: a basic inverted index? Not mentioned.
• What about the *user gap*? To accurately capture the user's search intention:
  - *Sketch-based*: e.g., a couple, sunset, mountain, sea.
  - *Concept-based*: e.g., jeep, grass.
  - *People-attribute-based*: e.g., icons of people with attributes.

Possible Improvements
• Go beyond binary classifiers for attributes; regressors are good for quantitative attributes like "age".
• Use feature selection when training the simile classifiers.
• Automatic (dynamic) selection of attributes.
• Image search:
  - The product of probabilities cannot prevent outlier scores; perhaps a TF-IDF approach for ranking the "word frequencies"?
  - Combine identity information? Combine location and size, or even sketches?

Thank You! Q & A
IDE 2.0: Collective Intelligence in Software Development
Marcel Bruch, Eric Bodden, Martin Monperrus, Mira Mezini

To cite this version: Marcel Bruch, Eric Bodden, Martin Monperrus, Mira Mezini. IDE 2.0: Collective Intelligence in Software Development. Proceedings of the 2010 FSE/SDP Workshop on the Future of Software Engineering Research, 2010, Santa Fe, United States. <10.1145/1882362.1882374>. <hal-01575346>

HAL Id: hal-01575346 (https://hal.archives-ouvertes.fr/hal-01575346), submitted on 18 Aug 2017.

Software Technology Group, Department of Computer Science, Technische Universität Darmstadt, Germany
{bruch,bodden,monperrus,mezini}@cs.tu-darmstadt.de

ABSTRACT
Today's Integrated Development Environments (IDEs) only integrate the tools and knowledge of a single user and workstation. This neglects the fact that the way in which we develop and maintain a piece of software and interact with our IDE provides a rich source of information that can help ourselves and other programmers to avoid mistakes in the future, or otherwise improve productivity. We argue that, in the near future, IDEs will undergo a revolution that will significantly change the way in which we develop and maintain software, through the integration of collective intelligence, the knowledge of the masses. We describe the concept of an IDE based on collective intelligence and discuss three example instantiations of such IDEs.

1. INTRODUCTION
"Under the right circumstances, groups are remarkably intelligent and are often better than the smartest person in them." – James Surowiecki, The Wisdom of Crowds

During the past decades, software systems have grown significantly in size and complexity, making software development and maintenance an extremely challenging endeavor.
Integrated Development Environments (IDEs) greatly facilitate this endeavor by providing a convenient means to browse and manipulate a system's source code and to obtain helpful documentation on Application Programming Interfaces (APIs). Yet, we argue that there is great room for improvement by exploiting collective intelligence, the knowledge of the masses. The leveraging of user data to build intelligent and user-centric web-based systems, commonly summarized as the Web 2.0, is the source of our inspiration. A Web 2.0 site allows its users to interact with each other as contributors to the website's content, in contrast to websites where users are limited to the passive viewing of information that is provided to them. Web 2.0 examples include web-based communities, web applications, social-networking sites, video-sharing sites, wikis, blogs, mashups, and folksonomies. Amazon, for instance, creates recommendations based on the purchase behavior of its customers and finds similar products of interest based on how customers interact with search results. Netflix, a video-on-demand service, features a web application that leverages user ratings on movies to recommend likely interesting movies to other users. These systems have in common that they leverage crowds to continuously improve the quality of their services, through implicit feedback (e.g., user click-through behavior), explicit feedback (e.g., movie ratings), or user-generated content (e.g., product reviews and movie critiques). Today's IDEs behave more like traditional "Web 1.0" applications in that they do not enable their users to contribute and share their knowledge with others, neither explicitly nor implicitly, and thus prevent developers from effectively exchanging knowledge. What would it mean to bring collective intelligence into software development?
Figure 1a shows the current state of the practice: software developers use IDEs that are "integrated" only in the sense that they integrate all tools necessary to browse, manipulate and build software on a single machine. If a programmer has a question about a particular piece of code, for instance an API, she has to browse the web for solutions by hand. After she has found the solution and solved her problem, the newly gained knowledge is usually lost. Figure 1b shows our vision of the near future: IDEs will support developers through integration with a global knowledge base. This knowledge base will receive information from implicit and explicit user feedback. By implicit feedback we mean anonymized usage data that the cross-linked IDEs will send to the knowledge base automatically and spontaneously (in the figure, we represent such spontaneous activity through dashed arrows). The knowledge base will also comprise explicit user feedback in the form of user-written documentation, error reports, manuals, etc. In this work, we will show that such data can help, for example, to improve ranking heuristics or to focus developer activity. Crucially, the knowledge base itself is intelligent: it will use novel data-mining techniques to integrate the different sources of information to produce new information with added value. For instance, if the knowledge base discovers that people who write an `equals` method in Java often write a `hashCode` method on the same type at the same time, or do so after a longer debugging session, then it may be able to discover the important rule that, in Java, every type that implements `equals` should also implement `hashCode`, and that missing this rule likely causes bugs.

The remainder of this paper is organized as follows. In Section 2, we materialize IDE 2.0 by discussing example intelligent IDE services that leverage implicit and explicit user feedback to aid programmers in everyday software-development tasks.
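As a toy illustration of the kind of mining involved, the `equals`/`hashCode` rule could be surfaced by counting method co-occurrences across edit sessions with classic support/confidence association mining. The session data below is invented.

```python
# Mine "developers who write X usually also write Y" from edit sessions
# via pairwise co-occurrence counts (a minimal association-rule sketch).
from itertools import combinations
from collections import Counter

sessions = [                       # invented, anonymized edit sessions
    {"equals", "hashCode", "toString"},
    {"equals", "hashCode"},
    {"equals", "hashCode", "compareTo"},
    {"equals", "toString"},        # the session that likely causes a bug
    {"toString"},
]

pair_counts, item_counts = Counter(), Counter()
for s in sessions:
    item_counts.update(s)
    pair_counts.update(combinations(sorted(s), 2))

def confidence(antecedent, consequent):
    """P(consequent in session | antecedent in session)."""
    pair = tuple(sorted((antecedent, consequent)))
    return pair_counts[pair] / item_counts[antecedent]

# "people who write equals usually also write hashCode": 3 of 4 sessions.
conf = confidence("equals", "hashCode")
```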
We show that not only feedback data itself but in particular derived information, obtained through data mining, has the potential of greatly easing the software-development process as a whole. Moreover, as the data is persisted, it will survive over time, unlike today, where much information gets lost and needs to be re-discovered over and over again. In Section 3, we materialize IDE 2.0 by drawing parallels between the main characteristics of IDE 2.0 and those of Web 2.0. Finally, Section 4 summarizes the paper. 2. FROM IDE 1.0 TOWARDS IDE 2.0 In the following we give three examples of how research in collective intelligence can improve existing IDE services. We split the discussion of each example into three sections. **IDE 1.0** sections describe the state of the art in today’s IDEs. Under **IDE 1.5**, we briefly summarize current research to improve IDE 1.0 services. **IDE 2.0** sections discuss how collective intelligence could solve some of the issues of these approaches. **Intelligent Code Completion** **IDE 1.0**: Code completion is a very popular feature of modern IDEs, one that many developers would find hard to imagine working without. One major reason for its popularity is that developers are frequently unaware of what methods they can invoke on a given variable. Here, code completion systems (CCSs) serve as an API browser, allowing developers to browse methods and select the appropriate one from the list of proposals. However, current completions are either computed by rather simplistic reasoning systems or are simply hard-coded. For instance, for method completion, CCSs only consider the receiver’s declared type. This often leads to an overwhelming number of proposals. Triggering code completion on a variable of type `javax.swing.JButton` results in 381 method proposals. Clearly, developers only need a fraction of the proposed methods to make their code work. Code templates are an example of hard-coded proposals.
Templates (like the Eclipse SWT Code Templates) serve as shortcuts and documentation for developers. Unfortunately, manual proposal definitions are labor intensive and error prone. **IDE 1.5**: Researchers have recognized these issues. For instance, approaches exist that analyze client code to learn which methods the clients frequently use in certain contexts, and rearrange method proposals according to this notion of relevance [4, 8, 14]. Tools like XSnippet, Prospector and Parseweb [13, 16, 18] attempt to solve the issue of hard-coded code templates by also analyzing source code, identifying common patterns in code. Although obviously useful, these systems have not made it into current IDEs. We argue that the primary reason for this is the lack of a continuously growing knowledge base. To build reliable models, source-code based approaches require example applications and full knowledge about the execution environment (i.e., classpath, library versions, etc.). However, finding a sufficiently large set of example projects is difficult and tedious, and creating models for new frameworks is still too time-consuming. While such approaches can sufficiently support a few selected APIs, we argue that they do not scale when tens of thousands of APIs should be supported. **IDE 2.0**: So, how can we build continuously improving code completion systems? To solve the scalability problem, code completion systems must allow users to share usage information among each other in an anonymized and automated way—from within the developer’s IDE. This continuous data sharing allows recommender systems to learn models for every API that developers actually use. IDEs are very powerful when it comes to extracting information: they have access to information about the execution environment and about user interactions, even with respect to certain APIs. But the new, massive data sets derived from this information pose a challenge.
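As an illustration of the kind of model such shared usage data enables, consider the following hedged sketch of frequency-based ranking of completion proposals. The receiver type, method names, and counts are invented; a real recommender would learn them from crowd-shared IDE data.

```python
# Hedged sketch of usage-frequency-based completion ranking.
# All counts below are invented for illustration.
from collections import Counter

# (receiver type, method) -> how often the crowd called it
usage = Counter({
    ("JButton", "addActionListener"): 950,
    ("JButton", "setText"): 720,
    ("JButton", "setEnabled"): 400,
    ("JButton", "paintImmediately"): 3,
})

def rank_proposals(receiver, proposals):
    """Order completion proposals by observed call frequency,
    most frequently used method first."""
    return sorted(proposals,
                  key=lambda method: usage.get((receiver, method), 0),
                  reverse=True)

ranked = rank_proposals(
    "JButton",
    ["paintImmediately", "setText", "addActionListener", "setEnabled"],
)
# ranked[0] is "addActionListener", the method the crowd uses most;
# rarely used proposals sink to the bottom instead of cluttering the list.
```

The same counting scheme scales to any API the crowd actually uses, which is precisely the point made above: the model grows with the user base instead of requiring a curated corpus of example projects.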
We will likely require new algorithms to find reliable and valuable patterns in this data. Whatever means future code completion systems will use to build better recommendation models, the systems will be based on shared data. It will be the users who provide this data, and it is important to realize that, as the user base grows, the recommendation systems will be able to continuously improve over time, making intelligent completions that are useful for novice developers and experts alike. **Example & Code-Snippet Recommendations** **IDE 1.0:** Source-code examples appear to be highly useful to developers whenever the documentation of the API at hand is insufficient [15]. This is evidenced by the rise of several code search engines (CSEs) over the last few years, like Google Codesearch, Krugle, and Koders, just to name a few. However, current CSEs almost exclusively use standard information-retrieval techniques that were developed for text documents. While source code is text, it also bears important inherent structure. Disregarding this structure leads to less effective rankings and misleading code summaries. **IDE 1.5:** Researchers have presented a number of approaches [1, 5, 6, 10, 11, 17, 19] that improve certain aspects of CSEs. All these approaches exploit structure, like inheritance relations, method calls, type usages, control flow and more; however, they face two severe problems. First, source code provides much more structure than text. Thus, ranking systems have to take into account many more features when building the final ranking for a search query. Consequently, it is hard to derive optimal weights for these features so that the resulting scoring function performs as well as possible. Often, a fixed scoring system will perform "well enough" but not be optimal. Another issue with current CSEs is that they ignore the personal experience of the user who issued the query.
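One simple way to attack the feature-weight problem just described is a pairwise update rule driven by implicit feedback. The following is a hedged sketch with invented feature vectors and an invented learning rate, loosely in the spirit of learning from result-inspection behavior; it is not the method of any cited work.

```python
# Hedged sketch of learning ranking-feature weights from implicit
# feedback: when a user inspects a result that was ranked below one
# she skipped, nudge the weights toward the inspected result.
# Feature vectors and the learning rate are invented for illustration.

def score(weights, features):
    """Linear scoring function over ranking features."""
    return sum(w * f for w, f in zip(weights, features))

def pairwise_update(weights, clicked, skipped, lr=0.1):
    """Move weights toward the clicked result's features and away
    from the skipped, higher-ranked result's features."""
    return [w + lr * (c - s) for w, c, s in zip(weights, clicked, skipped)]

weights = [1.0, 0.0]   # initially only a text-match feature counts
clicked = [0.2, 0.9]   # result the user actually inspected
skipped = [0.8, 0.1]   # higher-ranked result the user skipped

# Replay the observed preference until the ranking respects it.
for _ in range(20):
    if score(weights, clicked) <= score(weights, skipped):
        weights = pairwise_update(weights, clicked, skipped)

# The structural-match feature has gained weight, so the inspected
# result now outranks the one the user skipped.
assert score(weights, clicked) > score(weights, skipped)
```

The design point is that no one hand-tunes the weights: each observed inspect-versus-skip pair supplies a training constraint, and aggregating such constraints over many users yields the optimized ranking discussed next.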
Many current web search engines support "personalized search", which leverages the personal background and interests of a user to find documents that are likely to be interesting for this user, but not necessarily for others. Current CSEs lack such functionality. **IDE 2.0:** How can one improve ranking and realize personalized search in CSEs? The key to solving both problems is to leverage implicit user feedback. To solve the manual-weight-tweaking problem of search engines, recent work [7] has shown that leveraging observations of how users interact with search results can significantly improve the precision of existing search engines. The authors used information about whether or not the user inspects a search result to automatically adjust feature weights. This produces an optimized ranking where all inspected results are listed above those that the user did not investigate. To implement personalized code search engines, one can infer the personal background (or experience) of a developer from the code she has already written. Then, CSEs could first display code examples that are similar to examples previously explored or, on demand, code examples that allow the developer to learn new information. We are certain that IDE services in general, not only those that we discussed, can greatly benefit from leveraging implicit user feedback. **Extended Documentation** **IDE 1.0:** Software engineers widely accept that documenting software is a tedious job. Especially open-source projects frequently lack sufficient resources to produce comprehensive documentation. Both Sun and the Eclipse Foundation recently started to address this problem by opening their documentation platforms to their users. Eclipse asks its users to provide and update tutorials at the central Eclipse Wiki. Sun’s “Docweb” allows users to edit Javadoc API documentation, and to provide code examples or cross references to other interesting articles on the web.
These tools aim to leverage a Wikipedia-style approach tailored to software documentation. Past experience has shown, however, that such systems often suffer from a lack of user participation. We believe that the primary cause for this lack of participation is that people may not be willing to document APIs over which they have no control, because these APIs may change rapidly at any time: they may be completely outdated in just a few months. **IDE 1.5:** Recent research therefore addresses the problem from another angle, enriching existing documentation with automatically mined documentation [3, 9, 12]. Such approaches identify frequent patterns or interesting relations in code, and generate helpful guidelines from these relations. However, generated documentation may not always be helpful. Like text mining, documentation mining uncovers any relation between code elements, no matter whether or not this relation is useful to consider. The problem is aggravated by the fact that it is sometimes the surprising relations that are the most useful. Another drawback of mining approaches is that they cannot provide rationales for their observations, leaving it up to the developer to make sense of the data. **IDE 2.0:** How could collective intelligence address the issues mentioned above? The key to a solution is a mixture of explicit user feedback and user-provided content. In the future, we expect generated documentation to be judged by thousands of users, enabling tool developers and documentation providers alike to evaluate the quality of their services immediately. Furthermore, we expect collective intelligence to enable us to migrate documentation from older to newer versions more easily. For example, when a new version of an API becomes available, explicit user feedback will make apparent which parts of the documentation remain valid for the newer version and which parts require updating.
Explicit user feedback will also allow users to attach rationales to mined documentation, allowing the documentation to state not only that users must follow a certain principle but why. These examples are just the tip of the iceberg. We are confident that the software engineering research community will invent many more interesting techniques to generate, judge, and complete documentation. 3. FROM WEB 2.0 TO IDE 2.0 We have used the analogy to “Web 2.0” to indicate that this new generation of web applications and our view of future IDEs have something in common. In the following, we discuss the similarities between Web 2.0 and IDE 2.0 to make this analogy more concrete. We define a set of principles that we expect successful IDE 2.0 services to follow. Some of the concepts are paraphrased from Tim O’Reilly’s principles for successful Web 2.0 applications, described in his article “What is Web 2.0?”. 1. The Web as Platform. The web as platform is the core concept of Web 2.0. In various ways, clients and servers share data over the web. We expect the same to hold for future collaborative IDE 2.0 services. These services rely on client-side usage data and thus, the web is also fundamental to them. A notable difference between IDE 2.0 and Web 2.0 is that IDEs offer a much larger spectrum of data and also allow for client-side pre-processing of data, such as static code analysis. Such pre-processing may even be crucial to ensure proper privacy. Furthermore, one needs to distribute to clients the recommendation models that are built on the server side. Local databases or caches can increase the scalability of these systems, which is crucial when dealing with millions of requests per day. Whatever the particular technology may be, the web will be the platform for IDE 2.0. 2. Data is key. Data is key to any IDE 2.0 service. However, here we fundamentally differ from Tim O’Reilly’s understanding of who owns this data.
In Web 2.0, data is the key factor for the success of an application over its competitors. In contrast, we strongly believe in Open Data: all collected data is publicly available. This fosters a vital ecosystem around the concepts of IDE 2.0 and enables sustainable research. Successful IDE 2.0 services will use both raw data and derived knowledge and will facilitate innovation instead of locking in data or users. 3. Harnessing Collective Intelligence. Leveraging the wisdom of the crowds is the third fundamental concept of successful Web 2.0 applications—and the same holds for IDE 2.0. The examples introduced in the previous section used either user-provided content (like source code, updated documentation or code snippets), implicit feedback (like user click-through data used to improve rankings), or explicit feedback (like ratings for judging the quality or relevance of generated documentation) to build new kinds of services. It is important to recognize that, while individuals may be able to build these services, these services cannot unleash their potential without the crowds sharing their knowledge. Only with collective intelligence do IDE services like intelligent code completion, example recommenders or even smart documentation systems become possible. Yochai Benkler’s work about commons-based peer production [2] gives interesting insights into what motivates individuals to contribute to projects like IDE 2.0. 4. Rich User Experiences. The appearance of AJAX gave web applications a new look and feel, bringing web applications much closer to desktop applications than ever before. In the context of IDE 2.0, intelligent, context-sensitive recommender systems will evolve that recommend relevant APIs or documentation where appropriate and help to reduce the clutter in IDEs at the same time. However, providing rich user experiences is fundamental for users to accept such services.
Similar to Google Search, simple and intuitive interfaces seamlessly integrated into existing IDE concepts like code completion, quick fixes, etc. are the major key to success. 5. Lightweight Programming Models. In Web 2.0, mashups (applications that combine several other (web) applications to build new services on top of existing ones) evolved, building new services that the original application developers never considered. Excellent IDE 2.0 services will encourage others to build their services on top of existing ones by providing public and easy-to-use APIs. Clearly, in the early days we expect such services to be data-driven, i.e., they will leverage the same data to enhance several aspects of current IDEs or to port existing services to other IDEs. Note that Open Data is necessary to enable such services. However, over time, services will use other services to build what we call IDE mashups. 4. SUMMARY The concepts behind Web 2.0 are a great fit for future IDE services, and we expect future services to exhibit at least one, if not almost all, of these properties. However, the Software Engineering research community has to play a key role in unleashing the full power of the crowds. First, and most importantly, it has to provide an appropriate environment for building and evaluating IDE 2.0 services. Strong partners like the Eclipse Foundation or Sun/Oracle already support and promote such new IDE concepts today, and their help will be crucial to providing access to large user communities in the future. But there is an incentive for these partners: they will profit from exciting new features, making the IDE itself appear very innovative. Second, the Software Engineering research community is the connective link between practitioners and researchers in machine learning. Most IDEs only contain instances of rather primitive machine-learning algorithms.
It will be our job to identify the problems that developers face in their day-to-day work, to provide appropriate data as input for machine learners, and to evaluate and reintegrate the results into IDEs. Thus, IDE 2.0 research will create fascinating new and challenging applications of machine learning alongside the current markets. To sum up, IDE 2.0 services have much potential to improve developer productivity and provide a fantastic playground for new algorithms. They bring together several research communities at the same time, to solve a new generation of challenges in software engineering. If the problem is tackled now and in a farsighted manner, IDE 2.0 will be one of the major research areas of the near future. 5. REFERENCES
Package ‘magrittr’ October 13, 2022 Type Package Title A Forward-Pipe Operator for R Version 2.0.3 Description Provides a mechanism for chaining commands with a new forward-pipe operator, %>%. This operator will forward a value, or the result of an expression, into the next function call/expression. There is flexible support for the type of right-hand side expressions. For more information, see package vignette. To quote Rene Magritte, "Ceci n'est pas un pipe." License MIT + file LICENSE URL https://magrittr.tidyverse.org, https://github.com/tidyverse/magrittr BugReports https://github.com/tidyverse/magrittr/issues Depends R (>= 3.4.0) Suggests covr, knitr, rlang, rmarkdown, testthat VignetteBuilder knitr ByteCompile Yes Config/Needs/website tidyverse/tidytemplate Encoding UTF-8 RoxygenNote 7.1.2 NeedsCompilation yes Author Stefan Milton Bache [aut, cph] (Original author and creator of magrittr), Hadley Wickham [aut], Lionel Henry [cre], RStudio [cph, fnd] Maintainer Lionel Henry <lionel@rstudio.com> Repository CRAN Date/Publication 2022-03-30 07:30:09 UTC **debug_fseq** **Description** This is a utility function for marking functions in a functional sequence for debugging. **Usage** ```r debug_fseq(fseq, ...) ``` ```r undebug_fseq(fseq) ``` **Arguments** - `fseq` a functional sequence. - `...` indices of functions to debug. **Value** `invisible(NULL)`. debug_pipe Description This function is a wrapper around browser, which makes it easier to debug at certain places in a magrittr pipe chain. Usage debug_pipe(x) Arguments x a value Value x extract Aliases Description magrittr provides a series of aliases which can be more pleasant to use when composing chains using the %>% operator.
Details

Currently implemented aliases are:

- `extract` `[`
- `extract2` `[[`
- `inset` `[<-`
- `inset2` `[[<-`
- `use_series` `$`
- `add` `+`
- `subtract` `-`
- `multiply_by` `*`
- `raise_to_power` `^`
- `multiply_by_matrix` `%*%`
- `divide_by` `/`
- `divide_by_int` `%/%`
- `mod` `%%`
- `is_in` `%in%`

Examples

```r
iris %>%
  extract(, 1:4) %>%
  head

good.times <- Sys.Date() %>%
  as.POSIXct %>%
  seq(by = "15 mins", length.out = 100) %>%
  data.frame(timestamp = .)

good.times$quarter <- good.times %>%
  use_series(timestamp) %>%
  format("%M") %>%
  as.numeric %>%
  divide_by_int(15) %>%
  add(1)
```

FAQ: What is the gender of the pipe? Description In Magritte’s original quote “Ceci n’est pas une pipe,” the word "pipe" is feminine. However the magrittr package quotes it as “Ceci n’est pas un pipe,” with a masculine "pipe." This lighthearted misappropriation is intentional. Whereas the object represented in Magritte’s painting (a pipe that you can smoke) is feminine in the French language, a computer pipe (which is an Anglicism in French) is masculine. freduce **Apply a list of functions sequentially** **Description** This function applies the first function to `value`, then the next function to the result of the previous function call, etc. **Usage** `freduce(value, function_list)` **Arguments** - `value` initial value. - `function_list` a list of functions. **Value** The result after applying each function in turn. --- **functions** *Extract the function list from a functional sequence.* **Description** This can be used to extract the list of functions inside a functional sequence created with a chain like `. %>% foo %>% bar`. **Usage** `functions(fseq)` **Arguments** - `fseq` A functional sequence ala magrittr. **Value** A list of functions **Description** Whereas `%>%` is lazy and only evaluates the piped expressions when needed, `%!>%` is eager and evaluates the piped input at each step. This produces more intuitive behaviour when functions are called for their side effects, such as displaying a message.
Note that you can also solve this by making your function strict. Call `force()` on the first argument in your function to force sequential evaluation, even with the lazy `%>%` pipe. See the examples section. **Usage** ```r lhs %!>% rhs ``` **Arguments** - `lhs` A value or the magrittr placeholder. - `rhs` A function call using the magrittr semantics. **Examples**

```r
f <- function(x) {
  message("foo")
  x
}
g <- function(x) {
  message("bar")
  x
}
h <- function(x) {
  message("baz")
  invisible(x)
}

# The following lazy pipe sequence is equivalent to `h(g(f()))`.
# Given R's lazy evaluation behaviour, `f()` and `g()` are lazily
# evaluated when `h()` is already running. This causes the messages
# to appear in reverse order:
NULL %>% f() %>% g() %>% h()

# Use the eager pipe to fix this:
NULL %!>% f() %!>% g() %!>% h()

# Or fix this by calling `force()` on the function arguments
f <- function(x) {
  force(x)
  message("foo")
  x
}
g <- function(x) {
  force(x)
  message("bar")
  x
}
h <- function(x) {
  force(x)
  message("baz")
  invisible(x)
}

# With strict functions, the arguments are evaluated sequentially
NULL %>% f() %>% g() %>% h()

# Instead of forcing, you can also check the type of your functions.
# Type-checking also has the effect of making your function strict.
```

---

**print.fseq** *Print method for functional sequence.*

### Description

Print method for functional sequence.

### Usage

```r
## S3 method for class 'fseq'
print(x, ...)
```

### Arguments

- `x` A functional sequence object
- `...` not used.

### Value

`x` [[.fseq Extract function(s) from a functional sequence. Description Functional sequences can be subset using single or double brackets. A single-bracket subset results in a new functional sequence, and a double-bracket subset results in a single function.
Usage

```r
## S3 method for class 'fseq'
x[...]

## S3 method for class 'fseq'
x[[...]]
```

Arguments

- `x`: A functional sequence
- `...`: index/indices. For double brackets, the index must be of length 1.

Value

A function or functional sequence.

%<>% Assignment pipe

Description

Pipe an object forward into a function or call expression and update the `lhs` object with the resulting value.

Usage

`lhs %<>% rhs`

Arguments

- `lhs`: An object which serves both as the initial value and as target.
- `rhs`: a function call using the magrittr semantics.

Details

The assignment pipe, `%<>%`, is used to update a value by first piping it into one or more `rhs` expressions, and then assigning the result. For example, `some_object %<>% foo %>% bar` is equivalent to `some_object <- some_object %>% foo %>% bar`. It must be the first pipe-operator in a chain, but otherwise it works like `%>%`.

See Also

`%>%`, `%T>%`, `%$%`

Examples

```r
iris$Sepal.Length %<>% sqrt

x <- rnorm(100)
x %<>% abs %>% sort

is_weekend <- function(day) {
  # day could be e.g. a character in a valid date representation
  day %<>% as.Date
  result <- day %>% format("%u") %>% as.numeric %>% is_greater_than(5)
  if (result)
    message(day %>% paste("is a weekend!"))
  else
    message(day %>% paste("is not a weekend!"))
  invisible(result)
}
```

---

%$% Exposition pipe

Description

Expose the names in `lhs` to the `rhs` expression. This is useful when functions do not have a built-in data argument.

Usage

```r
lhs %$% rhs
```

Arguments

- **lhs**: A list, environment, or a data.frame.
- **rhs**: An expression where the names in `lhs` are available.

Details

Some functions, e.g. `lm` and `aggregate`, have a data argument, which allows the direct use of names inside the data as part of the call. This operator exposes the contents of the left-hand side object to the expression on the right to give a similar benefit, see the examples.
See Also

`%>%`, `%<>%`, `%T>%`

Examples

```r
iris %>%
  subset(Sepal.Length > mean(Sepal.Length)) %$%
  cor(Sepal.Length, Sepal.Width)

data.frame(z = rnorm(100)) %$%
  ts.plot(z)
```

---

%>% Pipe

Description

Pipe an object forward into a function or call expression.

Usage

```r
lhs %>% rhs
```

Arguments

- **lhs**: A value or the magrittr placeholder.
- **rhs**: A function call using the magrittr semantics.

Details

**Using %>% with unary function calls:** When functions require only one argument, `x %>% f` is equivalent to `f(x)` (not exactly equivalent; see the technical note below).

**Placing lhs as the first argument in rhs call:** The default behavior of `%>%` when multiple arguments are required in the `rhs` call is to place `lhs` as the first argument, i.e. `x %>% f(y)` is equivalent to `f(x, y)`.

**Placing lhs elsewhere in rhs call:** Often you will want `lhs` to appear in the `rhs` call at a position other than the first. For this purpose you can use the dot (`.`) as placeholder. For example, `y %>% f(x, .)` is equivalent to `f(x, y)` and `z %>% f(x, y, arg = .)` is equivalent to `f(x, y, arg = z)`.

**Using the dot for secondary purposes:** Often, some attribute or property of `lhs` is desired in the `rhs` call in addition to the value of `lhs` itself, e.g. the number of rows or columns. It is perfectly valid to use the dot placeholder several times in the `rhs` call, but by design the behavior is slightly different when using it inside nested function calls. In particular, if the placeholder is only used in a nested function call, `lhs` will also be placed as the first argument! The reason for this is that in most use-cases this produces the most readable code.
For example, `iris %>% subset(1:nrow(.) %% 2 == 0)` is equivalent to `iris %>% subset(., 1:nrow(.) %% 2 == 0)` but slightly more compact. It is possible to overrule this behavior by enclosing the `rhs` in braces. For example, `1:10 %>% {c(min(.), max(.))}` is equivalent to `c(min(1:10), max(1:10))`.

**Using %>% with call- or function-producing rhs:** It is possible to force evaluation of `rhs` before the piping of `lhs` takes place. This is useful when `rhs` produces the relevant call or function. To evaluate `rhs` first, enclose it in parentheses, i.e. `a %>% (function(x) x^2)`, and `1:10 %>% (call("sum"))`. Another example where this is relevant is for reference class methods, which are accessed using the `$` operator, where one would do `x %>% (rc$f)`, and not `x %>% rc$f`.

**Using lambda expressions with %>%:** Each `rhs` is essentially a one-expression body of a unary function. Therefore defining lambdas in magrittr is very natural, and like the definitions of regular functions: if more than a single expression is needed, one encloses the body in a pair of braces, `{ rhs }`. However, note that within braces there is no "first-argument rule": it will be exactly like writing a unary function where the argument name is `.` (the dot).

**Using the dot placeholder as lhs:** When the dot is used as `lhs`, the result will be a functional sequence, i.e. a function which applies the entire chain of right-hand sides in turn to its input. See the examples.
Technical notes

The magrittr pipe operators use non-standard evaluation. They capture their inputs and examine them to figure out how to proceed. First a function is produced from all of the individual right-hand side expressions, and then the result is obtained by applying this function to the left-hand side. For most purposes, one can disregard the subtle aspects of magrittr's evaluation, but some functions may capture their calling environment, and thus using the operators will not be exactly equivalent to the "standard call" without pipe-operators. Another note is that special attention is advised when using non-magrittr operators in a pipe-chain (`+`, `-`, `$`, etc.), as operator precedence will impact how the chain is evaluated. In general it is advised to use the aliases provided by magrittr.

See Also

`%<>%`, `%T>%`, `%$%`

Examples

```r
# Basic use:
iris %>% head

# Use with lhs as first argument
iris %>% head(10)

# Using the dot place-holder
"Ceci n'est pas une pipe" %>% gsub("une", "un", .)

# When dot is nested, lhs is still placed first:
sample(1:10) %>% paste0(LETTERS[.])

# This can be avoided:
rnorm(100) %>% {c(min(.), mean(.), max(.))} %>% floor

# Lambda expressions:
iris %>%
{
  size <- sample(1:10, size = 1)
  rbind(head(.[, size]), tail(.[, size]))
}

# renaming in lambdas:
iris %>%
{
  my_data <- .
  size <- sample(1:10, size = 1)
  rbind(head(my_data[, size]), tail(my_data[, size]))
}

# Building unary functions with %>%
trig_fest <- . %>% tan %>% cos %>% sin

1:10 %>% trig_fest
trig_fest(1:10)
```

---

%T>% Tee pipe

---

**Description** Pipe a value forward into a function- or call expression and return the original value instead of the result. This is useful when an expression is used for its side-effect, say plotting or printing. **Usage** lhs %T>% rhs **Arguments** - **lhs**: A value or the magrittr placeholder. - **rhs**: A function call using the magrittr semantics.
Details

The tee pipe works like `%>%`, except the return value is `lhs` itself, and not the result of the `rhs` function/expression.

See Also

`%>%`, `%<>%`, `%$%`

Examples

```r
rnorm(200) %>%
  matrix(ncol = 2) %T>%
  plot %>% # plot usually does not return anything.
  colSums
```

Index

[.fseq ([[.fseq), 8
[[.fseq, 8
%!>% (pipe-eager), 6
%<>%, 8, 10, 11, 13
%>%, 9, 10, 13
%T>%, 9–11, 12
%$%, 9, 11, 13
add (extract), 3
and (extract), 3
debug_fseq, 2
debug_pipe, 3
divide_by (extract), 3
divide_by_int (extract), 3
equals (extract), 3
extract, 3
extract2 (extract), 3
faq-pipe-gender, 4
freduce, 5
functions, 5
inset (extract), 3
inset2 (extract), 3
is_greater_than (extract), 3
is_in (extract), 3
is_less_than (extract), 3
is_weakly_greater_than (extract), 3
is_weakly_less_than (extract), 3
mod (extract), 3
multiply_by (extract), 3
multiply_by_matrix (extract), 3
n'est pas (extract), 3
not (extract), 3
or (extract), 3
pipe-eager, 6
print.fseq, 7
raise_to_power (extract), 3
set_attr (extract), 3
set_attributes (extract), 3
set_class (extract), 3
set_colnames (extract), 3
set_names (extract), 3
set_rownames (extract), 3
subtract (extract), 3
undebug_fseq (debug_fseq), 2
use_series (extract), 3
Photi—A Fisheye View of Bubbles

J. I. Churcher
TR-COSC-15/93

The contents of this work reflect the views of the author(s) who is/are responsible for the facts and accuracy of the data presented. Responsibility for the application of the material to specific cases, however, lies with any user of the report and no responsibility in such cases will be attributed to the author(s) or to the University of Canterbury.

Have a nice day.

Photi—A Fish-eye view of Bubbles

Neville Churcher*

Fisheye views have recently been employed in a number of application domains to enhance user benefits from graphically presented information. These techniques employ transformations which emphasize regions of interest while suppressing, but retaining, global detail. Applications to date have concentrated on the production of aesthetically pleasing views of essentially static objects such as digraphs. In this paper I investigate the extension of the concepts to interactive applications. The implementation of Photi, a diagramming tool for Smith's database schema design technique, is used as an example.

INTRODUCTION

The management of complexity remains a major issue in many areas of computer science. Examples include software engineering, database design and network management. It has long been recognised that diagrammatic techniques are an excellent means of highlighting important concepts and connections while suppressing details which are not relevant in the current context. The structured programming revolution of the 1970s, followed by the development of information engineering in the 1980s saw the development of techniques for representing concepts such as software architecture, control flow, data flow and data structure.\(^1\) Computerised versions of many techniques emerged as CASE products became widespread. The use of multiple, independently scrollable, windows enabled "wall charts" to be replaced by system views more closely related to the task at hand.
However, in many cases, the essential problems of complexity management have not been solved. In order to be truly useful, techniques must be capable of scaling up to "real" problems—sadly, many candidates are limited to text book examples. The key to effective scaling is often an abstraction concept, such as "levelling" for DFDs, which allows fine detail to be suppressed thereby allowing the user to concentrate on features which are particularly "relevant" at the current level. One major advantage that diagramming tools have over their paper-based counterparts is the ability to recover such detail as required by "exploding" diagram components either in situ or in a separate window. However, there are many diagramming techniques which apparently do not include natural analogues of the levelling process.

The fisheye views proposed by Furnas\(^2\) represented an attempt to develop universally applicable concepts to help HCI workers determine how best to present information in order to combine an accurate and detailed representation of the point of interest (focus) and its "neighbourhood" while retaining important "landmarks" in the remainder. Further, it is necessary to be able to present different views as the focus of interest changes. These concepts are not limited to graphical applications. Furnas presents an example showing how the fisheye view concept might be applied to a structure-sensitive editor.

*Department of Computer Science, University of Canterbury, Private Bag 8004, Christchurch, New Zealand

A number of applications of fisheye views have been made. Examples include the 3D visualisation of a file system, the browsing of graphs and the visualisation of "linear" information structures and of maintenance data. The applications cited use fisheye views as a presentation technique for delivering views of an essentially static system.
The motivation for this paper was to investigate the suitability of fisheye techniques in interactive environments where the system, as well as the view, is subject to change. The dynamic nature of diagramming applications introduces a number of additional issues which must be addressed if fisheye views are to become generally applicable. Diagrammatic techniques have much in common and are generally based on the graph structures discussed in reference 4. Diagrams contain components (nodes), and connections (edges), each of which is represented by symbols and lines of the various types appropriate to each technique. Smith's method employs nodes which are "bubbles" represented by labelled ellipses, which may be nested or grouped, and connections represented by two types of arrows. This technique, which is unwieldy in its manual (i.e. pencil and paper) form, has been chosen to illustrate the significant advantages offered by applications using fisheye techniques. Smith's method is further described in the next section and the following section gives a brief summary of fisheye techniques. Next is a description of Photi, the Tcl/Tk application I have developed to investigate the application of fisheye concepts to a specific diagrammatic technique. This is followed by a discussion of some of the issues arising in the application of fisheye concepts to diagramming. Finally, some conclusions and directions for future work are presented.

SMITH'S METHOD

Smith's method has been selected as a case study for the application of fisheye concepts to diagramming techniques. However, the results are applicable to a wide range of techniques in systems analysis and other areas. Diagrams are typically constructed from a similar set of components—symbols, connections and labels—although properties such as symbol shape, component colour, line and text style carry technique-specific information.
The method is a semi-formal synthesis procedure for constructing a database schema from a set of elementary facts. Although it has its weaknesses, Smith’s method is potentially suitable for small to medium sized problems and, particularly with software assistance, is capable of delivering good results while requiring little theoretical knowledge. Effective use of other techniques requires a greater formal background in areas such as data dependencies. The method consists of three steps. First a dependency list is constructed to record the elementary facts. These include attribute names, single-valued dependencies (SVDs), multi-valued dependencies (MVDs) and composite attributes. This information is then collated in the form of a bubble diagram. Figure 1 shows the university example described in reference 7. Individual attributes are denoted by labelled bubbles and groups of attributes (composites) are enclosed by unlabelled bubbles. SVDs are represented by single-headed arrows, MVDs by double-headed arrows. The additional levels of bubbling around some attributes represents the independence of dependencies involving that attribute. For example, the double bubble around attribute Student corresponds to the two dependencies each Student has a given Major and Year and Each Student in a given Class and Section has several Exam_Scores. Finally, a set of procedures is applied to derive the appropriate relation schemas. Figure 2 shows some of the relations corresponding to the diagram of figure 1. Prime attributes appear raised and non-prime attributes appear sunken for each relation. Only a handful of references to Smith’s method are to be found in the literature. The method is sensitive to the choice of dependency list and, without computer assistance, becomes unmanageable for all but the most trivial systems. Constructs such as N:M or ternary relationships may not have unique or obvious representations in terms of textual descriptions of SVDs and MVDs. 
As has been argued elsewhere, it is better for the user to work at the level of the corresponding diagram and for the dependency lists to be maintained by software.

Figure 1: A dependency diagram

Figure 2: Relations for university example

FISHEYE VIEWS

Information overload is a potentially serious problem in most human-computer interactions. Computer-based information presentation techniques have many advantages—such as scrolling and multiple windows—over their paper-based counterparts. However, these alone have not proved sufficient to provide the power and flexibility required in order to visualise complex information effectively. The fisheye view approach\(^2\) is based on the view through a fisheye lens: the entire "world" remains visible; points near the focus are visible in great detail; the level of detail decreases with distance from the focus; some distortion is apparent, particularly at the periphery of the view. A further refinement combines these ideas with those used in the production of maps for tourists: the sizes of the symbols used for major landmarks are larger than those for comparable, but less "interesting", features and much detail is omitted for clarity. A key concept is the degree of interest function (DOI) which includes contributions from the a priori importance (API) of each component as well as contributions depending on its distance from the focus. Fisheye views are then constructed by displaying the components in decreasing DOI order until either all components are shown or a cutoff value is reached. The exact nature of the various functions involved depends on the application domain—a DOI suitable for binary trees will not be suitable for program listings or maps—and the success or otherwise of the application depends critically on an appropriate choice being made. We consider a rectangular region containing a focal point \((x_f, y_f)\).
It is convenient to work with normalised coordinates \((\hat{x}, \hat{y})\) whose magnitudes range from 0 at the focus to 1 at the boundary. Spatial transformations of the form

\[ \hat{x}' \equiv G(\hat{x}) \] (1)

are then used to construct the fisheye view. The first derivative \(G'(\hat{x})\) gives the magnification, or scale factor. The approach used by Sarkar and Brown\(^4\) refined Furnas' basic ideas for application to graphs. The spatial transformation used was

\[ G(\hat{x}) \equiv \frac{(d + 1)\hat{x}}{d\hat{x} + 1} \] (2)

where \(d\) is a constant known as the distortion factor. The same transformation is applied independently to the \(\hat{y}\) coordinates. Similar transformations have been developed in other fields.\(^{12}\) For example, the polyfocal transformation\(^{13}\) was derived for cartographic application and uses

\[ G(\hat{x}) \equiv \hat{x} + \frac{A\hat{x}}{1 + C\hat{x}^2} \] (3)

where \(A\) and \(C\) are constants. For convenience, transformation functions are assumed to be continuous, monotonic and invertible over the range of \(\hat{x}\) under consideration.

PHOTI

Photi is an implementation of Smith's method. It was designed as both a platform for experimenting with the application of fisheye transformations and as a teaching tool. Photi's palette contains a typical assortment of tools for (in left to right order in figure 1): moving and resizing bubbles, creating bubbles, creating SVDs and MVDs, deleting objects, grouping bubbles and examining object detail. The rightmost palette tool (represented by the magnifying glass icon) is used to move the focal point—whose position on the drawing area is marked with the same symbol—to a new location. The palette also contains a slider which is used to adjust the value of the distortion factor \(d\) in equation 2. A value of zero corresponds to the undistorted or "flat" case.

Figure 3: Example from reference 7—illustrating distortion and refocussing effects
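The behaviour of the distortion factor can be checked numerically. Below is a minimal Python sketch (function names invented here; Photi itself is written in Tcl/Tk with C extensions) of the Sarkar and Brown transformation of equation 2, its magnification \(G'(\hat{x})\), and its inverse, which is what recovering "flat" coordinates from a distorted view requires:

```python
def fisheye(x_hat, d):
    """Equation 2: G(x) = (d + 1) x / (d x + 1) on normalised coordinates in [0, 1]."""
    return (d + 1) * x_hat / (d * x_hat + 1)

def magnification(x_hat, d):
    """First derivative G'(x) = (d + 1) / (d x + 1)^2: the local scale factor."""
    return (d + 1) / (d * x_hat + 1) ** 2

def flat_coordinate(x_prime, d):
    """Inverse of G: x = x' / ((d + 1) - d x'), recovering the undistorted coordinate."""
    return x_prime / ((d + 1) - d * x_prime)

# d = 0 is the undistorted ("flat") case; the focus (0) and the boundary (1) are fixed points.
assert fisheye(0.5, 0) == 0.5
assert fisheye(0.0, 3) == 0.0 and fisheye(1.0, 3) == 1.0
# Magnification is d + 1 at the focus and 1 / (d + 1) at the boundary.
assert magnification(0.0, 3) == 4.0 and magnification(1.0, 3) == 0.25
# Round trip: a point placed in a distorted view maps back to a unique flat position.
assert abs(flat_coordinate(fisheye(0.3, 4), 4) - 0.3) < 1e-12
```

The assertions mirror the properties assumed in the text: the transformation is monotonic and invertible on \([0, 1]\), magnifies near the focus while shrinking the periphery, and reduces to the identity when \(d = 0\).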
Adjustment of \(d\) or the position of the focal point triggers transformations which move and scale all objects according to the new focal point, using the current distortion value. Figure 3 illustrates a number of Photi’s capabilities using an example diagram from Smith’s paper.\(^7\) The undistorted view is shown in figure 3(a) with the focal point located near the bubble labelled B. Leaving the focus unchanged while increasing the distortion produces the view of figure 3(b). Figures 3(c) and 3(d) show the effect of a zoom operation centred on the upper left corner of the popup menu. The bubble group containing L and M has been expanded while retaining the emphasis on the region surrounding the focal point. The zooming operations are described in a subsequent section on multiple foci. The effect of moving the focal point, while keeping the distortion constant, is shown in figure 3(e) while figure 3(f) shows the effect of highlighting prime bubbles as described elsewhere in connection with non-geometric transformations. Perhaps the most fundamental of the operations on the diagram layout is the ability to alter the distortion while keeping the focus fixed. This is used to "set the tone" for further viewing. In addition, a number of further operations are activated via the magnifying glass tool: - Reset the focus while keeping the distortion constant (see figures 3(b) and 3(e)), enabling the user to pan across the diagram examining areas of interest. - Perform a "zoom" on the current—possibly already distorted—view, allowing several patches of detail to be examined simultaneously. - Revert to "flat" configuration. This is equivalent to setting \(d\) to zero and discards the effects of prior transformations, including zooming. In practice, these operations have proved sufficiently natural to enable new users to become proficient rapidly. The "flat" coordinates of all new bubbles are stored and are updated appropriately when the bubbles are moved or resized. 
When a bubble is created in a distorted view, the inverse transformation is used to compute the corresponding "flat" coordinates. Component movement introduces a slight complication. One could require that, as a bubble is dragged around using the arrow tool, its size should change as if it had been moved to its present location by a fisheye transformation with the current parameters. Another option would be to delay updating the size until the corresponding mouse-up event. Repositioning a bubble in Photi does not trigger any resizing operations. It was felt unwise to distract the user by displaying a continuously changing component (as well as incurring the overhead of computing the changes!) as the potential gains are small. Attempting to anticipate the user’s reasons for moving a component seems equally unwise. No major functionality is lost since only one bubble is moved at a time and resizing operations are available with the same tool. Photi is implemented using Tcl/Tk.\(^{14}\) Tcl is a powerful and flexible interpreted language while Tk provides a widget set for GUI development. If desired, the interpreter can be augmented with user procedures implemented in C rather than Tcl. In applications such as Photi, this is most appropriate for commands which are called repeatedly in response to mouse motion events. The computation of the crossing points of dependencies and bubbles, which is required frequently when bubbles are moved, has been implemented via a C procedure. Performance is satisfactory (on a Sun ELC a new view resulting from a distortion change is complete in less than 2 seconds for the diagrams shown in this paper) for small to medium sized systems. However, experimentation suggests that significant performance gains are possible as more commands are implemented in C. Keyed lists are used to maintain details of bubbles, dependencies and relations. Diagrams may be saved in ASCII form for subsequent re-use. 
Alternatively, the relations may be exported to an Ingres database—providing an illustration of Tcl's use to "glue" applications together.

DIAGRAMMING APPLICATIONS

The system described by Sarkar and Brown\(^4\) was directed towards providing fisheye views of read-only information. Diagramming tools have some additional requirements. In particular, it is necessary to allow the user to create, modify or delete diagram components independently of the transformations used to produce a particular view. Above all, the user must be presented with a "natural" interface if fisheye techniques are to offer any improvement over current methods.

Layout

Another significant difference concerns the layout of the diagrams. The algorithms commonly used for laying out tree or graph structures were developed for applications such as circuit design and are concerned primarily with the adjacency matrix representation of the system. Thus, they concentrate on constructing a "good" layout in terms of constraints such as minimising connection crossings or total connection length. It is not always straightforward to express concepts such as which components "belong together" in terms of such constraints and it is appropriate to attach a high weighting to the component position suggestions available from the user. These include the positions at which components are created or to which they are explicitly moved. In applications such as Photi, a point-and-click approach is used to create components so some user-specified position preference is available. While some "cleaning-up" may be acceptable—perhaps to snap components to a grid—dramatic changes of component positions are confusing and counterproductive. A single, apparently minor, change—such as deletion of a component—can potentially produce an enormous change in the "best" layout. Fisheye techniques do not suffer from this problem to the same extent.
Users retain the ability to perform localised layout changes in regions of interest while scaled views diminish the adverse impact of features such as line crossings in the remainder of the diagram. Clearly, it is desirable to combine the advantages of constraint based layout algorithms with fisheye views. In practice, this might best be achieved by applying such algorithms to undistorted views and employing transformation functions which preserve desirable layout properties. For example, the transformations used in Photi will not introduce overlaps between components.

Non-geometric transformations

The transformations involved in the generation of a new fisheye view are of two fundamental types. Geometric transformations describe the repositioning of objects. Equations 1-2 give those used in the present work. Other transformations involve the re-computation of size and detail for each object, given its new location. The non-geometric transformations re-size each item according to a scaling function which should satisfy any restrictions appropriate for the diagrams in question, such as the requirement that the scaled components should not overlap. In the case of diagrammatic techniques there may be both continuous and discrete components. Component labelling is an example of the latter category: as the component size decreases (continuously), the size of the font used for labels is reduced to the next available size, typically by multiples of some discrete unit such as point size. Eventually, the component may become "too small" and its label may be omitted. Information about the colour, shape and border thickness of components is also likely to be treated with the use of cut-offs. The computation of visual worth for a component involves factors such as its importance (API), size in "flat" co-ordinates, amount of detail (and its granularity) which can be shown meaningfully (cutoff points) and position (after spatial transformation).
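The continuous-to-discrete step for labels can be sketched as follows. This is a hypothetical illustration: the available point sizes, the cutoff value and the function name are invented here, not taken from Photi:

```python
# Map a continuous scale factor to a discrete label font size, omitting the
# label entirely once the component becomes "too small" (the cutoff).
AVAILABLE_POINTS = [6, 8, 10, 12, 14]   # hypothetical discrete font sizes
CUTOFF_SCALE = 0.3                      # hypothetical: below this, drop the label

def label_points(base_points, scale):
    if scale < CUTOFF_SCALE:
        return None                     # label omitted
    target = base_points * scale
    # Step down to the next available size, but never below the smallest.
    candidates = [p for p in AVAILABLE_POINTS if p <= target]
    return max(candidates) if candidates else AVAILABLE_POINTS[0]

assert label_points(12, 1.0) == 12      # undistorted: keep the base size
assert label_points(12, 0.6) == 6       # 7.2pt rounds down to the next size, 6pt
assert label_points(12, 0.2) is None    # below the cutoff: label suppressed
```

Colour, shape and border thickness would be handled with analogous cutoff rules, as the text notes.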
For Smith's method, it would seem reasonable to base a bubble's importance on its complexity, involving the number of levels, group structure and numbers of SVDs and MVDs. Another scheme might assign higher API values to prime attributes than to non-prime attributes. Photi treats all bubbles as equal. Part of the justification for this is the fact that it is not known at bubble creation time whether a bubble will remain prime, or how many levels and dependencies it will have—thus establishing a true a priori importance is not realistic. A more appropriate technique for diagramming applications would be to provide a posteriori importance facilities. In the case of Smith's method, these might include highlighting prime bubbles or bubble chains corresponding to individual relations. Figure 3(f) shows prime bubbles emphasized by shading only—further emphasis could be added by boosting their sizes.

The specific form presented in reference 4 avoids component overlap, includes contributions from the API and has thresholds for detail and inclusion. However, this introduces additional interface complexity—the user controls the view by adjusting five separate parameters. One major advantage of fisheye views is immediacy—if the user has to spend time "tweaking" the view then it is debatable whether any substantial improvement over the more common approach of scrolling (possibly multiple) windows has been achieved. The computation of visual worth is likely to vary significantly between different diagrammatic techniques and a generally applicable system must be capable of handling a variety of styles. One must also realise that factors contributing to importance may be weighted differently depending on the individual user, task or system being examined. A major advantage of Tcl/Tk is the ease with which the parameters, and even the functional form, of the various transformation functions may be modified.
Photi displays all components, dispensing with cutoffs at the expense of cluttering the perimeter of the diagram with small objects, since the very presence of a dense region may trigger further investigation by the user. Cutoff schemes will depend on technique-specific factors such as symbol type or current role. For example, bubbles which are not determinants and have no levels might be safely suppressed if they become smaller than some cutoff size. One must also consider the possibility of different scaling factors for the x and y directions leading to loss of component shape preservation.

Multiple foci

Applications of the single focus fisheye view indicate that it is well suited to situations where the access to the detail of a component and its near neighbours, in the context of a background of other components, is important. However, the nature of diagrams is such that users frequently wish to examine simultaneously several "patches" of detail. The fisheye counterpart to a split-screen view is the use of multiple foci. The transformations are applied independently, with the visual worth of a component being determined by superimposing their effects. There are some potential drawbacks with this approach. In particular, spurious regions of high magnification may arise through the "interference" of the transformations. This effect depends on the form of the transformation (and hence magnification) function. The polyfocal projection, whose transformation function is that of equation 3, avoids the problem because, unlike equation 2, the corresponding magnification function is non-monotonic. Photi side-steps this inconvenience by providing a "multiple focus substitute" via the popup menu shown in figure 3(c).
The zooming operations leave the focus unchanged but cause a further transformation to take place at half the current distortion—regarding the current, possibly already distorted, view as "effectively flat"—with its apparent focus at the current mouse position. The popup menu is used to distinguish clearly the user action for initiating this command from that for simply changing the focal point. A more sophisticated approach might allow for different distortion parameters for the transformations about each focus. However, the additional demands this places on the user are likely to negate the possible benefits. In practice, the ability to flatten and refocus conveniently, combined with the zooming operations, has proved adequate.

CONCLUSIONS & FUTURE WORK

Smith's method is an example of a technique which, in its manual form, is intractable for all but the simplest systems but which becomes remarkably attractive with appropriate software support. Experiments with Photi indicate that, with appropriate extensions, fisheye methods are indeed applicable to Smith's and other diagramming techniques. The construction or modification of diagrams is essentially a creative task. Serious frustration and potential errors may result from disruption of the user's train of thought due to poor system performance. Consequently, acceptable response times are likely to be shorter than for static applications. Fortunately, most editing operations are incremental and only changes to the distortion or focus require re-computation of the entire view. Implementing more of Photi's capabilities as C commands would improve performance. For very large systems some combination of scrolling and fisheye views may be appropriate. An enhancement planned specifically for Photi, but applicable to other techniques, is the provision of "what-if" scenarios—enabling the user to see the potential consequences of a contemplated change—through the use of multiple drawing windows.
More imaginative possibilities include working in periodic coordinates, perhaps wrapped on a sphere so that the diagram appears as if painted on a track ball. Multiple focus fisheye views have potential applications in groupware systems where several users share a single diagram. The ability to draw attention to regions using relatively localised fisheye transformations offers more scope than the current alternatives such as multiple cursors. Non-geometric transformations involve factors which may be subjective or method dependent. Variations are possible even in "static" systems. For example, the API for the states on the U.S. map display given by Sarkar and Brown might be based on population, contribution to GNP or strength of national guard forces. A minimal "sensible" set of default behaviours should be provided and users must have the ability to customise these. This will be particularly important in the case of fisheye presentation management environments for CASE or IPSE applications where a variety of techniques must be handled in a consistent manner. Further investigation of the distinction between a priori and a posteriori importance contributions is also required.

Tcl is a suitable language for prototyping and implementation. Its advantages include ease of integration with other applications (the Ingres DBMS in Photi's case) and the ability to allow arbitrarily complex interaction with the user via the direct or indirect entry of new scripts.

ACKNOWLEDGEMENTS

I am grateful to Andy Cockburn for arousing my interest in fisheye transformations, and to the many contributors to comp.lang.tcl whose ideas I have borrowed.

REFERENCES

Hall, 1985. hierarchical information. In Proc. ACM SIGCHI '91 Conf. on Human Factors in Computing smoothly integrated. In Proc. ACM SIGCHI '91 Conf. on Human Factors in Computing
240 points total. Open book, open notes. Maximum 4 hours in a single block, self-timed on the honor system. One point per minute of test time. 1) a) [10 pts] Put the following program in SSA form (you may draw a control flow graph to illustrate your solution): ```plaintext x := 0; do { x := x + 1; z := x; y := 0; if (...) { y := 1; } w := y + z; } while (...); print(x, y, z, w); ``` b) [5 pts] Why would you want a program in SSA form? Simplify analysis. Since each variable is only assigned once, each variable name only refers to a single value (which cannot be killed/redefined). c) [5 pts] Why wouldn’t you want to keep programs in SSA form? Have to maintain the SSA invariant across transformations. SSA is not executable; have to translate \( \phi \) nodes before code gen. 2) a) [10 pts] What is the data dependence graph, using dependence distances, for the following loop nest: \[ \begin{align*} &\text{for } i = 1 \text{ to } N \\ &\quad \text{for } j = 1 \text{ to } N \\ &\quad \quad \text{for } k = 1 \text{ to } N \\ &\quad \quad \quad S_1: a[i,j] := a[i,j] + b[j,k] \ast c[i,k]; \\ &\quad \quad \quad S_2: c[i+1,k-1] := c[i,k] \ast 2; \\ &\quad \quad \quad S_3: b[j,k] := b[j+3,k-2] \ast 5; \end{align*} \] Tricky question due to 2D arrays acting like scalars with respect to one of the 3 loops. No one (including Dave) got this question completely right. 
\[ \begin{align*} S_1 \delta_{0,0,0} & S_1 \\ S_1 \delta_{0,0,1} & S_1 \\ S_1 \delta_{0,0,1} & S_1 \\ S_2 \delta_{1,0,-1} & S_1 \\ S_2 \delta_{1,0,-1} & S_2 \\ S_2 \delta_{0,1+1} & S_2 \\ S_1 \delta_{0,0,1} & S_3 \\ S_3 \delta_{0,3,-2} & S_3 \\ S_3 \delta_{1+0,0} & S_1 \\ S_3 \delta_{1+3,2} & S_3 \\ S_3 \delta_{0,1+0,0} & S_3 \end{align*} \] b) [10 pts] For the following skeleton of a loop nest: \[ \begin{align*} &\text{for } i = 1 \text{ to } N \\ &\quad \text{for } j = 1 \text{ to } N \\ &\quad \quad \text{for } k = 1 \text{ to } N \\ &\quad \quad \quad S_1: \ldots \\ &\quad \quad \quad S_2: \ldots \end{align*} \] assume the data dependence graph, using dependence directions, is the following: \[ \begin{align*} S_1 &\overset{\delta_{<,=,>}}{\Rightarrow} S_1 \\ S_2 &\overset{\delta_{=,=,<}}{\Rightarrow} S_2 \end{align*} \] how would you transform this loop, using loop interchange(s), to best parallelize this loop? Explain why your transformation is legal and works well, and why other possible transformations (including doing nothing) are worse or illegal.

Only the j loop can be executed in parallel (i and k have loop-carried dependencies), so we want to move the j loop to the most advantageous position. We’ll assume that N > # of available processors. There are 6 possibilities (from outer to inner):

k, i, j; k, j, i; j, k, i: These three are illegal due to lexicographically negative deps.

j, i, k: Best legal permutation for multi-processor. Deps are all lex. non-negative and we get N large chunks of parallel work.

i, j, k: (unchanged). Could do this and run the j loop in parallel but get a smaller unit of parallelism, more synchronization barriers.

i, k, j: Best legal permutation for a vector machine (inner loop is parallelized), but lousy for multi-processor (many small chunks of parallel work).

3) Consider the following loop fragment:

```plaintext
for i = 1 to N
  S_1: temp := ...;
  ...
  S_5: ... := ...
temp ...;
end   /* temp is dead here */
```

a) [5 pts] What data dependences exist between statements $S_1$ and $S_5$, based solely on the references to temp?

$$S_1 \delta_= S_5 \quad \text{(loop-independent flow dependence)}$$
$$S_5 \bar{\delta}_< S_1 \quad \text{(loop-carried anti-dependence)}$$
$$S_1 \delta^o_< S_1 \quad \text{(loop-carried output dependence; not required for full credit)}$$

b) [5 pts] Why can’t this loop be fully parallelized directly?

The loop-carried dependence.

c) [10 pts] What is a transformation to this program that will enable full parallelization? What is the data dependence graph for your transformed loop?

Padua and Wolfe call it “node splitting.” We introduce a temporary array to remove the anti-dependence:

```plaintext
S_1: temp[i] := ...;
...
S_5: ... := ... temp[i] ...;
```

The only remaining dependence is $S_1 \delta_= S_5$.

4) a) [5 pts] Why is it impossible for two live ranges for the same source variable to interfere?

If the live ranges interfere, then they must be simultaneously live, which implies that the two live ranges are really part of the same (larger) live range.

b) [10 pts] A common choice for the basic unit of allocation in a register allocator is the live range. An alternative is to convert the program to SSA form and then use variables as the units of allocation. How do these choices differ? Give an example illustrating the difference.

SSA splits live ranges at $\phi$ merges with defs flowing in on multiple edges. In the example below, live ranges give us 1 unit of allocation and SSA gives us three units.

\[
\begin{array}{c|c}
x := \ldots \quad x := \ldots & x_1 := \ldots \quad x_2 := \ldots \\
\ldots x \ldots & x_3 := \phi(x_1, x_2); \ \ldots x_3 \ldots
\end{array}
\]

c) [10 pts] Chaitin’s original graph-coloring register allocator combined live ranges that were disjoint and linked by a simple assignment statement (called subsumption or coalescing), essentially running a kind of copy propagation to merge disjoint but adjacent live ranges. How can this merging produce better results than not merging? How can it produce worse results? 
It can produce better results by eliminating register moves (by forcing the live ranges to be assigned to the same register). It can produce worse results by lengthening live ranges, leading to a more constrained interference graph (fewer nodes with higher degree), which leaves less flexibility, making it harder to avoid spills. 5) a) [10 pts] What is a good reason why you would want to perform register allocation before instruction scheduling? To enable the loads/stores introduced as spill code to be scheduled b) [10 pts] What is a good reason why you’d want to do the opposite, performing instruction scheduling before register allocation? Register allocation introduces false dependencies (by reusing registers) that may inhibit scheduling. 6) a) [10 pts] How can mark/compact or copying garbage collection make allocation faster? All free space is contiguous, so a simple pointer bump suffices to allocate an object b) [10 pts] How can mark/compact or copying garbage collection make subsequent pointer dereferences in the application program go faster? By improving locality (live objects get closer together) 7) Implementing first-class functions in a language like ML can be expensive, in general requiring heap-allocated closure objects and heap-allocated environment objects. Say you are a language designer who wants first-class functions but doesn’t want to incur the cost of fully-general functions. [In this question, treat “ML” as referring to any statically-typed language with first-class function values.] a) [10 pts] What is a minimal restriction on ML, statically enforceable, that would avoid having to create closures, instead enabling function values to be represented simply as a code address as in C? What capabilities would be sacrificed with this restriction? The bodies of nested functions cannot contain references to non-global variables that are defined in lexically enclosing scopes. 
However, the bodies may still refer to global variables, and the function values are first class (can be stored in any variable/heap cell and returned from functions). Can easily enforce this during the name resolution phase of compilation. b) [10 pts] What is a minimal restriction on ML, statically enforceable, that would enable all environments to be stack-allocated? What capabilities would be sacrificed with this restriction? Must enforce a LIFO-usage of function values. Function values cannot be stored into heap-allocated memory or variables defined in lexically-enclosing scopes, and function values may not be returned as the result of a function. This can be enforced by the typechecker. There are no restrictions on variable references in the function’s body, and the function may still be passed as an actual parameter. 8) a) [10 pts] Say you had branch frequencies derived from profile data, giving the percentage of executions at which a branch went to each of its successors. Give a couple of examples of how you would exploit this information in a compiler. In general, any part of the compiler that needs frequency guesses could benefit. For example, the scheduler could ensure that the common successor of a branch instruction was fall-through, thus improving i-cache behavior. It enables trace scheduling (scheduling across basic block boundaries). The register allocator could use this information for usage estimation. b) [10 pts] Say you had counts for each call arc in a program, derived from profile data. Give a couple of examples of how you would exploit this information in a compiler. To guide inlining, procedure specialization. Reduce compile time by only using high levels of optimization on frequently executed procedures. Improve Wall's link-time register allocation (better guess about which variables should be promoted to being in registers). 
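The fall-through-placement idea in 8(a) can be sketched as a greedy block-layout pass: starting from the entry block, repeatedly place the hottest not-yet-placed successor immediately after the current block. This is a simplified illustration under assumed data structures, not code from the exam:

```python
def layout_blocks(entry, succ_freq):
    """Greedy code layout from branch frequencies.

    succ_freq maps a block name to a list of (successor, frequency)
    pairs.  The hottest unplaced successor of each placed block becomes
    its fall-through, improving i-cache behavior on the common path.
    """
    order, placed = [], set()
    block = entry
    while block is not None and block not in placed:
        order.append(block)
        placed.add(block)
        # Pick the most frequent successor that has not been placed yet.
        candidates = [(f, s) for s, f in succ_freq.get(block, [])
                      if s not in placed]
        block = max(candidates)[1] if candidates else None
    # Append any blocks not reached along the chosen fall-through chain.
    for b in succ_freq:
        if b not in placed:
            order.append(b)
            placed.add(b)
    return order
```

For a diamond where A branches to B 90% of the time and to C 10%, the pass lays out A, B, D first, so the hot path runs straight through memory and only the cold block C is placed out of line.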
9) A basic analysis for object-oriented languages is class analysis, where a set of concrete classes is associated with each variable at each program point. a) [15 pts] Formalize an intraprocedural version of this analysis using a lattice-theoretic framework, where you may assume you have complete knowledge of the whole program’s class hierarchy. Define your domain (recall that a domain is defined by a set of elements and an ordering operator over elements) whose elements are associated with each program point, and specify (redundantly) the top & bottom elements of your domain and your lattice meet (glb) operator to use at merge points. Indicate the direction of analysis and the initial domain element at the start of analysis (taking into account that the procedure has formals $f_1, ..., f_N$). Specify a flow function for each of the following RTL instructions, derived from a Java-like language: \[ \begin{align*} x &:= \text{new } C; & \text{/* where } C \text{ is a concrete class */} \\ x &:= y; \\ x &:= y.\text{msg}(z_1, \ldots, z_k); \\ x &:= (C) y; & \text{/* a checked narrow, succeeding if } y \text{ is an instance of } C \text{ or some subclass of } C, \text{ otherwise terminating the program */} \end{align*} \]

First we’ll define the lattice for a single variable.

- set of domain elements: the powerset of the classes defined in the program
- ordering: $a \leq b \iff a \supseteq b$
- top: the empty set
- bottom: the universal set (all classes)
- meet: $\cup$

Then we define the domain for the analysis by applying the n-tuple operator to the domain I defined above ($n = \# \text{ of variables in procedure}$). The analysis direction is forward, and the initial information maps each incoming formal $f$ to $\bot$ if $f$ is unconstrained and to $\{C \mid C \text{ is a subclass of } \text{specializer}(f)\}$ if $f$ is constrained.

Flow functions: 
I’ll use the notation $\text{Info}_2 := \text{Info}_1[x := \{a,b\}]$ to denote that $\text{Info}_2$ is the same tuple as $\text{Info}_1$, except for the new value of its $x$ component. Similarly $\text{Info}(x)$ is the value in the $x$ component of the Info tuple. I’m also going to assume that we’re compiling a dynamically typed language (or we’re not going to take advantage of static type declarations if we aren’t).

\[
\begin{align*}
x := \text{new } C: &\quad \text{Out} := \text{In}[x := \{C\}] \\
x := y: &\quad \text{Out} := \text{In}[x := \text{In}(y)] \\
x := y.\text{msg}(z_1, \ldots, z_k): &\quad \text{Out} := \text{In}[x := \bot] \\
x := (C)\, y: &\quad \text{Out} := \text{In}[x := \{ c \in \text{In}(y) \mid c \text{ is a subclass of } C \}]
\end{align*}
\]

b) [15 pts] Extend this analysis to be interprocedural, flow-sensitive, context-insensitive, and optimistic. Explain your initial conditions for analysis. Describe briefly how you organize your worklist, and describe briefly how you process a procedure, using (a possibly modified version of) your intraprocedural analysis above as a subroutine.

Have a summary for each procedure that contains lattice elements for local vars, formals, and the proc's return value. Each instance variable gets an associated lattice element as well. Represent global variables as locals of a "top-scope" node. Initially all summaries have all variables bound to $\top$. Put "top-scope" (to handle initialized global variables) and main nodes on the worklist. Remove a node from the worklist and process it using the modified intra analysis until the queue is empty. The only major change is the flow function for msg sends/proc calls:

1) Do compile-time method lookup using the current $\text{In}(y)$; this gives us a set of potential callee procedures.

2) Bind $x$ to the union of the return values of the callee procedures. 
Add dependency links to force reanalysis (by adding dependents to the worklist) if a return value changes 3) Meet actuals at call-site with formals of each callee method, reanalyzing the method by putting it on the worklist (and updating its incoming formal info) if meet is lower in lattice than current value of formals. Also change instance variable load/store functions to access associated lattice element (setting up dependency links on readers). c) [5 pts] How might context-sensitive analysis improve the quality of this analysis? The direct benefit is improved analysis of the caller procedure, since cardinality of the set of classes returned from the callee procedure may decrease. Indirectly we may get less smearing of formal parameters in the callee (smaller sets of classes for incoming formals). We might also improve the analysis of references to lexically-enclosing variables because there are now multiple sets of classes (one per contour) associated with each variable. d) [5 pts] How might procedure specialization improve the resulting code quality even more? Get more precise information (smaller class sets) for the formal parameters of the procedure, thus enabling more of the messages sent to formal parameters to be statically bound. e) [10 pts] Some languages, such as Java, Smalltalk, and CLOS, allow new classes to be created or dynamically loaded at run-time. How would this complicate your analysis? What parts of your analysis could you keep, and what parts would you have to weaken or give up? The simplest approach would be to disable any parts of the analysis that make use of whole program information. The initial info binds formals to $\bot$. The flow function for checked narrow does nothing. We could keep interprocedural class analysis almost unchanged, after some modifications to our intermediate language. We extend the new operation to allow expressions in addition to class literals. 
The lattice is extended to track each instance of class Class separately, with a special Class$_\bot$ that represents an unknown instance of class Class. We model the result of a class creation/load as Class$_\bot$. The result of a new operation on a class expression whose possible values include Class$_\bot$ is $\bot$. If we have a message whose receiver is $\bot$, this means that a possibly unknown callee procedure might be invoked. A conservative approximation of this is to assume that the procedure returns $\bot$, stores $\bot$ into all global variables, and that all instance variables reachable from the message send's actual parameters and all global variables are also bound to $\bot$. Unclear how much benefit we'd actually get from doing this analysis, but if class loading is rare or very localized, there might be some. f) [5 pts] How might dynamic compilation as in the Self system be exploited to limit the impact of this language feature? Keep dependency information about which code depends on what assumptions about the class hierarchy and recompile the code when its assumptions are violated. g) [10 pts] Some languages, like Smalltalk and CLOS, allow the class of an object to be changed at run-time as a side-effect. For instance, the RTL statement \[ \text{change\_class}(x, \ C); \] would side-effect the object denoted by $x$ to be an instance of class $C$ instead of whatever it used to be. How would such a language feature complicate your analysis? In the intra case, in addition to forgetting all information about exposed variables (vars defined in lexically enclosing environments), we would also need to forget info about the actual parameters and their aliases. A conservative approach would be to forget everything at procedure calls. Could improve on this by doing IP analysis to determine if a particular call-site might actually do this thing. The interproc analysis probably just needs to be modified to allow backflow along dataflow arcs in this special case. 
Actual flow function for the change\_class node binds $x$ (and all must-aliases of $x$) to $\{C\}$ and adds $C$ to the class sets of all may-aliases of $x$. 10) [10 pts] Why is multiple inheritance difficult to implement efficiently? Cannot use efficient (no padding) object layout and still get the property that instance tables/virtual function tables are at a known offset.
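The intraprocedural class analysis of question 9(a) can also be sketched directly in code: lattice elements are sets of classes ($\top$ = the empty set, $\bot$ = the universal set, meet = union), and each flow function rebinds one variable's set. The function and variable names below are illustrative assumptions, not part of the exam:

```python
def meet(info1, info2):
    """Meet (glb) of two variable-to-class-set maps: pointwise union."""
    return {v: info1[v] | info2[v] for v in info1}

def flow_new(info, x, c):
    """x := new C  -->  x is exactly {C}."""
    return {**info, x: {c}}

def flow_copy(info, x, y):
    """x := y  -->  x gets y's class set."""
    return {**info, x: set(info[y])}

def flow_send(info, x, all_classes):
    """x := y.msg(...)  -->  intraprocedurally, x falls to bottom
    (the universal set of classes)."""
    return {**info, x: set(all_classes)}

def flow_cast(info, x, y, c, subclasses):
    """x := (C) y  -->  keep only the classes of y that the checked
    narrow can let through (C and its subclasses)."""
    return {**info, x: {k for k in info[y] if k in subclasses[c]}}
```

A hypothetical hierarchy with classes A, B, C (B a subclass of A) shows the intent: after `x := new B` then `x := (A) x`, the analysis still knows x is exactly {B}, while a message send would drop x to the full set.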
20 Years of the Python Software Foundation LORENA MESA, BOARD CHAIR 2021 was the 20th anniversary of the Python Software Foundation. It was also the year of virtual events, as we organized a virtual PyCon US and continued supporting virtual events through the Grants Program. 2021 had us hire several new team members to expand our operations, say farewell to two long-time staff members, and prepare for an in-person PyCon US in 2022. PyCon US is always the biggest happening for the Python Software Foundation, and for a large part of the Python community. Even though PyCon US 2021 had to be virtual for everyone’s safety, we worked hard to make it an enjoyable and unique experience, with high-quality talks, tutorials and keynotes, and virtual meeting spaces for attendees to learn about our sponsors, discover the job market, explore Python projects and meet fellow Python community members. Although the ongoing pandemic did not let us celebrate the PSF’s 20th anniversary at PyCon US quite the way we would have wanted, we did make the most of it. We recreated two classic Python T-shirts and offered them as gifts during our Spring Fundraiser, held during PyCon US. Our 20th anniversary also involved some staffing changes. Betsy Waliszewski, our Sponsorship Coordinator, retired from the PSF after many years of being our fundraising and administrative backbone. We strengthened our team with four new members: Olivia Sauls (Event Assistant and Administrative Support), Loren Crary (Director of Resource Development), Łukasz Langa (Developer-in-Residence), and Shamika Mohanan (Packaging Project Manager). Last, but not least, was the departure of Ewa Jodłowska, our Executive Director. Ewa made the PSF what it is today, and we owe her an enormous debt of gratitude for her many years of exceptional service to the PSF and the Python community. When she decided it was time for her to move on, she took great care to make sure the PSF staff was in a position to keep everything going. 
Thomas Wouters, Board Vice Chair and long-time community member, stepped up as Interim General Manager to make sure the staff have everything they need, while the Board of Directors continued the search for the next Executive Director. We have now hired Deb Nicholson as the next Executive Director. In the meantime, the Python Software Foundation continues its work. Even through another year of pandemic challenges, we are proud we met the 2021 goals set in our 2020 Annual Impact Report, although we will continue to work on diversifying revenue and improving sustainability. We continued supporting the Python community through grants and fiscal sponsorships, spending more than $117,000 to support the global Python community in 41 countries, primarily through virtual events, and we celebrated the efforts of 12 community members through Community and Distinguished Service Awards. We also started preparations for an in-person PyCon US 2022, taking great care to do so safely and responsibly. We look forward to expanding our operations in the years to come, and 2022 will be about establishing the groundwork for this. Shamika and the Packaging Work Group are already working on improvements to PyPI to support the community and allow us to diversify income. The new Executive Director will work with the Board of Directors and the rest of the PSF staff to establish more concrete goals for this year, and we will be keeping the community up to date on the progress. Looking back at 20 years of the PSF THOMAS WOUTERS, INTERIM GENERAL MANAGER The Python Software Foundation is twenty years old. As a contributor to Python at the time, I was a founding member, and I was a member of the Board for the first three years. 
Even though I kept up with the community and some of the work the PSF has done over the years, I didn’t really see the full extent of the PSF’s growth and impact until I came back to the Board a few years ago, and especially now that I’ve taken on the interim General Manager role. Twenty years ago, the PSF was created to hold the Python copyrights and trademarks. It was also created to safeguard the language as an open-source project, and with the idea of raising funds to spend on Python and the community. Despite having some experienced people on the Board (not me), most of us didn’t really have a good idea of the strategic direction of the Foundation. We didn’t know where to go, or how to get there. We did believe Python was special and would remain popular, but I do not think any of us expected it to grow to be one of the most popular, most widely-used languages in the world. Now, the PSF is in a place we could not possibly have hoped for. Python is incredibly popular, all over the world and in all areas of business, science and education. We have a thriving, diverse, global community. The PSF hosts critical infrastructure, foremost the Python website and PyPI, the Python Package Index. We have actual staff to run our day-to-day operations, to organize PyCon US every year, and to provide Fiscal Sponsorship to Python projects and Grants to communities and events all over the world. We now even have team members to support volunteers working on PyPI and on CPython. Just last year the PSF hired Łukasz Langa as the PSF’s first Developer-in-Residence, a full-time developer to work on CPython, directed by the Python Steering Council. We also hired Shamika Mohanan as the Python Packaging Project Manager, to help us expand PyPI’s functionality and improve sustainability. More on their work later in this report. 
I think it’s safe to say the Python Software Foundation could not have been so successful if it hadn’t been for Ewa Jodłowska, who was our Executive Director until last year, as well as our early staff members, Kurt Kaiser and Betsy Waliszewski. But while they were instrumental in building up the PSF to its current state, they are far from the only ones who helped us get here. The PSF is a collaborative effort by Board members past and present; everyone who worked on Python itself; everyone who worked on Python projects, documentation, events or community resources; and everyone who supported the community in some way. It is truly a community accomplishment. It is because of the community support and efforts that the PSF weathered the last two years so well, and that we’re in a good enough shape to attempt an in-person PyCon US for the first time in three years. The PSF has a solid paid staff to support the volunteer Board of Directors and all the volunteers who work with us, and we can now make long-term plans and investments. The future of Python is looking bright, and we look forward to working with the global Python community to build it.

Highlights of 2021

- 30th anniversary of the Python language; 20th anniversary of the Python Software Foundation
- 2,700 Pythonistas attended our second virtual PyCon US, making it a huge success!
- We hired these roles: Event Assistant, Director of Resource Development, Packaging Project Manager, and Developer in Residence
- PyPI into the future: we conducted a PyPI survey to learn about PyPI users’ needs and priorities to shape the future
- Our community gave back $373,000 in donations in 2021!
- 83 companies & organizations became sponsors
- Grant giving was restarted in 2021 to connect & hold virtual events; we gave 131 grants to 129 groups in 41 countries
- We supported 10 existing sponsorees & 3 new fiscal sponsorees
- We recognized 12 amazing, resilient, & generous Pythonistas with community and distinguished service awards 
New fiscal sponsorees: Jazz Band, pyppa, PyRVA

About the Python Software Foundation The Python Software Foundation is the organization behind Python. The PSF is an open membership organization, and we are made up of, governed, and led by the community. We exist to support the full Python ecosystem and community, for all uses and users of Python. If you contribute to the Python community in any way, whether as a member of the PSF, a PSF sponsor, a developer, an educator, a member of a local Python group, an organizer or attendee of a PyCon or other Python event, you are a part of the PSF and you make the work we do possible. Our mission is to promote, protect, and advance the Python programming language, and to support & facilitate the growth of a diverse & international community of Python programmers. We fulfill our mission through these core activities: THE PSF PROVIDES THE STRUCTURE AND STABILITY THAT THE PYTHON LANGUAGE, ITS CONTRIBUTORS AND USERS NEED TO THRIVE. - We distribute Python, free of charge and available to all - We establish PSF licenses, hold intellectual property rights to most Python releases, and protect Python-related trademarks - We help coordinate and put resources behind improvements to the Python codebase and new Python releases THE PSF EMPOWERS THE PEOPLE WHO MAKE UP THE PYTHON COMMUNITY, AND WE PROMOTE CONNECTION, DIVERSITY, AND GROWTH WITHIN THE PYTHON COMMUNITY. - We host PyCon US, the annual event for celebrating, learning about, and coming together around Python. - We provide grants to groups across the globe hosting events for and by Pythonistas supporting each other and welcoming newcomers. - We fiscally sponsor 12 Python-related community groups, conferences, and projects to allow them to continue their contributions to the community with organizational infrastructure behind them. THE PSF FACILITATES ADVANCES IN THE PYTHON LANGUAGE, PYTHON-RELATED TECHNOLOGY AND EDUCATION RESOURCES. 
- We host PyPI, the Python package repository, and our Packaging Project Manager is working on how to best serve PyPI users
- Our Developer in Residence amplifies volunteer contributors’ impact by processing CPython pull requests and tackling structural improvements
- We encourage Python-related research in the public interest, and disburse Science and Education Grants to educators, researchers, and learners.

Looking forward, the PSF aims to help the Python ecosystem grow and thrive by reaching and welcoming an ever larger and more diverse group of Pythonistas to enjoy using Python and being part of our community. 2021 GOALS Investing in Packaging In 2021, the PSF dedicated resources to advancing the PyPI package repository. Shamika Mohanan joined the PSF as Python Packaging Project Manager in August 2021, fulfilling a key PSF goal for 2021. In that role, Shamika has been focused on working closely with the packaging community to identify development opportunities that would improve the packaging ecosystem. As the first major initiative, Shamika worked to understand the community that uses PyPI. By soliciting feedback from the community in a series of three surveys, Shamika has identified prioritized features that will add value to PyPI. The surveys highlighted the important role that PyPI plays in any Python user’s workflow, and the consensus was generally positive. The most requested features were namespace support, organization accounts that can manage multiple users and projects, and private packages. Another recurring theme in the surveys was a demand for the ability to establish provenance of a package, and we hope that namespace support and organization account features will go a long way in serving this community need. The development of these new features will also create a path for PyPI to become a self-sustained service. The full survey report is available here. 
We are grateful to Bloomberg for generously providing the funding for this role and sponsoring this valuable work.

A Note from our Inaugural Developer in Residence—Łukasz Langa

One of the PSF’s key goals for 2021 was to hire a Developer in Residence, a brand new role with the goal of providing a developer with full-time focus on CPython and amplifying the impact of our volunteer contributors. Thanks to a generous Google grant, in August 2021, Łukasz Langa became the first PSF Developer in Residence! Read on to hear in his own words what he was able to accomplish in his first several months, and what he’s aiming to do in the year ahead.

I’ve been really rooting for the Python Software Foundation to be able to sponsor core Python development. After applying for the position and being chosen as the inaugural CPython Developer in Residence, I felt this was a huge opportunity to accelerate the velocity of the project. I blogged extensively on what my views on the position were and what role I thought I could most productively fulfill. Long story short, I decided to start by unblocking other existing core developers through code review and pull request merging. In 2021, I managed to review and close 886 pull requests, which led to the closing of 217 issues on the bug tracker. I reviewed another 94 pull requests and authored close to 50 of my own. I did not have a particular area of focus, instead reviewing pull requests as they came, to get a better feel of where the most action is, and to get to know the project better. It’s over a million lines of code after all. Even though I contributed to CPython for over 10 years before taking on this job, there were large parts of the codebase I hadn’t touched before. The most exciting PR I reviewed was probably the addition of `except*` (as described by PEP 654). Through the year my work shifted somewhat. I did some research on what the open pull requests on GitHub constitute and wrote a blog post on that in particular.
I met Sam Gross, who is working on removing the GIL from CPython, organized a meeting around it, and am now mentoring Sam to become a CPython core developer. I also co-organized the virtual CPython core sprint week. While being the CPython Developer in Residence I continued releasing CPython, putting out three releases of Python 3.9 and one security release of Python 3.8 in the second half of 2021. I helped Pablo Galindo Salgado with various minor release-related busywork for his Python 3.10 releases as well. I’m always open to feedback; based on it, by the end of the year I switched gears from reviewing random PRs to more far-reaching projects. In particular, I’m currently involved in the migration of bugs.python.org to GitHub issues. I’m very excited for what 2022 will bring! Don’t be a stranger, if you have any comments on how you think I could improve my work and positive impact, let me know! I am tremendously grateful for this opportunity.

2021 GOALS

Infrastructure

Our infrastructure continued to support the Python community by hosting resources and downloads on python.org, documentation on docs.python.org, packages on pypi.org, and more. In addition to these public-facing resources, the PSF infrastructure continued to support the development workflows of CPython, the hosting of mailing lists on mail.python.org, and the delivery of us.pycon.org to support the annual PyCon US. We also support the community by maintaining domain registration, DNS, mail, and more for our fiscal sponsorees and projects like PyPy and Jython. Last year, python.org served over 110 billion downloads for Python releases, and PyPI saw 126,545,477,066 downloads for the >369K projects hosted there. That’s just over 10 billion per month. Yes, billions with a B! To serve that many downloads requires 324.1 Petabytes of data transfer, or 82.2 Gbps of bandwidth, 24x7x365.

Thank You to Fastly

The astounding traffic demands on the PSF infrastructure have continued to see year-over-year growth.
Thanks to Fastly, the teams that maintain the services we support, particularly PyPI, have nevertheless been able to focus on improvements and features, rather than scaling to that increased load. We are grateful to Fastly for making the online services that the PSF provides possible, so that we can invest time and resources into advancing our infrastructure to better meet community wants and needs.

PyCon US 2021

The second-ever virtual PyCon US took place from May 12 to May 18, 2021. We were excited to connect Pythonistas around the world, and we welcomed over 2,700 participants!

PyCon US 2021 By the Numbers:

- **100+** virtual sessions, tutorials, summits, talks, keynotes, lightning talks & more! All of which had over 23,000 unique views!
- **24,000** virtual visits to the virtual booths
- **49** virtual booths including sponsors, startup row companies, open source projects and poster sessions

PyCon Tanzania, 2021 - K15 Photos

Grants

After being paused for most of 2020, we restarted the grants program with a focus on virtual events, as well as some in-person events. The PSF distributed $117K in grants in 2021, to 129 groups in 41 countries around the world!

“I want my school to be the center of Python training for the youth in Uganda” - MR. CHARLESE MUBIRU, the director of Kikaaya College School Bulenga

“I love the way you simplify learning programming with Python.” - AUGUSTINE BISASO, a final year student at Makerere University

Fiscal Sponsorship

The PSF maintains a fiscal sponsorship program to support Python events and projects by handling their back office work and facilitating donations, so the projects can concentrate on furthering their goals.
The program continues to grow, and we currently fiscally sponsor 13 amazing groups including Pallets (Flask), PyLadies, PyCascades, Boston Python, PhillyPUG, PuPPy, PyBeach, PyHawaii, PyMNTos, and San Diego Python; and 3 new fiscal sponsorees who joined the PSF in 2021: Jazzband, PyPA, and PyRVA. We are grateful to our generous sponsors who support our work and the Python ecosystem!

Board

- Débora Azevedo
- Tania Allard
- Kushal Das
- Dustin Ingram
- Jannis Leidel
- Van Lindberg (General Counsel)
- Lorena Mesa (Chair)
- Marlene Mhangami (Vice Chair)
- Joannah Nanjekye (Vice Chair)
- Jeff Triplett (Treasurer)
- Thomas Wouters (Vice Chair Elect)
- Nina Zakharenko

Staff

- Jackie Augustine, Director of Events
- Joe Carey, Accounting Manager
- Loren Crary, Director of Resource Development
- Phyllis Dobbs, Controller
- Ee Durbin, Director of Infrastructure
- Laura Graves, Senior Accountant
- Deb Nicholson, Executive Director
- Olivia Sauls, Event Assistant and Administrative Support

Fundraising Campaigns

The PSF hosted two fundraisers in 2021! In the spring, we celebrated the 20th anniversary of the PSF and the 30th anniversary of Python with a campaign that included commemorative T-shirts. We also had an end-of-the-year fundraiser in partnership with JetBrains, which raised $25K through generous community support. In 2021, the PSF raised over $373K in donations. We’re grateful to each of our donors for choosing to support the Python community. We thank all our donors who give back to Python!

PyCon US 2022

PyCon US 2022 will take place April 27th–May 3rd, nestled in the mountains of Salt Lake City, Utah. Please see our event page. PyCon US 2022 will allow our community to reconnect in a safe way to network, learn, share ideas, and create new relationships and partnerships—for the first time since 2019!

SPONSORSHIPS

Sponsorships make PyCon US possible and keep it affordable and accessible to the widest possible audience.
Your PyCon US sponsorship supports the Python Software Foundation’s broader mission, so through PyCon US and beyond you’ll also get Foundation benefits and the option to support CPython Development, as well as PyPI benefits with exposure via us.pycon.org, pypi.org, python.org, and associated social media accounts. Apply to sponsor today!

## PSF Financials

2020-2021 Consolidated Financial Statement ($ in thousands)

<table>
<thead>
<tr> <th>REVENUE</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td>Program Service Revenue</td> <td>$1,405</td> <td>$762</td> </tr>
<tr> <td>Contributions, Membership Dues, &amp; Grants</td> <td>$1,303</td> <td>$1,491</td> </tr>
<tr> <td>Other Revenue</td> <td>$5</td> <td>$58</td> </tr>
<tr> <td><strong>TOTAL REVENUE</strong></td> <td><strong>$2,713</strong></td> <td><strong>$2,311</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>EXPENSES</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td>Program Service Expenses</td> <td>$566</td> <td>$784</td> </tr>
<tr> <td>Staffing</td> <td>$768</td> <td>$656</td> </tr>
<tr> <td>Information Technology</td> <td>$267</td> <td>$200</td> </tr>
<tr> <td>Insurance</td> <td>$109</td> <td>$93</td> </tr>
<tr> <td>Legal</td> <td>$154</td> <td>$128</td> </tr>
<tr> <td>Other Expenses</td> <td>$97</td> <td>$65</td> </tr>
<tr> <td><strong>TOTAL EXPENSES</strong></td> <td><strong>$1,961</strong></td> <td><strong>$1,926</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>NET INCOME</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td><strong>$752</strong></td> <td><strong>$385</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>ASSETS</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td>Cash and Cash Equivalents</td> <td>$4,772</td> <td>$3,671</td> </tr>
<tr> <td>Accounts Receivable - Net</td> <td>$243</td> <td>$415</td> </tr>
<tr> <td>Other Current Assets</td> <td>$48</td> <td>$48</td> </tr>
<tr> <td><strong>TOTAL ASSETS</strong></td> <td><strong>$5,063</strong></td> <td><strong>$4,134</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>LIABILITIES</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td>Deferred Revenue</td> <td>$334</td> <td>$103</td> </tr>
<tr> <td>Accounts Payable &amp; Liabilities</td> <td>$194</td> <td>$249</td> </tr>
<tr> <td><strong>TOTAL LIABILITIES</strong></td> <td><strong>$528</strong></td> <td><strong>$352</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>NET ASSETS</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td>Net Assets without Donor Restrictions</td> <td>$3,514</td> <td>$3,107</td> </tr>
<tr> <td>Net Assets with Donor Restrictions</td> <td>$1,021</td> <td>$675</td> </tr>
<tr> <td><strong>TOTAL NET ASSETS</strong></td> <td><strong>$4,535</strong></td> <td><strong>$3,782</strong></td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>TOTAL LIABILITIES &amp; NET ASSETS</th> <th>2021</th> <th>2020</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td><strong>$5,063</strong></td> <td><strong>$4,134</strong></td> </tr>
</tbody>
</table>

Program Service Expenses

2021 Expenses by Category ($ in thousands)

Total Program Service Expenses: $566K

Grant Disbursement

The PSF resumed funding worldwide Python events and projects in the Fall of 2020. Although grant requests have been lower than usual due to pandemic fatigue, we look forward to continuing to support Python around the world with plans to increase our grants budget in future years.
PSF Grant Disbursement from 2015-2021

- Python Conference Grants (53.9%)
- PyLadies Workshops (11.1%)
- Outreach & Education (6.2%)
- Other Grants (2.7%)
- Meetup Subscription Grant (3.6%)
- Kids Coding Camp (1.2%)
- Equipment & Hardware (0.5%)
- Django Girls Workshops (12.1%)
- Development (6.4%)
- Ambassador Program (1.8%)
- Workshops (3.0%)
- Training (2.9%)
- Python Sprints (4.6%)

Grants by Continents

In 2021, the PSF distributed $117,157.68 in grants to 129 groups in 41 countries around the world.

Looking Forward With Our New Executive Director

The PSF Board of Directors undertook a comprehensive search for a new Executive Director to oversee the PSF’s continued growth and stability. We welcomed Deb Nicholson as our new Executive Director in April 2022. Deb is a longtime supporter of the Python community through her work as a founding Board member and tutor at OpenHatch and as a local organizer for the Boston Python User Group. She brings to the PSF 15 years of open source experience and over 25 years of non-profit experience.

“In 2022, we’ll be continuing the work to strengthen and diversify the global Python community. In particular, I’m looking forward to working with the Board and the wider Python community to identify new opportunities to support education and outreach. With the PSF staff, I’ll be focused on increasing our technical and organizational capacities, so that we can even better serve Python users.” - DEB NICHOLSON, EXECUTIVE DIRECTOR

How Can I Get Involved?

All of the PSF’s initiatives help maintain and support the Python community and Python tools that you and the Python community use daily. This work can only be done with the generous financial support that organizations like yours provide.
Whether you want to give back to this vibrant community or perhaps introduce your business or brand to like-minded industry professionals, the Python Software Foundation can help you accomplish those goals. Your contributions matter and they make an impact! The PSF is recognized by the IRS as a 501(c)(3) non-profit charitable organization. We hope you’ll join us as a sponsor & help make a difference in the Python ecosystem. We’d love to talk with you. Please get in touch by contacting: Sponsors@python.org
Designing Neural Networks for Real-Time Systems

Pearce, Hammond; Yang, Xin; Roop, Partha S.; Katzef, Marc; Strøm, Tórur Biskopstø

Published in: IEEE Embedded Systems Letters

DOI: 10.1109/LES.2020.3009910

Publication date: 2020

Document Version: Peer reviewed version

Designing Neural Networks for Real-Time Systems

Hammond Pearce, Xin Yang, Partha S. Roop, Marc Katzef, Tórur Biskopstø Strøm

Abstract—Artificial Neural Networks (ANNs) are increasingly being used within safety-critical Cyber-Physical Systems (CPSs). It is important to validate both the timing and functional correctness of these systems. However, most approaches in the literature consider guaranteeing only the functionality of ANN based controllers. This issue stems largely from the implementation strategies used within common neural network frameworks — their underlying source code is often simply unsuitable for formal techniques such as static timing analysis. As a result, developers of safety-critical CPS must rely on informal techniques such as measurement based approaches to prove correctness, techniques that provide weak guarantees at best.
In this work we address this challenge. We propose a design pipeline whereby neural networks trained using the popular deep learning framework Keras are compiled to functionally equivalent C code. This C code is restricted to simple constructs that may be analysed by existing static timing analysis tools. As a result, if compiled to a suitable time-predictable platform, all execution bounds may be statically derived. To demonstrate the benefits of our approach we execute an ANN trained to drive an autonomous vehicle around a race track. We compile the ANN to the Patmos time-predictable controller, and show that we can derive worst case execution timings.

I. INTRODUCTION

In safety-critical Cyber-Physical Systems (CPSs), timing correctness can be as important as functional correctness. Consider the case published in [1] where an autonomous F1/10 vehicle is using LiDAR to drive safely through a racetrack, as depicted in Figure 1. A corner approaches. The vehicle must react appropriately, ensuring that two properties are met: firstly, that the corner is detected and the steering is changed to avoid crashing; and secondly, that the decision to steer the car around the corner is completed in a timely fashion. In other words, if the car controller takes too long to process the change in the road, the output of that controller is incorrect and will potentially lead to a crash. Unfortunately, while traditional control code for CPS such as autonomous vehicles is designed to be amenable to both functional verification (e.g. ensure that the car will turn) and timing verification (e.g. ensure that this decision will take place quickly enough), modern control systems involving Artificial Intelligence (AI) often are not. Instead, popular learning frameworks such as Keras [2] and Caffe [3] rely on training/validation, simulation, and deployment testing, approaches which only provide weak guarantees of correctness [4].
In addition, approaches such as those provided through TensorFlow Lite [5] are popularising the usage of these kinds of neural networks in embedded devices without further examination of the safety implications involved. To address this there is a push in the literature to bring formal methodologies into the AI domain [6], [7], especially within the usage of Artificial Neural Networks (ANNs) such as Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs). For example, the case study in [1] was verified using reachability analysis, whereby the sigmoid activation functions inside the neurons of the MLP controller were converted into a non-linear hybrid system before being analysed using the Flow* tool [8], [9]. Despite this, the issue of timing verification of ANN-based controllers has received scant attention [4]. In this work we seek to address this challenge by providing a new design pipeline for converting neural networks trained in Keras to time-predictable C code. This generated code, once deployed upon a suitable time-predictable architecture, may then be statically analysed to formally derive timing bounds.

The rest of this paper is organised as follows. Section II discusses the state of the art in this area. Section III discusses our solution to this challenge, where Keras neural networks are compiled to functionally equivalent C code. Finally, Section IV presents our evaluation and Section V concludes.

II. BACKGROUND

While it is sometimes common to consider real-time execution as simply fast execution, for safety-critical applications such as those found within CPSs it is not enough to simply execute your software quickly [10], as rarely-executed branches not covered by testing may hide the presence of delays or other timing anomalies [11]. As such, for a system to be truly real-time, it must be proven to meet its timing requirements via techniques such as Static Timing Analysis (STA).
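The danger of rarely-executed branches can be illustrated with a toy sketch. The function below is hypothetical (not from the paper): its worst-case path fires only for one input in a million, so measurement over random test inputs will almost always report a "worst case" far below the true one, whereas STA must account for the expensive branch.

```python
# Toy illustration of why measurement-based timing gives weak guarantees:
# the expensive branch below is taken only for rare inputs, so random
# testing will usually never observe the true worst-case path.
# The function and its costs are illustrative only.

def controller_step(x: int) -> int:
    if x % 1_000_000 == 0:      # rare branch, easily missed by testing
        acc = 0
        for i in range(100_000):  # simulated expensive fallback path
            acc += i
        return acc
    return x * 2                 # common fast path
```

A static analyser, by contrast, walks every path of the control flow graph and must bound the slow branch regardless of how rarely it executes.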
Unfortunately, while there has been plenty of work in the literature focussed on increasing the execution speed of neural networks (for instance the work presented in [12], where Redmon et al. present a fast execution framework for CNNs), the issue of formal timing predictability for ANNs has received less attention. In many cases, this is related to the underlying implementations of the neural network libraries used. For STA of software to take place, two key requirements must be met [10]. Firstly, it must be possible to convert program binaries into traversable control flow graphs. This can be very difficult given the presence of program interpreters (for instance if the neural network is implemented in a language such as Python); the presence of a complex runtime and/or Operating System (OS); and/or the presence of dynamic control flows (for instance code generalised to run many neural networks rather than being specialised to one). Secondly, the code must be targeted at a time-predictable processor so that execution times can be derived for the paths through the control flow graph. As such, software implementations of neural networks can be very difficult to time. For instance, TensorFlow Lite [5], which is one of the most popular tools used by industry to embed neural networks in cyber-physical devices, features a runtime and requires Linux to operate — and as such it is not amenable to formal STA. To address this challenge, in [4] Roop et al. proposed synchronous execution of neural networks, where they may be designed with synchronous programming languages such as Esterel and then executed on a time-predictable processor such as Patmos [13]. However, their approach was limited, and not all networks could be timed. Other approaches, observing that neural networks can be compiled to hardware rather than software (such as in Haddoc2 [14]), focus on timing hardware implementations of neural networks [15].
Other hardware-based approaches also exist, although these mainly focus on performance [16] rather than predictability. In addition, deploying neural networks to reconfigurable hardware is much more complex than that provided through software tools such as Keras.

III. Keras2C: OUR DESIGN APPROACH

There is no fundamental reason that implementations of neural networks must be so difficult to time. At their heart, neural networks such as MLPs and CNNs are just different kinds of feed-forward mathematical functions. In this section we detail how a neural network, trained in Keras [2] to drive an autonomous vehicle around the track in Figure 1, may be formally implemented in a time-predictable manner.

A. Design Process

The Keras2C design process is depicted in Figure 2. Users will (1) design and train their networks using the Keras framework, then (2) use the normal validation process to check functionality. (3) The saved Keras network files (an architecture.json which describes the network structure, and a weights.h5 storing the network’s trained weights) are (4) given to the Keras2C tool, which generates a folder of .c and .h files. These may then be (5) compiled with the open-source Patmos-clang compiler, and the output of the compilation further analysed with the existing STA tool platin [17] to derive Worst-Case Execution Time (WCET). If satisfactory, these files may then be considered the final implementation (6) of the neural network.

B. Compiling Multi-Layer Perceptrons (MLPs)

Let us examine this more formally for an MLP type neural network, as this is suitable for driving the autonomous vehicle around the track in Figure 1. An MLP can be encoded as per Definition III.1.

Definition III.1.
An MLP can be formalised as a tuple \( M = (I, O, L, N, \alpha, B, C, W, f) \), where:

- \( I \) is a finite collection of \( n \) input variables with its domain being \( I = \mathbb{R}^n \)
- \( O \) is a finite collection of \( m \) output variables with its domain being \( O = \mathbb{R}^m \)
- \( L \) is a set of layers, where \( l_0 \) represents the input layer and \( l_{|L|-1} \) represents the output layer.
- \( N \) is a set of neurons.
- \( \alpha : L \rightarrow 2^N \) is a neuron mapping function \( \alpha(l_n) \) where no neuron can appear in more than one layer. Neurons in the input layer are mapped to inputs \( I \) (i.e. \( |\alpha(l_0)| = |I| \)), and neurons in the output layer are mapped to outputs \( O \) (i.e. \( |\alpha(l_{|L|-1})| = |O| \)).
- \( B : N \rightarrow \mathbb{R} \) is a function \( B(n_i) \) which returns a real-valued bias for a given neuron.
- \( C \subseteq N \times N \) is an ordered set of inter-layer connections \( c_{i,j} \) between neurons \( n_i \) and \( n_j \) such that \( n_i \in \alpha(l_k) \) and \( n_j \in \alpha(l_{k+1}) \) (i.e. connections can only go from a neuron in one layer to a neuron in the next layer).
- \( W : C \rightarrow \mathbb{R} \) is a function \( W(c_{i,j}) \) which returns a real-valued weighting for a given connection \( c_{i,j} \in C \).
- \( f : \mathbb{R} \rightarrow \mathbb{R} \) is the neuron activation function (e.g. ReLU).

An MLP according to Definition III.1 can be straightforwardly executed using Algorithm 1. Here, lines 1-3 set the neurons of the input layer to their corresponding input values from \( I \). Lines 4-12 then compute the value of all other neurons (in the hidden and output layers), by first setting each neuron to its initial value, i.e. its bias (line 6), and then summing its source neuron values multiplied by their connection weights (lines 7-9) before passing the sum into its activation function (line 10).
Finally, the output collection \( O \) can be produced (lines 13-15). It is straightforward to see that if Algorithm 1 is directly realised as static C code with bounded for loops then this execution will be time-predictable. However, while this approach is fundamentally the same as any used by the popular machine learning frameworks such as Keras, they focus on dynamic approaches for improving average-case execution times.

Algorithm 1 Execution of an MLP $M$

1: for all $n_i \in \alpha(l_0)$ do
2:   $n_i \leftarrow$ from_input($I, n_i$)
3: end for
4: for each $l \in L \setminus l_0$ do
5:   for each $n_i \in \alpha(l)$ do
6:     $n_i \leftarrow B(n_i)$
7:     for each $c_{h,i} \in C$ from $n_h$ to $n_i$ do
8:       $n_i \leftarrow n_i + n_h \cdot W(c_{h,i})$
9:     end for
10:    $n_i \leftarrow f(n_i)$
11:   end for
12: end for
13: for all $n_i \in \alpha(l_k)$ where $k = |L| - 1$ do
14:   set_output($O, n_i$)
15: end for

C. Other network types

Definition III.1 and Algorithm 1 are designed for static execution of MLPs. However, any stateless feed-forward neural network (such as a CNN) may be executed in a similar manner, as they too may simply be considered sequences of mathematical operations.

IV. EVALUATION

In this section, we evaluate the efficacy of our approach for a range of available benchmarks against two other methodologies. Firstly, we examine our approach in a formal setting by benchmarking it against the existing results in [4] for time-predictable execution of neural networks on the Patmos architecture. Then, we examine the scalability and raw performance of our approach more generally by benchmarking it against the popular TensorFlow Lite framework.

A. Comparison with Synchronous ANNs (SANNs)

[4] introduces a synchronous framework for execution of ANNs that they term SANN. In this section we will benchmark our approach against their proposal.
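To see how little machinery Algorithm 1 actually needs, it can be sketched in a few lines. The sketch below is in Python for readability (the Keras2C output is plain C, but the control flow is identical); the layer layout and the XOR weights in the test are illustrative examples of ours, not from the paper. Every loop bound is fixed by the network shape, which is exactly why a C translation with static arrays is amenable to STA.

```python
# A minimal sketch of Algorithm 1: static, bounded-loop MLP inference.
# `layers` is a list of (weights, biases) pairs, one per non-input layer,
# where weights[j][h] plays the role of W(c_{h,j}) and biases[j] of B(n_j).

def relu(x):
    """Example activation function f (the paper leaves f generic)."""
    return x if x > 0.0 else 0.0

def mlp_forward(inputs, layers, activation=relu):
    values = list(inputs)                    # lines 1-3: the input layer
    for weights, biases in layers:           # lines 4-12: hidden/output layers
        nxt = []
        for j in range(len(biases)):
            acc = biases[j]                  # line 6: start from the bias
            for h in range(len(values)):     # lines 7-9: weighted sum
                acc += values[h] * weights[j][h]
            nxt.append(activation(acc))      # line 10: activation
        values = nxt
    return values                            # lines 13-15: the outputs O
```

For example, a hand-weighted two-neuron ReLU network computes XOR with this routine; since all loop bounds are constants of the network, the equivalent C has a single, analysable worst-case path.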
1) Methodology: Using the SANN approach, neural networks are trained offline and then compiled in a framework involving the Esterel programming language to C code. However, while they could execute both CNNs and MLPs, only MLPs were compiled to predictable C code, which could then be executed over the time-predictable Patmos architecture, detailed next.

2) The Patmos time-predictable processor: The Patmos [13] processor is one part of the larger T-CREST [18] project, which features a series of time-predictable hardware and associated tools, including interconnect, memories, and a software tool-chain including the LLVM-based compiler Patmos-clang and the WCET analysis tool Platin [19]. Patmos is a RISC style processor optimised for WCET analysis. An example of this optimisation is the use of special cache types to aid WCET analysis: a stack cache, reserved for stack allocated data, and a method cache for full function caching. If a function does not fit into a cache block, it is broken into smaller functions by the compiler.

3) Benchmarking: For this approach we compare four benchmarks from SANN [4]. Of the four, only XOR and Adder had formal analysis presented in the original paper; however, our tool can generate predictable code for any MLP and CNN from Keras. In addition, we also present the results for F1/10 running on Patmos, presented both with and without the fix16 library used in the SANN benchmarks. All benchmarks were analysed over a single-core 50MHz Patmos processor.
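The fix16 library referenced in the benchmarks represents values in Q16.16 fixed point, so every arithmetic operation maps to plain integer instructions whose latencies an STA tool can bound, unlike floating point. The sketch below illustrates the representation only; the function names are ours, not the library's API, and the real library uses saturating int32_t arithmetic in C rather than Python's unbounded integers.

```python
# Illustrative Q16.16 fixed-point helpers (hypothetical names, not the
# fix16 library's API). A value is an integer with 16 fractional bits,
# so multiply/add become integer shift-and-multiply operations with
# statically boundable latency.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # the value 1.0 in Q16.16

def to_fix16(x):
    # Scale into the fixed-point domain (truncating; real fix16 rounds).
    return int(x * ONE)

def from_fix16(v):
    return v / ONE

def fix16_add(a, b):
    return a + b  # in C: int32_t addition (optionally saturating)

def fix16_mul(a, b):
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC_BITS
```

This is why the float and fix16 rows of Table I differ so sharply: integer pipelines admit tight static bounds, while floating point units force the analyser into heavy pessimism.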
<table>
<thead>
<tr> <th>Benchmark</th> <th>Number of connections</th> <th>SANN WCET (ms) from [4]</th> <th>Keras2C WCET (ms)</th> </tr>
</thead>
<tbody>
<tr> <td>XOR</td> <td>9</td> <td>0.82</td> <td>0.1</td> </tr>
<tr> <td>Adder</td> <td>15</td> <td>0.49</td> <td>0.1</td> </tr>
<tr> <td>Rabbit</td> <td>576</td> <td>Unavailable</td> <td>2.93</td> </tr>
<tr> <td>Wolf</td> <td>840</td> <td>Unavailable</td> <td>4.03</td> </tr>
<tr> <td>F1/10 (fix16)</td> <td>24,320</td> <td>N/A</td> <td>8.83</td> </tr>
<tr> <td>F1/10 (float)</td> <td>24,320</td> <td>N/A</td> <td>3,235.39</td> </tr>
</tbody>
</table>

Table I: Benchmarking Keras2C against SANN. Presented times are “per invocation”.

As can be seen in Table I, the Keras2C code runs faster than the SANN approach. We believe this is due to overheads introduced by the Esterel compilation process. Our approach is also more amenable to STA, as the CNN library utilised in the original approach is not suitable for analysis.

4) Results for the F1/10 case study: Consider the track depicted in Figure 1. It is 20m by 10m with a width of 1.5m. The racing car starts from the mid-point of the left side of the track, drives forwards and makes two left turns, and finally stops at the mid-point of the right side of the track. The vehicle is initially stationary, and it accelerates until its velocity reaches 2.4 $\text{ms}^{-1}$. It typically takes the car approximately 20s to complete the track.

In the original case study, the car’s controller is expected to emit a control output every 100ms (i.e. every 0.24m of travel distance at top speed). If this deadline is not met then an error is introduced; if the deadline continues to be missed then further errors are introduced. Eventually these errors may build up sufficiently that the car crashes. Utilising our approach, the F1/10 case study is guaranteed to execute within 82.83 $\text{ms}$ when utilising the fix16 library for fixed-point math.
However, if this library is not used, timing analysis becomes inaccurate due to the difficulties involved in predicting the execution time of floating-point computations [20], and meeting the 100ms deadline cannot be guaranteed.

5) Analysing the LeNet-5 benchmark: To demonstrate our ability to analyse CNNs, we also analyse the 60,000-parameter benchmark LeNet-5 [21]. Given the same Patmos core at 50MHz, the toolchain derives a WCET of 751.84 $\text{ms}$ (when using fix16) and 30.381 $\text{ms}$ (when using floats).

B. Performance Comparison with TensorFlow Lite

TensorFlow Lite is a popular tool for executing neural networks on platforms such as smartphones and embedded computers (e.g. the ARM-based Raspberry Pi). It uses a dynamic runtime with a background garbage collector to execute its neural networks, and requires a full underlying OS. As such it is not amenable to STA, unlike Keras2C.

<table>
<thead>
<tr> <th>Benchmark</th> <th>Number of connections</th> <th>TensorFlow Lite</th> <th>Keras2C</th> </tr>
<tr> <th></th> <th>Avg. (ms)</th> <th>WCET (ms)</th> <th>Avg. (ms)</th> </tr>
</thead>
<tbody>
<tr> <td>XOR</td> <td>9</td> <td>0.027</td> <td>15.10</td> </tr>
<tr> <td>Adder</td> <td>15</td> <td>0.024</td> <td>10.24</td> </tr>
<tr> <td>Rabbit</td> <td>576</td> <td>0.046</td> <td>10.15</td> </tr>
<tr> <td>Wolf</td> <td>840</td> <td>0.048</td> <td>13.07</td> </tr>
<tr> <td>F1/10</td> <td>24,320</td> <td>0.096</td> <td>2.269</td> </tr>
</tbody>
</table>

Table II: Benchmarking Keras2C against TensorFlow Lite. Presented times are “per invocation”. Lower is better.

Table II presents our comparison with TensorFlow Lite on a Raspberry Pi Model 3B, where we measured the execution times of the network over a random input using the system clock. This is repeated for one million invocations. For fairness, we omit the time taken for the I/O of both frameworks as well as the first 100 invocations of TensorFlow Lite due to its significant “wind up” time.
As can be seen, while our approach is faster in the average case for the smaller networks, TensorFlow Lite outperforms our C code for the larger F1/10 network. This is likely due to its internal optimisation routines, which focus on average-case performance. However, these optimisations come at a cost, with unexpected delays in the worst case and overheads introduced for small networks. As such, in all benchmarks our framework (as it is optimised for timing predictability) had a lower measured WCET.

V. CONCLUSIONS

Neural networks are increasingly being adopted within CPS. However, while these systems may have safety-critical timing requirements, there are few approaches for verifying the timing properties of neural-network-based code. Through a series of benchmarks we have demonstrated our approach, Keras2C, which addresses this challenge through a simple-to-use toolchain for converting neural networks trained in Keras into C code amenable to static timing analysis. Keras2C is made freely available at https://airtransformer.com. Future work could examine how certain optimisations could be integrated to improve average-case performance without compromising timing predictability.

REFERENCES
CSE 341, Autumn 2015, Ruby Introduction Summary

Disclaimer: This lecture summary is not necessarily a complete substitute for attending class, reading the associated code, etc. It is designed to be a useful resource for students who attended class and are later reviewing the material.

Introducing Ruby

This lecture is an introduction to Ruby. The corresponding code demonstrates many different language features. Because the book Programming Ruby, 2nd Edition by Dave Thomas and various free online tutorials are more than sufficient, the lecture materials may not describe in full detail every language feature we use.

The course website provides installation and basic usage instructions for Ruby. Note in particular that officially we will be using version 2.2.3 of the language. You will need the tk graphics library for Assignment 7.

Ruby Features Most Interesting for a PL Course

Ruby is a large, modern programming language with various features that make it popular. Some of these features are useful for a course on programming-language features and semantics, whereas others are not useful for our purposes even though they may be very useful in day-to-day programming. Our focus will be on object-oriented programming, dynamic typing, blocks (which are almost closures), and mixins. We briefly describe these features and some other things that distinguish Ruby here:

- Ruby is a pure object-oriented language, which means all values in the language are objects. In Java, some values that are not objects are null, 13, true, and 4.0. In Ruby, every expression evaluates to an object.

- Ruby is class-based: Every object is an instance of a class. An object’s class determines what methods an object has. As in Java, you call a method “on” an object, e.g., obj.m(3,4) evaluates the variable obj to an object and calls its m method with arguments 3 and 4. Not all object-oriented languages are class-based; see, for example, Javascript.
- Ruby has mixins: A later lecture will describe mixins, which strike a reasonable compromise between C++’s multiple inheritance and Java’s interfaces. Like Java, every Ruby class has one superclass, but it can include any number of mixins, which, unlike interfaces, can define methods (not just require their existence).

- Ruby is dynamically typed: Just as Racket allowed calling any function with any argument, Ruby allows calling any method on any object with any arguments. If the receiver (the object on which we call the method) does not define the method, we get a dynamic error.

- Ruby has many dynamic features: In addition to dynamic typing, Ruby allows instance variables (Java’s fields) to be added and removed from objects and it allows methods to be added and removed from classes while a program executes.

- Ruby has convenient reflection: Various built-in methods make it easy to discover at run-time properties about objects. As examples, every object has a method class that returns the object’s class, and a method methods that returns an array of the object’s methods.

- Ruby has blocks and closures: Blocks are almost like closures and are used throughout Ruby libraries for convenient higher-order programming. Indeed, it is rare in Ruby to use an explicit loop since collection classes like Array define so many useful iterators. Ruby also has fully-powerful closures for when you need them.

- Ruby is a *scripting language*: There is no precise definition of what makes a language a scripting language. It means the language is engineered toward making it easy to write short programs, providing convenient access to manipulating files and strings (topics we won’t discuss), and having less concern for performance. Like many scripting languages, Ruby does not require that you declare variables before using them and there are often many ways to say the same thing.
- Ruby is *popular for web applications*: The Ruby on Rails framework is a popular choice for developing the server side of modern web-sites.

Recall that, taken together, Haskell, Racket, Ruby, and Java cover all four combinations of functional vs. object-oriented and statically vs. dynamically typed. Our focus will be on Ruby’s object-oriented nature, not on its benefits as a scripting language. We also won’t discuss at all its support for building web applications, which is a main reason it is currently so popular.

As an object-oriented language, Ruby shares much with Smalltalk, a language that has basically not changed since 1980. Ruby does have some nice additions, such as mixins. Ruby is also a large language with a “why not” attitude, especially with regard to syntax. Haskell and Racket (and Smalltalk) adhere rather strictly to certain traditional programming-language principles, such as defining a small language with powerful features that programmers can then use to build large libraries. Ruby often takes the opposite view. For example, there are many different ways to write an if-expression.

**Objects, Classes, Methods, Variables, Etc.**

The code associated with this lecture contains an example class definition for rational numbers, which is a useful complement to the general rules described here.

*Class and method definitions*

Since every *object* has a *class*, we need to define classes and then create *instances* of them (an object of class \( \mathcal{C} \) is an instance of \( \mathcal{C} \)). (Of course, Ruby also predefines many classes in its language and standard library.) The basic syntax (we will add features as we go) for creating a class \( \text{Foo} \) with *methods* \( m_1, m_2, \ldots, m_n \) can be:

```ruby
class Foo
  def m1
    ...
  end
  def m2 (x, y)
    ...
  end
  ...
  def mn z
    ...
  end
end
```

Class names must be capitalized. They include method definitions.
A method can take any number of arguments, including 0, and we have a variable for each argument. In the example above, \( m_1 \) takes 0 arguments, \( m_2 \) takes two arguments, and \( m_n \) takes 1 argument. Not shown here are method bodies. Like Haskell and Racket functions, a method implicitly returns its last expression. Like Java, you can use an explicit `return` statement to return immediately when helpful. (It is bad style to have a `return` at the end of your method since it can be implicit there.)

Method arguments can have defaults, in which case a caller can pass fewer actual arguments and the remaining ones are filled in with defaults. If a method argument has a default, then all arguments to its right must also have a default. An example is:

```ruby
def myMethod (x, y, z=0, w="hi")
  ...
end
```

**Instance variables**

An object has a class, which defines its methods. It also has instance variables, which hold values (i.e., objects). Many languages, including Java, use the term fields instead of instance variables for the same concept. Unlike Java, our class definition does not indicate what instance variables an instance of the class will have. To add an instance variable to an object, you just assign to it: if the instance variable does not already exist, it is created. All instance variables start with an @, e.g., @foo, to distinguish them from variables local to a method. (Ruby also has class variables, which are like Java’s static fields. They are written @@foo.)

Each object has its own instance variables. Instance variables are mutable. An expression (in a method body) can read an instance variable with an expression like @foo and write an instance variable with an expression @foo = newValue. Instance variables are private to an object. There is no way to directly access an instance variable of any other object.

**Calling methods**

The method call e0.m(e1, ..., en) evaluates e0, e1, ..., en to objects.
It then calls the method m in the result of e0 (as determined by the class of the result of e0), passing the results of e1, ..., en as arguments. As for syntax, the parentheses are optional. In particular, a zero-argument call is usually written e0.m, though e0.m() also works. To call another method on the same object as the currently executing method, you can write self.m(...) or just m(...). (Java works the same way except it uses the keyword this instead of self.)

In OOP, another common name for a method call is a message send. So we can say e0.m e1 sends the result of e0 the message m with the argument that is the result of e1. This terminology is “more object-oriented” — as a client, we do not care how the receiver (of the message) is implemented (e.g., with a method named m) as long as it can handle the message.

**Constructing an object**

To create a new instance of class Foo, you write Foo.new(...) where (...) holds some number of arguments (where, as with all method calls, the parentheses are optional and when there are zero or one arguments it is preferred to omit them). The call to Foo.new will create a new instance of Foo and then, before Foo.new returns, call the new object’s initialize method with all the arguments passed to Foo.new. That is, the method initialize is special and serves the same role as Java’s constructors. Typical behavior for initialize is to create and initialize instance variables. In fact, the normal approach is for initialize always to create the same instance variables and for no other methods in the class to create instance variables. But Ruby does not require this and it may be useful on occasion to violate these conventions. It is a run-time error to call Foo.new with a number of arguments that the initialize method for the class cannot handle.

**Expressions and Local Variables**

Most expressions in Ruby are actually method calls.
Even `e1 + e2` is just syntactic sugar for `e1.+ e2`, i.e., call the `+` method on the result of `e1` with the result of `e2`. Another example is `puts e`, which prints the result of `e` (after calling its `to_s` method to convert it to a string) and then a newline. It turns out `puts` is a method in all objects (it is defined in class `Object` and all classes are subclasses of `Object` — the next lecture discusses subclasses), so `puts e` is just `self.puts e`.

Not every expression is a method call. The most common other expression is some form of conditional. There are various ways to write conditionals; see the example code. As the next lecture discusses, loop expressions are rare in Ruby code.

Like instance variables, variables local to a method do not have to be declared: The first time you assign to `x` in a method will create the variable.

Everything is an Object

Everything is an object, including numbers, booleans, and `nil` (which is often used like `null` in Java). For example, `-42.abs` evaluates to `42` because the `Fixnum` class defines the method `abs` to compute the absolute value and `-42` is an instance of `Fixnum`. (Of course, this is a silly expression, but `x.abs` where `x` currently holds `-42` is reasonable.) All objects have a `nil?` method, which the class of `nil` defines to return `true` but other classes define to return `false`.

Like in Haskell and Racket, every expression produces a result, but when no particular result makes sense, `nil` is preferred style (much like Haskell’s `()` and Racket’s void-object). That said, it is often convenient for methods to return `self` so that subsequent method calls to the same object can be put together.
For example, if the `foo` method returns `self`, then you can write `x.foo(14).bar("hi")` instead of

```ruby
x.foo(14)
x.bar("hi")
```

Some Syntax, Semantics, and Scoping To Get Used To

Ruby has a fair number of quirks that are often convenient for quickly writing useful programs but may take some getting used to. Here are some examples; you will surely discover more.

- There are several forms of conditional expressions, including `e1 if e2` (all on one line), which evaluates `e1` only if `e2` is true (i.e., it reads right-to-left).

- Newlines are often significant. For example, you can write

```ruby
if e1
  e2
else
  e3
end
```

But if you want to put this all on one line you need to write `if e1 then e2 else e3 end`. Note, however, indentation is never significant (only a matter of style).

- Conditionals can operate on any object and treat every object as “true” with two exceptions: `false` and `nil`.

- You can define a method with a name that ends in `=`, for example:

```ruby
def foo= x
  @blah = x * 2
end
```

As expected, you can write `e.foo=(17)` to change `e`'s `@blah` field to be 34. Better yet, you can adjust the parentheses and spacing to write `e.foo = 17`. This is just syntactic sugar. It “feels” like an assignment statement, but it is really a method call. Stylistically you do this for methods that mutate an object’s state in some “simple” way (like setting a field).

- Where you write `this` in Java, you write `self` in Ruby.

- The methods of a class do not all have to be defined in the same place. If you write `class Foo ... end` multiple times in a program, all the methods are part of class `Foo`. (Any repeated methods replace earlier definitions, even for instances of the class that have already been created.)

- Remember variables (local, instance, or class) get automatically created by assignment, so if you mis-spell a variable in an assignment, you end up just creating a different variable.
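Several of these quirks can be seen together in one short example. The `Counter` class and its method names below are made up for this sketch; they are not from the lecture code.

```ruby
# Illustrative sketch of some of the quirks above (class and method
# names are invented for this example).
class Counter
  def initialize
    @count = 0
  end
  def count
    @count
  end
  def bump= n
    @count = @count + n   # looks like an assignment, but is a method call
  end
end

c = Counter.new
c.bump = 5 if true   # one-line conditional: reads right-to-left
c.bump = 2 if nil    # nil is treated as false, so this line does nothing
c.count              # 5
```

Note that `c.bump = 5` is really the message send `c.bump=(5)`, and the `if nil` guard makes the second send never happen, since only `false` and `nil` are treated as false.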
### Visibility, Getters/Setters

As mentioned above, instance variables are private to an object: only method calls with *that object* as the receiver can read the fields. As a result, the syntax is `@foo` and the self-object is implied. The syntax `self.@foo` is not allowed since it is redundant, and `x.@foo` would break the privacy rules. Notice even other instances of the same class cannot access the instance variables. This is quite object-oriented: you can interact with another object only by sending it messages.

Methods can have different *visibilities*. The default is `public`, which means any object can call the method. There is also `private`, which, as with instance variables, allows only the object itself to call the method (from other methods in the object). In between is `protected`: A protected method can be called by any object that is an instance of the same class or any subclass of the class.

There are various ways to specify the visibility of a method. Perhaps the simplest is that within the class definition you can put `public`, `private`, or `protected` between method definitions. Reading top-down, the most recent visibility specified holds for all methods until the next visibility is specified. There is an implicit `public` before the first method in the class.

To make the contents of an instance variable available and/or mutable, we can easily define getter and setter methods, which by convention we give the same name as the instance variable. For example:

```ruby
def foo
  @foo
end
def foo= x
  @foo = x
end
```

If these methods are public, now any code can access the instance variable `@foo` indirectly, by calling `foo` or `foo=` (and, as noted above, calls to the latter can be written as `e1.foo = e2`). It often makes sense to instead make these methods `protected`.
The `Rational` class in the associated code uses protected getter methods to good effect: The getters are needed to implement addition by another instance of `Rational`, but we do not make the numerator and denominator publicly available. The advantage of the getter/setter approach is it remains an implementation detail that these methods are implemented as getting and setting an instance variable. We, or a subclass implementer, could change this decision later without clients knowing. We can also omit the setter to ensure an instance variable is not mutated except perhaps by a method of the object.

Because getter and setter methods are so common, there is shorter syntax for defining them. For example, to define getters for instance variables `@x`, `@y`, and `@z` and a setter for `@x`, the class definition can just include `attr_accessor :x` and `attr_reader :y, :z` (`attr_accessor` defines a getter and a setter; `attr_reader` defines only getters).

Top-Level

You can define methods, variables, etc. outside of an explicit class definition. The methods are implicitly added to class Object, which makes them available from within any object’s methods. Top-level expressions are evaluated in order when the program runs. So instead of Ruby specifying a main class and method with a special name (like main), you can just create an object and call a method on it at top-level.

Dynamic Class Definitions

A Ruby program (or a user of the REPL) can change class definitions while a Ruby program is running. Naturally this affects all users of the class. Perhaps surprisingly, it even affects instances of the class that have already been created. That is, if you create an instance of Foo and then add or delete methods in Foo, then the already-created object “sees” the changes to its behavior. This is usually dubious style, but it leads to a simpler language definition: defining classes and changing their definitions is just a run-time operation like everything else. It can certainly break programs: If I change or delete the + method on numbers, I would not expect many programs to keep working correctly.
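The behavior described above, where already-created instances see later changes to their class, can be demonstrated with a tiny made-up example (`Widget` is not from the lecture code):

```ruby
# Sketch of changing a class definition at run time; Widget is an
# invented class name for this example.
class Widget
  def hello
    "hi"
  end
end

w = Widget.new   # created before the class changes

class Widget     # writing the class again adds to it, at run time
  def goodbye
    "bye"
  end
end

w.goodbye  # the already-created object sees the new method
```

The second `class Widget ... end` does not define a new class; it reopens the existing one, so `w` can respond to `goodbye` even though it was created earlier.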
It can be useful to add methods to existing classes, especially if the designer of the class did not think of a useful helper method.

Duck Typing

Duck typing refers to the expression, “If it walks like a duck and quacks like a duck, then it’s a duck,” though a better conclusion might be, “then there is no reason to concern yourself with the possibility that it might not be a duck.” In Ruby, this refers to the idea that the class of an object (e.g., “Duck”) passed to a method is not important so long as the object can respond to all the messages it is expected to (e.g., “walk to x” or “quackNow”). For example, consider this method:

```ruby
def mirror_update pt
  pt.x = pt.x * -1
end
```

It is natural to view this as a method that must take an instance of a particular class Point (not shown here) since it uses methods x and x= defined in it. And the x getter must return a number since the result of pt.x is sent the * message with -1 for multiplication. But this method is more generally useful. It is not necessary for pt to be an instance of Point provided it has methods x and x=. Moreover, the x and x= methods need not be a getter and setter for an instance variable @x. Even more generally, we do not need the x method to return a number. It just has to return some object that can respond to the * message with argument -1.

Duck typing can make code more reusable, allowing clients to make “fake ducks” and still use your code. In Ruby, duck typing basically “comes for free” as long as you do not explicitly check that arguments are instances of particular classes using methods like instance_of? or is_a? (see next lecture).

Duck typing has disadvantages. The most lenient specification of how to use a method ends up describing the whole implementation of a method, in particular what messages it sends to what objects. If our specification reveals all that, then almost no variant of the implementation will be equivalent.
For example, if we know \( i \) is a number (and ignoring clients redefining methods in the classes for numbers), then we can replace \( i+i \) with \( i*2 \) or \( 2*i \). But if we just assume \( i \) can receive the + message with itself as an argument, then we cannot do these replacements since \( i \) may not have a \( * \) method (breaking \( i*2 \)) or it may not be the sort of object that \( 2 \) expects as an argument to \( * \) (breaking \( 2*i \)).
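To make the duck-typing discussion concrete, here is a small “fake duck”: an object that is not a `Point` but responds to the `x` and `x=` messages, so `mirror_update` still works on it. The method is repeated from above so the sketch is self-contained; `NotAPoint` is an invented class name.

```ruby
# mirror_update, as in the example above
def mirror_update pt
  pt.x = pt.x * -1
end

# NotAPoint is a made-up class: it is not a Point, but it has x and
# x= methods, which is all mirror_update needs.
class NotAPoint
  def initialize
    @v = 3
  end
  def x
    @v
  end
  def x= n
    @v = n
  end
end

q = NotAPoint.new
mirror_update q
q.x  # -3
```

The call succeeds because Ruby checks only at the moment each message is sent whether the receiver can handle it; no class check ever happens.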
Connectivity via Web Services: An Analysis for Interoperable E-Commerce

Emmanuel Opara, Prairie View A&M University, Prairie View, Texas; (936) 857-4011, Fax (936) 857-2105; Emmanuel_Opara@pvamu.edu

Omprakash Gupta, Prairie View A&M University, Prairie View, Texas; (936) 857-4011, Fax (936) 857-2105; Omprakash_Gupta@pvamu.edu

ABSTRACT

The Web Services platform provides a systematic, interoperable messaging infrastructure on which enterprise systems build sophisticated and dynamic applications for data mobility on the web. This article examines how the scalability and interoperability of data using Web services promise a dramatic progression for both the technological and the business aspects of e-commerce.

INTRODUCTION

Global competition, strict regulatory mandates, and the desire for measurable business growth are driving the need for flexible and adaptable systems to support business agility in all types of industries. The ability to quickly access and report information in a consistent and timely manner can ensure that an organization is able to comply and compete with current and future mandates (Agrawal 2001). The scalability and interoperability of data using Web services show promise of a dramatic progression for both the technological and the business aspects of e-business strategies (Simeon 2001).
The procedures and manner in which corporate entities use the World Wide Web vary significantly. The majority of these organizations are starting to identify methodologies that will allow internal applications to interact with enterprise systems at other organizations through the application of Web services. The 21st century has shown continuous turbulence in the global economy, and this has led industry leaders to modify their strategies from the busted Internet-based dot-com strategy to a more focused global strategy that enables businesses to become more dexterous, efficient, and effective in their pursuit of excellence and competitiveness. Enterprise organizations are introducing a series of Web services into their information technology (IT) infrastructure to integrate applications within the enterprise and to sustain their markets or territories by reaching out to outside clients and customers. The effect is to create enterprise efficiencies and expose organizations to new sources of markets and revenue. Computing industry leaders are striving for solutions that provide a high level of interoperability across platforms, programming languages, and applications, enabling end-users to solve integration problems easily. The solution seems to lie in Web services. **PURPOSE OF THIS STUDY** The purpose of this study is to define and examine how Web services extend the capabilities of current technologies, by identifying innovative methodologies for achieving cross-platform interoperability based on Internet standards, and to show how Web services can contribute to organizational efficiency. **LITERATURE REVIEW** A Web service is a network-accessible function or application service that can be invoked using standard Web protocols. Such a service is applicable to all types of connectivity and networking environments. 
Web services support business-to-business, peer-to-peer, business-to-consumer, and department-to-department interactions (Coyle 2002). Studies have indicated that XML Web services are the primary building blocks in the progression toward distributed computing on the Web (Teng et al. 2002). Coyle (2002), Doman (2003), and Hooper (2001), among others, noted that Web services are loosely coupled, distributed software components that encapsulate business functionality and are programmatically accessible using standards that describe a service-oriented, component-based application architecture. Web services are projected to be the next generation in the evolution of software architecture because they represent a new model for software applications, with architectural features that distinguish them from earlier ones (Fan et al. 2000). Earlier studies found that enterprise systems have been trying to develop solutions for automating system-to-system collaboration, within or across trusted organizational domains, as organizations try to enhance productivity by reducing operating costs (Davis et al. 1989; Glushko et al. 1999; Hopkins 2000). More recent studies by Hampe (2001), Sens (2002), and Dyer (2003) have paid considerable attention to how Web service specifications can offer a communication conduit among the heterogeneous computational environments used in the development and hosting of system applications. Sens (2002) found that enterprise usage of Web services will reduce the costs of doing business and improve the efficiency of information technology (IT). Coyle (2002) concludes that by 2005, enterprise systems that have implemented Web services aggressively could expect a 35 percent increase in the efficiency of their IT development projects. 
Kotok (2000), McCright (2001), and Sarvega (2002), among others, found that B2B enterprises in the e-commerce environment that implement Web services will benefit by transforming the business processes through which enterprises collaborate with each other. Lin and Lu (2000) noted that Web services lie at the convergence of two technologies. The first is the evolution of Service-Oriented Computing (SOC), in which data and business logic are exposed to the Web through programmatic interfaces and integration. The authors emphasized that the origins of Service-Oriented Computing can be traced to earlier distributed component technologies such as remote procedure calls (RPC), interface description languages (IDL), and CORBA. The second is the Web infrastructure itself, a worldwide communications layer built on standard protocols such as the hypertext transfer protocol (HTTP), with the extensible markup language (XML) and the simple object access protocol (SOAP) as formatting languages. **THE NEED FOR INTEROPERABILITY ACROSS DIFFERENT PLATFORMS** The need for enterprise interoperability has been an issue in several industries that depend on information technology for competitiveness. Interoperability can be achieved using the open-protocol Web Services platform, as shown in Figure 1, where a Web service application is accessible over open protocols. **Figure 1: Scalability and Operability of Data** Business-to-Business Integration (B2Bi) and Enterprise Application Integration (EAI) are the areas of focus for interoperable e-commerce. B2Bi covers the business interactions between different enterprises: for example, when one business wishes to purchase raw materials or supplies from another, both organizations must interact and exchange information (Hampe 2001). 
A problem with connectivity might be that the two parties are not using the same technology suite and, as a result, cannot communicate in real time. Similarly, many enterprise organizations that are very successful on a given platform may want to extend their technologies to end-users, but end-users may not want to be locked into any particular platform or technology suite. As a result, interoperability becomes a major challenge for enterprise organizations striving for an above-average return on investment. Virtually every organization uses the Internet to streamline communication with business partners and to reduce cost. Their strategy is to ensure that enterprise systems are secure, reliable, and interoperable with little downtime. With systems continuously evolving, successful businesses can grow more rapidly and effectively. The enterprise application integration (EAI) environment creates an integrated infrastructure for connecting disparate enterprise systems, applications, and data sources across the organization. EAI solutions provide a bidirectional way to share and exchange data seamlessly among enterprise resource planning (ERP) platforms, data warehouses, databases, and other systems within the enterprise (Doman 2003). Consider, for example, a system comprising a Windows 98 LAN, a Windows Millennium Web server running ASP.NET, a Peachtree accounting system, and several Unix-based MRP and SAP systems. For the system to meet the needs of the enterprise effectively, these components need to collaborate by communicating and exchanging information (Kotok 2000). In the past, the problem in this type of operation has been how to make the various components talk to each other in real time. 
Most organizations have relied on short-term, expensive, and ineffective solutions, such as using printers and fax machines as part of their organizational integration strategy. Interoperability will become a reality when Web servers and Web browsers can communicate and become available on all major platforms. Successful integration will mean that Windows clients can access applications running on Unix servers, Macintosh clients can exchange information with mainframes, and servers running Windows can communicate with clients on Linux machines. **NETWORK TIERS AND SPECIFICATIONS USED IN WEB SERVICES** The specifications that play a major role in Web services include XML, SOAP, WSDL, and UDDI (see Figure 2). The major network tiers shown in Figure 2 depict the evolution of Web services. These layers are TCP/IP, HTTP/HTML, and XML; stacked successively on top of each other, they form the core of data movement on the Web. **The TCP/IP Protocol** This is the first of the network tiers. It is the backbone protocol of the Web, on which higher-level standard protocols such as HTTP and XML rely. This layer guarantees the movement of data across the wire in sets of packets, and it enables transmission across public networks by providing reliable data transport and physical connectivity (Kobielus 2003). **HTML over HTTP Layer** This is the second tier. It is a presentation tier that focuses on browser-based search, retrieval, and sharing of information on the Internet. The HTML-over-HTTP layer focuses on graphical user interface (GUI)-based navigation and on the manipulation of presentation formats on the World Wide Web (Kobielus 2003). **XML Layer** To achieve scalability and operability, the XML layer, the third and most compelling tier on the Internet, appears to provide the solution for machine-to-machine (M2M) communication. 
As a widespread service platform, XML provides a technical specification and a critical stepping stone in Web design and development (Hooper 2001). XML version 1.0 is a published open standard that is accessible for developers to build upon. XML, as a syntax for data description, is definition-driven; this is accomplished through the use of Document Type Definitions (DTDs) and schemas. XML allows information to be manipulated programmatically: tags and syntax can be agreed upon in advance, interfaces can be defined, and processing can be standardized. XML's strength is shown by its capability to integrate Web-resident data into enterprise applications and to coordinate the enterprise logic that holds component pieces together in a distributed system (Jiang 2000). XML is ideal as a transport framework for loosely coupled Web services. The effect is that scalability and operability are achieved; further, enterprise entities will see increased productivity and reduced cost, resulting in improved services (Roberto et al. 2001). **Web Services Description Language (WSDL)** WSDL is part of the XML family and is used for describing network services as a set of endpoints that operate on messages containing either document-oriented or procedure-oriented information. WSDL supports the description of endpoints and their corresponding behaviors. It is an additional layer over XML schema definitions that describes the actual messages transmitted over the network. **Simple Object Access Protocol (SOAP)** SOAP is a lightweight, extensible, XML-based protocol for information exchange in a decentralized, distributed environment. It defines the format of a message and the basic delivery options independent of programming language, operating system, or platform. SOAP is the communication protocol for XML Web services: a specification that defines the XML format for messages. 
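Because a SOAP message is ordinary XML, it can be assembled and inspected with standard XML tooling. The following minimal sketch is ours, not the article's; it uses only Python's standard library and a placeholder `ping` payload to show that the message format is plain, machine-manipulable XML.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Assemble a minimal envelope: an (empty) Header plus a Body that
# holds the payload.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
payload = ET.SubElement(body, "ping")
payload.text = "hello"

# Serialize, then parse as a receiving peer would: the payload sits
# at a well-known position inside the Body element.
message = ET.tostring(envelope, encoding="unicode")
parsed = ET.fromstring(message)
print(parsed.find(f"{{{SOAP_NS}}}Body/ping").text)  # prints "hello"
```

Nothing here is specific to any vendor toolkit, which is precisely the interoperability point: any platform with an XML parser can produce or consume the message.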
SOAP introduces an extensible framing mechanism for XML messages, built on the distinction between the headers and the body (payload), as shown in the skeleton below.

```xml
<soap:Envelope xmlns:soap="...">
  <soap:Header>
    <!-- extensible headers -->
  </soap:Header>
  <soap:Body>
    <!-- payload -->
  </soap:Body>
</soap:Envelope>
```

This section illustrates a "GetSpringSemesterGrade" Web request from the Records Department of the Prairie View A&M University Registrar's Office.

```xml
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Header>
    <t:Transaction xmlns:t="some-URI" SOAP-ENV:mustUnderstand="1">
      5
    </t:Transaction>
  </SOAP-ENV:Header>
```

Below, the "Body" content of the request completes the envelope.

```xml
  <SOAP-ENV:Body>
    <m:GetSpringSemesterGrade xmlns:m="some-URI">
      DEF
    </m:GetSpringSemesterGrade>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

Below is the response from the records department of the university for the student grade that was queried.

```xml
<SOAP-ENV:Body>
  <m:GetSpringSemesterGradeResponse xmlns:m="some-URI">
    <grade>A</grade>
  </m:GetSpringSemesterGradeResponse>
</SOAP-ENV:Body>
```

This example shows how SOAP operates. The request invoked the GetSpringSemesterGrade operation, and the response defines a "GetSpringSemesterGradeResponse" operation: a practice common to SOAP is appending "Response" to the name of the request operation to form the response structure. The output structure contains an element called "grade", which returns the result of the method invocation. **UNIVERSAL DESCRIPTION, DISCOVERY AND INTEGRATION (UDDI)** This technology is an industry-wide registry where available services on the Web are published. It is analogous to a yellow- or white-pages directory that holds listings of services available on the net. 
A UDDI server provides a single point of reference to all available services within an organization and standardizes how Web services are discovered. UDDI accomplishes this objective by defining a SOAP-based application programming interface (API) for querying centralized Web service repositories. With this technology, developers and programmers can discover the technical details of Web services. The illustration below shows Web services for an IT firm on an Enterprise Resource Planning (ERP) and Enterprise Application Integration (EAI) platform (Figure 3). The sequence of steps is as follows. First, the user logs on to the organizational portal to request information. The portal application, running within an application server, collects information about Web services from multiple internal applications; these are made available by the customer relationship management (CRM) and ERP applications through queries to the private UDDI registry. Next, the portal application receives information about the Web services offered by the internal applications via the private UDDI registry and invokes these services over the intranet. The location and WSDL binding information of the Web services is sent to the application server. Binding information for frequently used Web services is cached by the application to avoid resource-intensive and time-consuming dynamic binding. The diagram shows that the Web services loosely integrate the portal with the CRM and ERP applications. See Table 1 for a step-by-step narration. **Figure 3: Web Services in Action** (1) Request; (2) UDDI registry (private); (3) Get location and bind request; (4) Get user personal information; (5) Get user account information; (6) Send the information to the user. *Figure 3 – Lim et al. 
(2002)* Logical Procedure for Web Services <table> <thead> <tr> <th>Step</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>The client logs on to the enterprise portal and queries for information</td> </tr> <tr> <td>2</td> <td>The application supporting the portal framework retrieves information about the Web services provided by the CRM and ERP applications through the private UDDI registry</td> </tr> <tr> <td>3</td> <td>The location and WSDL binding information of the Web services is sent to the application server</td> </tr> <tr> <td>4</td> <td>The application invokes the Web services published by the CRM application and retrieves the user's personal information, such as name, social security number, mailing address, and email. The communication is based on SOAP</td> </tr> <tr> <td>5</td> <td>The application invokes the Web services published by the ERP application and retrieves the user's account information, such as account number, balance, and transaction history. This communication is based on SOAP</td> </tr> <tr> <td>6</td> <td>The information is then formatted and sent to the user.</td> </tr> </tbody> </table> (Table 1: The Cycle for Web Services - Kobielus, 2003) **IMPLICATIONS FOR DEVELOPERS** The study found that most of the building blocks needed for Web services, such as XML, WSDL, and SOAP (especially SOAP's simple, real-time, RPC-style components), are already available on every major computing platform. Many of the capabilities that B2B enterprises once acquired through proprietary e-commerce protocols will instead be delivered as Web services based on XML and HTTP. Further, a fully integrated suite of XML, SOAP, and WSDL will be used to enhance the scalability and interoperability of data on the Web. 
Examples of entities that can benefit from these services include credit-card processing institutions, Google's use of SOAP for its search-engine API, and Amazon.com's catalog queries via either SOAP or the less complicated HTTP GET and POST protocols. **CONCLUSION** In this study, we have demonstrated that Web services offer a platform-neutral approach for integrating applications, so that various entities can integrate diverse systems in a manner supported by standards rather than proprietary systems. The capability of an entity to move to real-time information spanning multiple machines, platforms, and peer-to-peer systems is one of the most important driving forces behind the adoption of Web services. Organizations that have not migrated to the Web services platform should do so now, because it will enhance the scalability and operability of data across platforms, resulting in improved productivity and a reduction in the cost of doing business. Further, we have established that Web services are one of the keys to data scalability and operability on the Web as B2B enterprises try to maintain an above-average return on their investments and sustain competitive advantage in the global economy. We also established that the sustained success of an e-business application requires the ability to execute prolonged, system-to-system, peer-to-peer collaboration among the participating entities, within or across the trusted domains of a business. Enterprise systems should start using Web services in internal application-integration projects at the programming interface, remote procedure call (RPC), and application programming interface levels. This process will acquaint the information technology team with the technology issues involved in using Web services for external application integration (B2B integration) projects. REFERENCES Caswell, S. 
"Credit Card Fraud Crippling Online Merchants." E-Commerce Times, March 2000.
Chiennici, R., and Sharma, R. "SOAP 1.1 in the Java Platform: Introducing the JAX-RPC Technology." Talk at the JavaOne developer conference, June 2001.
Sarvega. "Intelligent XML and Web Services Switching." ZapThink Briefing Note, November 2002.
A New Tool for Visualizing Blaise Logic Jason Ostergren, Rhonda Ash, The University of Michigan Overview The Health and Retirement Study (HRS) is a national longitudinal survey, administered biennially since 1992, on a variety of topics associated with aging and retirement. The HRS instrument is modified in the period between each wave both to improve existing sequences and to handle new circumstances, new research and new public policy, all while maintaining its longitudinal value. HRS has put considerable effort over the years into redesigning many sequences and sections scattered across the instrument. These redesigns require input from people at every level from the researchers to the programmers to the interviewers, with each group possessing different understandings of the workings of the instrument and what can or should change. One section which has been the focus of multiple redesigns relates to employment and pensions. HRS produced the third major redesign of this sequence for its 2012 instrument. This occurred, somewhat unusually, in a series of face-to-face meetings between HRS programmers, testers, specification writers, and Co-Investigators. The meetings typically involved the use of a projector to demo relevant parts of the instrument and the review of printed specification documents. The redesign was more laborious than it needed to be in part because it was difficult for all parties to envision the effects of changes as they were being discussed. Afterward, HRS began work on a tool to facilitate the redesign process in the future. The goal was to develop a visual representation of the instrument which could be useful in such a setting. The tool needed to allow easy-to-follow editing of the visualization which would help to preview the effect of a proposed change on flow and then could later be deciphered by programmers and turned into useable code. The result of this effort is a tool we refer to as “Visual Blaise,” which is the topic of this paper. 
Visual Blaise is a Windows desktop application that graphs the logic of a Blaise instrument one block at a time. It allows elements to be dragged and dropped, cut and pasted, and allows for the addition of new elements. Finally, it is capable of exporting a text file with the logic for each block written as a RULES section in correct Blaise syntax. Problem Description In the fall of 2011, HRS embarked on a months-long redesign of the portion of its instrument covering pensions. The pension section has a history of complex logic, spread out across multiple blocks and loops, interspersed with sections covering employment, job history, and other related topics. In fact, this section has seen rewrites and changes almost every wave since Blaise was adopted in 2002, but the changes for the 2012 wave were meant to be more fundamental. While the longitudinal value of the section obviously had to be preserved as much as possible, our Study Director wanted to completely change the structure of the data, consolidating all pension questions into one contiguous area of the instrument in a single array. In addition, he was willing to alter, drop or add content far more than would typically be allowed in a rewrite. Above all, the section needed to be made friendlier to respondents and interviewers, who had frequently gotten lost in prior designs, thus impeding the process of collecting high quality data. The process of this major rewrite constitutes the backdrop for the development of the Visual Blaise tool discussed in this paper. One part of that process was simply deciding on and prototyping a Blaise program structure that would roughly match the known indispensable content and the desired data structure outcome. Another part was analyzing and merging similar content from separate pension sections into one. Yet another was designing a flow which was flexible enough to handle the fact that respondents are often unsure about key information regarding their pensions. 
However, the development staff could not make the decisions about what to cut, what to add, and how to handle the new logic independently. The development of the rewrite became an iterative process (unusual for HRS) centered on weekly meetings in which programmers, spec writers and researchers were present together. Typically, the programmers would demo the latest version of the section, explaining limitations, and showing what would happen to particular types of respondents. The researchers would discover problems with the design and suggest solutions, tinker with question text, codeframes and skips, and would also propose new sequences to handle certain kinds of respondents or pensions more optimally. Spec writers would attempt to record the ideas and fixes that were suggested and turn them into actionable changes for the next round. As an example of this, the thorniest issues tended to revolve around the need to guide respondents into the correct sequences and to exclude them from other ones, all while providing escape hatches for respondents who get stuck in an inappropriate area. This is a uniquely difficult portion of the HRS interview precisely because respondents’ understanding of their own pensions is often sketchy. This means that when a major flow-control question comes up, the design has to take into account that the respondent may answer incorrectly and only later discover their mistake due to nonsensical follow-ups. Getting the design right was thus a dance among competing needs: the programmers’ need for logical clarity in order to write reliable code, the need to allow respondents multiple points at which to correct their paths, and the need to avoid asking too many clarifying questions, among other things. This unusual design process also highlighted a gap in our documentation abilities. 
While we could demo using the DEP and a projector and we could pass around questionnaire documentation, neither of these could really convey to everyone present how the sequences were bound together by the logic and what it would mean to move a sequence, for example, or add a new escape hatch question in the middle of a sequence. Existing flowchart tools like Delta have not gained favor at HRS, perhaps due to the unsuitability of the format for finding and understanding long complex sequences. We found ourselves making flowcharts by hand, which was time-consuming, or occasionally looking directly at code, which was unsuitable for some people present, or, most often, going with quick sketches on the whiteboard or nothing at all to look at when making decisions. This inability to discuss changes from a common understanding of the flow led to a lot of backtracking when discussions or demos in subsequent meetings revealed logical failures in the design. After the end of the development period for the HRS 2012 instrument, permission was obtained to work on a prototype for a tool that could answer the need for some way of visualizing survey logic which would be accessible to people with different backgrounds. Additionally, we wanted to be able to manipulate the visualization, so as to make it possible to observe the effects of changes immediately, particularly when there were several pieces to handle at the same time. That prototype, which we call “Visual Blaise” (a temporary name that may stick due to inertia), will be described in detail below. Solution Concept There are a few stylistic and organizational things about the way the flow chart in Visual Blaise operates which are intended to be different from typical flow charts and other Blaise tools such as Delta. 
We do not have a deep understanding of the history or classification of flow charts, so there is no grand theory behind the style of the diagrams used in this tool, only our observations about what ways of thinking about flow seemed to speak best to HRS staff. Figure 1: a portion of a block, containing the HRS questions “P047_” and “P113_” as shown in the Delta tool (note for comparison: the “P149_” variable shown at the bottom is actually a keep statement). Figure 2: the same logic, as shown in the “Visual Blaise” tool. The first significant stylistic difference was to make the chart horizontal with the shallowest layer of logic always at the top. Typical flow charts (and Delta) begin at the top and flow downward, expanding to the left and right, with “True” arrows sometimes angling rightward and other times angling left. The reason for making the chart horizontal is mainly about the screen real estate available on widescreen monitors and desks with multiple monitors (which are common at HRS). That is, a typical monitor setup can display much more logic horizontally than vertically. Perhaps the opposite is true if a paper printout is the goal, but we were more interested in a group looking at a screen in a meeting room (and, in any case, paper would not allow dynamic changes, so it couldn’t be the main view). The vertical ordering of the flow chart according to logical depth matters, in part, because one question that seems to float to the top of design discussions at HRS regularly is which questions are asked of (almost) everyone. When splits only angle in one direction and are consistent about which direction “True” and “False” arrows go, it is possible to glance across the top and instantly survey what are the common questions. Another ambition driving this project was to use colors and motion to improve understanding of the diagram. 
The arrows are color coded and dashed, if necessary, to help identify their role (bright white for “True” arrows, red for “Else” arrows, and gray/green for “False” arrows). It is possible to click and drag a statement icon around (on release, it snaps back into place) to clarify its connections in particularly busy segments. The way Visual Blaise handles “Else” is significant also. While “True” arrows always point diagonally downward and to the right, and “False” arrows always point horizontally or upward and to the right, “Else” arrows are distinguished by pointing directly downward. This helps to keep the diagram compact, but it also helps in situations where there is a series of “ElseIfs” by visually setting that construction apart from the normal flow.

The last significant difference in this visualization was to break it up by blocks. As was suggested above, part of the reason that few at HRS have been interested in Delta may be that navigating the instrument (to find the sequences of interest) is done via the statement tree control. The statement tree control is very difficult for HRS staff to navigate, because there are many layers of hard-to-follow conditions above the main blocks, making it difficult to unearth the major pieces of the block hierarchy. Of course there is a find feature, but the inability to just “see” the main blocks seems to scare off many users. What this means in Visual Blaise is that the visualization shows the logic for only one block at a time, and a user must navigate to child or parent blocks manually (via mouse, keys, or menu, as described later). At first this seems like a limitation, but since the logic of a block is largely independent of its parent or child blocks, it does not hinder understanding too badly.
More importantly, it limits the “canvas” a user is looking at to a manageable amount, free from artifacts of parent block logic, which tend to be decreasingly relevant the farther removed they are (HRS has a proliferation of testing conditions and gate conditions at the top level which clutter our variable universes and contribute to the aforementioned difficulty “seeing” the main blocks in the statement tree control).

**Object Structure**

This paper will not go into great detail about how Visual Blaise is programmed, except for two topics that relate directly to handling Blaise logic: importing the metadata and the internal object structure. This is helpful for understanding how editing works in this tool, which is described in the next section.

The first topic, importing, has to do with the fact that Visual Blaise does not use the Blaise API at all. Instead, it makes use of the xml file produced by HRS’ BlaiseRules tool described in the IBUC 2010 proceedings. That tool uses the RulesNavigator in the Blaise API to extract all the logic and metadata from a specified compiled Blaise file (.bmi) and outputs it in an xml format, organized hierarchically by logic. Visual Blaise reads this xml file and converts it into an object structure in memory, which it can manipulate in order to generate flow charts. It is important to note that the BlaiseRules tool is now incorporated into MQDS (as of April 2012), and the xml output discussed here is actually produced and saved as a byproduct of running MQDS, so anyone who has installed MQDS can make use of it.

One other point to make about importing is that Visual Blaise has an option to import an abridged version of the instrument, which ignores things like keep statements and assignments, in order to focus attention on asked questions. It retains any conditions necessary to reach the asked questions, but it ignores any conditions which only lead to discarded statements.
Visual Blaise permanently flags this version as abridged, and it is not possible to switch back to the full version. However, it seems quite useful for some people, since the reduced amount of logic makes it easier to understand.

While building this software, the notion was to develop (though what it was called was not investigated until it came time to write this paragraph) something like a “node graph architecture” for the core object structure and visualization. This would allow combining the visual advantages of a flow chart with the drag-and-drop editing capabilities that might be found in a tree view. Additionally, it was imagined as being coded like a linked list, so as to allow easy insertion and removal of elements. Under the hood, that meant that as the metadata was imported (as described above) it was converted into statement objects, which become the nodes in the graph. Each Statement object contains a minimum of one and a maximum of two object references. Most statements have only a “next statement” reference, much like a linked list. Condition and loop (for..do) statements, however, have two. Typically this would be a “True” statement, which represents where the program flows if the statement evaluates true, and a “next statement” for when it evaluates false. However, where there is an else (including elseif) involved, there is no real “next statement” (actually it is present, but only for cosmetic purposes) and instead there is a reference to an “Else” statement. With this set of nodes and links, the logic of any block can be described. After that, it is not terribly complicated to display them graphically.

As for the block hierarchy, as the metadata is imported, Visual Blaise keeps a running collection of these linked statement collections for each block. This master collection is keyed so that each set of statements can be matched with its corresponding parent block statement, so as to link the entire datamodel together.
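The node-graph idea described above can be sketched in a few lines. This is a hypothetical illustration, not Visual Blaise's actual classes (the tool is not written in Python): each node carries a "next statement" reference, conditions additionally carry a "True"-branch reference, and a condition's body runs until flow rejoins the condition's own next statement.

```python
# Hypothetical sketch of the node-graph structure: every statement has a
# "next statement" reference; conditions also have a "True"-branch reference.

class Statement:
    def __init__(self, name, kind="question"):
        self.name = name          # field name or condition text
        self.kind = kind          # "question" or "condition"
        self.next_stmt = None     # flow when done, or when a condition is false
        self.branch = None        # "True"-arrow target of a condition

def emit(stmt, stop=None, depth=0):
    """Walk the graph, emitting Blaise-style rules text. A condition's body
    runs until flow rejoins the condition's own next_stmt (the merge point)."""
    pad, lines = "  " * depth, []
    while stmt is not stop:
        if stmt.kind == "condition":
            lines.append(f"{pad}IF {stmt.name} THEN")
            lines += emit(stmt.branch, stop=stmt.next_stmt, depth=depth + 1)
            lines.append(f"{pad}ENDIF")
        else:
            lines.append(f"{pad}{stmt.name}.ASK")
        stmt = stmt.next_stmt
    return lines

# Build a small graph: a nested condition pair followed by B085_
b085 = Statement("B085_")
cond1 = Statement("B085_ = YES", kind="condition")
b086 = Statement("B086_")
cond2 = Statement("B086_ <> YES", kind="condition")
b087 = Statement("B087_")

cond1.branch, cond1.next_stmt = b086, b085   # false path skips to B085_
b086.next_stmt = cond2
cond2.branch, cond2.next_stmt = b087, b085   # both paths rejoin at B085_
b087.next_stmt = b085

print("\n".join(emit(cond1)))
```

Note that walking only the next-statement references from a block's first statement yields exactly the shallowest layer of logic, i.e. the row drawn across the top of the Visual Blaise chart, and the same traversal with END IFs re-inserted is essentially the rules export described under Output.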
**Editing Visually**

A big part of the goal of this tool is to allow editing of the logic within the flow chart. This makes it possible to experiment with changes in a way that is more accessible to non-programmers, with the idea that the results can be saved for later use by the programmer, who turns them into Blaise code. To that end, the ability to drag and drop or cut and paste statements in the flow chart was incorporated.

The drag-and-drop capability is limited to a single statement. If a selected statement is dragged over another and dropped, Visual Blaise will insert the dragged statement before the statement it is dropped on. The cut-and-paste capability operates similarly, via a right-click pop-up menu when a statement is selected. The “cut” statement is inserted before the statement where “paste” is selected. Cut and paste has the added advantage of being able to move the statement to another block.

The cut-and-paste feature can also operate on multiple statements, which is useful for moving whole sequences. Any sequence which can be selected can be cut and pasted in the same fashion as a single statement. Selection of multiple statements is possible by selecting a starting (leftmost) statement and then holding down the Shift key and pressing arrow keys until the desired sequence is highlighted, similar to highlighting text via keyboard in a word processor. However, selection is limited to prevent nonsensical segments from being operated upon. What this means is that when a user proceeds beyond a split (condition or loop) with the selection, the entirety of the loop or condition, including everything within it, is brought into the selection as a whole. If that is more than the user intends to move, then the user has to select the relevant subsidiary parts and move them one at a time.
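The drop-before and cut/paste operations described above are cheap on a linked structure, which is presumably why that representation was chosen: moving a statement only rewrites a few references. A self-contained sketch (hypothetical code, single-statement case only) of the B013_-before-B011_ move used as an example later in the paper:

```python
# Hypothetical sketch: cut a statement out of a linked chain and paste it
# before another statement, as drag-and-drop or cut/paste does.

class Stmt:
    def __init__(self, name):
        self.name = name
        self.next = None

def chain(names):
    head = cur = Stmt(names[0])
    for n in names[1:]:
        cur.next = Stmt(n)
        cur = cur.next
    return head

def names(head):
    out = []
    while head:
        out.append(head.name)
        head = head.next
    return out

def cut(head, target):
    """Unlink target from the chain; returns the (possibly new) head."""
    if head is target:
        return target.next
    prev = head
    while prev.next is not target:
        prev = prev.next
    prev.next = target.next
    target.next = None
    return head

def paste_before(head, target, stmt):
    """Insert stmt immediately before target, as a drop or paste does."""
    if head is target:
        stmt.next = head
        return stmt
    prev = head
    while prev.next is not target:
        prev = prev.next
    stmt.next = target
    prev.next = stmt
    return head

# Move B013_ to before B011_:
head = chain(["B011_", "B012_", "B013_"])
b013 = head.next.next
head = cut(head, b013)
head = paste_before(head, head, b013)
print(names(head))   # ['B013_', 'B011_', 'B012_']
```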
There were potentially other ways to handle this multi-select functionality, but we opted for the one which seemed most likely to prevent the user from unintentionally breaking the structure of nested conditions. Finally, Visual Blaise allows other operations to be performed on selected or multi-selected statements, such as deletions or inserts of new statements.

**Output**

Once changes to logic have been specified, they can be saved in Visual Blaise format and retrieved later. It is assumed that ultimately a programmer will look at the changes and implement them in the Blaise editor. There are a few ways a programmer could approach this. One would be to work directly from the Visual Blaise flow chart on screen, or from a printed copy of the relevant blocks, if appropriate. Another would be to work from a log of changes produced by Visual Blaise while edits are taking place. Finally, Visual Blaise allows any block (or all blocks) to be exported to a text file showing the rules section(s). The output is formatted so it can be pasted over the existing rules of a block and compiled in Blaise (assuming the Fields and other parts are already set), with END IFs and the like properly inserted. However, comments and some other markup are not preserved, so a more likely use case is that a programmer would simply use this output as a specification, or else use a compare tool to merge the changes.

**Operating Details of Visual Blaise**

The remainder of this paper describes how Visual Blaise works from the user perspective, by illustrating some of the main functions.

**Preparation Before Opening Visual Blaise**

In order to run Visual Blaise, you will need the compiled files from your survey (.bmi, .bxi, etc.). A separate program must be used to generate an XML file from those compiled files for use as input. As noted above, this can be done by the program called “Blaise Rules” included with MQDS, or by simply running MQDS itself.
The XML that Blaise Rules generates is in the format that Visual Blaise uses to construct a flowchart.

**Figure 3: an example of a question statement in XML output from Blaise Rules**

```xml
<Statement id="1499.2" xmlns="http://www.w3.org/1999/xhtml">
  <StatementText>Were you born in the United States?</StatementText>
  <StatementType>Assertion</StatementType>
  <InvolvedFields>
    <InvolvedField>
      <LocalName>BORN</LocalName>
      <OriginName>BORN</OriginName>
    </InvolvedField>
    <FieldMethod>Simple</FieldMethod>
    <FieldKind>single</FieldKind>
    <FullyQualifiedNameFieldName>Born_B002_</FullyQualifiedNameFieldName>
    <DisplayDataType>text</DisplayDataType>
    <Required>True</Required>
    <Default><Display>Yes</Display><DefaultValue>Y</DefaultValue></Default>
    <RefusedAllowed>True</RefusedAllowed>
  </InvolvedFields>
  <Languages>
    <Language LanguageId="1" LanguageName="CORENG" LanguageDesc="" LanguageHidden="False">
      <QuestionText>&#160;Were you born in the United States?</QuestionText>
      <Description>BORN IN US</Description>
      <Code id="1" name="YES">
        <CodeText>Yes</CodeText>
      </Code>
      <Code id="5" name="NO">
        <CodeText>No</CodeText>
      </Code>
    </Language>
  </Languages>
</Statement>
```

**Visual Blaise Functionality**

Once you have obtained an appropriate XML from your survey, run the Visual Blaise executable. You will see a standard Windows menu and several tabs that will be described below.

Figure 4: the starting screen of Visual Blaise

**File Menu**

There are several File->Open menu options to choose from. The **From XML** option will provide a full view of the metadata with KEEPs, SHOWs, CHECKs, SIGNALs, and expanded logic. This view is great for a technical review, but may be too much information for a casual observer who does not need to know all the internal Blaise details. The **From XML, Abridged** option will omit several background elements and show only the major flow information. This view may be easier to follow for the “not-so-techy” users.
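The import step behind these two menu options can be sketched briefly. The element names below follow the Figure 3 example; the abridged filter shown here (dropping Keep/Assignment statement types) is our illustration of the idea, not the tool's exact rule.

```python
# Sketch of importing Blaise Rules XML into (id, type) statement records,
# with an optional "abridged" filter. Illustrative only.
import xml.etree.ElementTree as ET

xml_doc = """
<Rules>
  <Statement id="1499.1"><StatementType>Keep</StatementType></Statement>
  <Statement id="1499.2">
    <StatementType>Question</StatementType>
    <InvolvedFields>
      <InvolvedField><LocalName>BORN</LocalName></InvolvedField>
    </InvolvedFields>
  </Statement>
</Rules>
"""

def load_statements(text, abridged=False):
    root = ET.fromstring(text)
    out = []
    for st in root.iter("Statement"):
        kind = st.findtext("StatementType")
        if abridged and kind in ("Keep", "Assignment"):
            continue  # the abridged view drops statements not shown on screen
        out.append((st.get("id"), kind))
    return out

print(load_statements(xml_doc))                 # full view: both statements
print(load_statements(xml_doc, abridged=True))  # abridged: question only
```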
**Figure 5: Visual Blaise showing all statements, including keeps, assignments and conditions that do not lead to asked questions**

**Figure 6: Visual Blaise showing an abridged version of the same sequence**

**Edit Menu**

If you are looking for a specific variable within the block in which you are currently located, you can use the **Edit → Find** menu function, or [CTRL]+F. Then, [F3] will continue searching for the next instance of that variable. The Edit menu also allows **Undo** (up to 4 levels), or [CTRL]+Z.

**View Menu – Select Block: Finding Where and Getting There**

It can be frustrating to find a variable in a very large instrument. Visual Blaise provides the ability to move to a specific block without having to navigate through the editor, block by block. This feature is located under the View menu. A drop-down of blocks is shown, with sub-blocks provided as off-shoot drop-downs. This makes it easy to see the structure of your survey and move to the block that you wish to review and alter.

**Figure 7: selecting a block via the “View” menu**

**Window Menu – Show Key**

To help with navigation and the symbols, there is a key, which is toggled on and off in the Window menu.

**Attribute Tabs**

At the bottom of the screen there are three tabs: “Statement Text”, “Question and Codes” and “Warnings”. As you move around in the flow diagram you will see attributes of each element presented. The information displayed depends on the type of element selected.
**Statement Text Tab**

Figure 9: with a variable selected

Figure 10: with conditional routing selected, the “Statement Text” tab shows the text of the condition

- **Checks or Signals** are represented by two elements, one indicating a hard or soft check and the other with the content of the check in the “Statement Text” tab: `((B006_ >= 1880) AND (B006_ <= A501_CurDateYear)) OR (B006_ = NONRESPONSE)`
- **Calculations** show the assignment in the “Statement Text” tab: `B005S := 'Mexico - assigned'`
- **Procedures or Blocks** display the call and the parameter list: `TXT_B094 {Procedure} (Parameter list: piRespondents1X065ACouplenss, piInitA106.NumContactKids, FLB094)`

**Question and Codes Tab**

The Question and Codes tab displays the variable text and code frame.

**Warnings Tab**

The Warnings tab provides an alert to possible errors when changing the flow location of a variable. For example:

8:56 PM: Moved or cut B008 used in (Check Statement) NOT ((B006 = EMPTY AND B007 = EMPTY) AND B008 = EMPTY) CORENG "You must answer one of these categories"

**Use Case**

The following section describes the design process and the uses of this tool in that process. We will walk through a few of the most commonly requested activities, including maneuvering through the flow chart, adding and deleting variables, manipulating variables, and tracking connections.

Starting with the specifications from the last wave, the first step is to review any problems reported during data collection and determine the need to alter the specifications. The relevant Co-Investigators, by reviewing data and monitoring current affairs, will request changes to the specifications. Our Design Coordinators collect these requests and oversee the development process of the main survey instrument.
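A warning like the one quoted under the Warnings tab above can be produced by a simple reference scan. This sketch uses hypothetical names (it is not HRS code): after a field is cut or moved, the remaining statements are scanned for any text that still mentions it.

```python
# Hypothetical sketch of the Warnings tab check: after a field is cut or
# moved, flag any remaining statement whose text still references it.

def reference_warnings(field, statements):
    """statements: (kind, text) pairs; returns one warning string per
    statement whose text still mentions the cut/moved field."""
    return [
        f"Moved or cut {field} used in ({kind}) {text}"
        for kind, text in statements
        if field in text
    ]

block = [
    ("Check Statement", "NOT ((B006 = EMPTY AND B007 = EMPTY) AND B008 = EMPTY)"),
    ("If Condition", "B006 = YES"),
]
print(reference_warnings("B008", block))   # flags only the check statement
```

A real implementation would match whole field names rather than substrings, which is one reason the paper describes warnings as a work in progress.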
This process starts with making decisions and developing specifications based on these change requests, on field feedback, on staff analysis of data, and on wait-list items that were not handled in the previous wave. Programmers then add new content, cut unneeded fields, or change the sequence of the questions that are asked. For more complicated sequences, a tool like Visual Blaise provides a visualization of a change and its impact on the block before the change is actually made. Users are able to add, delete or change the location of a field and see what happens to the flow.

**Getting Familiar with the Tool – Mouse and Keyboard Controls and Feedback**

When the user clicks the mouse on a statement, it becomes selected and Visual Blaise offers some visual feedback:

- A flashing border is drawn around the selected statement.
- An asked question which is selected also displays its descriptor in a transparent gray box below the icon.
- A flashing highlight is drawn around the routing arrow extending from the statement, or from multiple arrows if the selected statement is a condition.

Figure 12: the highlighted border, routing arrow, and descriptor box of a selected question

- Clicking and dragging a statement allows the user to see more clearly, via the motion of the arrows, where the connections lead when a segment is particularly busy.
- Pressing the right arrow key or the down arrow key moves the focus (and highlighting) from one selected statement to the next via the routing arrows. Pressing the left arrow key moves the focus to the previously selected statement (reversing the previous “history” of key presses, so it will not move further back than the selection starting point).
- Pressing the Enter key or double-clicking on a Block or Procedure statement will change the view to show the rules for that sub-Block or Procedure instead.
- Pressing the Backspace key or double-clicking on the “Back” return arrow icon at the rightmost end of the rules will return the view to the parent Block or Procedure.
- Holding Shift and pressing the right arrow key expands the selection to include multiple statements for the purpose of performing large moves or cuts.

**Adding a Field**

Via the View menu, Find function, or mouse and keyboard controls, the user can locate the part of an instrument where a change, such as adding a new question, needs to take place. Once at that location, right-clicking on a statement will cause a pop-up menu to appear with options for inserting, deleting, or cutting statements. Insert allows selection of what type of element you want to add. Selecting a Question or Block type prompts the user for the name of the new question or block.

Figure 13: pop-up prompt for field name

The new field will be inserted to the left of that element. (In the future you will be able to enter the question text and codes here, but for now only the Rules section is included in the output.)

**Deleting a Field**

Similarly, a selected question (or other statement) can be deleted via the same right-click menu. The Warnings tab will display references to the deleted field that need to be deleted or fixed (however, this is not comprehensive; warnings are still a work in progress at this stage).

**Moving Elements Around**

The following example illustrates a field move request that utilizes Visual Blaise’s element moving capabilities. Suppose that new fields were inserted last wave and, after reviewing the data, the investigators would now like to move fields around for better flow and coverage. They decide to move B013_ to before B011_, which can be accomplished in Visual Blaise by clicking and dragging B013_ above B011_ and dropping it there.
Since there is no logic dependent on B013_, the dragged statement will be inserted to the left of B011_ and no warnings will be generated. Notice that all of the flow arrows automatically realign as appropriate for this change.

More complicated moves might produce warning messages. If there are references to the element being moved in other statements passed over by the move, each reference should be reviewed and altered, if needed. For example, if a field is moved to a location after a conditional statement that references it, that reference most likely needs to be deleted. The warning messages are meant to alert the user to such contingencies. Example:

FieldA
if FieldA = yes then
FieldB
endif

If FieldA is moved to a location after FieldB, the conditional routing reference may need to be deleted or updated to something else.

Another issue that arises during section redesign efforts is the need to move a large group of logic as a unit. To select more than one element, first click on the statement you want to start with and then press Shift-right arrow until all desired statements are selected.

**Figure 16: selecting multiple statements following the question B085**

Then, to move the selected group, right click on the group and select “cut.” Next, select the element before which the group should be inserted, right click and select “paste.”

**Figure 17: multiple selected statements cut-and-pasted before B085**

In the above example, the Warnings tab reports that the far right IF condition in the moved group refers to the field that now comes after it. This flow will need to be fixed or deleted.

**Producing and Reviewing Output**

Now that the flow has been reviewed and updated, it is possible to output the changes into a Rules section readable by Blaise. Selecting “Save” from the file menu offers two options: to save the flow to a Visual Blaise file or to save to a Blaise rules text file.
- Saving to a Visual Blaise file will save any changes to this point so you can retrieve them and continue later.
- Saving to a Rules file will produce a text file with the logic that is represented in the Visual Blaise view. An option to save the Rules section of every block or only the currently viewed block is provided.

The output after the change from the above example would appear like this:

```
IF B085_ = YES THEN
  B086_.ASK
  IF B086_ <> YES THEN
    B087_.ASK
  ENDIF
ENDIF
B085_.ASK
```

**Conclusion**

We have only just finished this tool as a beta version and begun to use it in our design process. We are hopeful that this tool will allow investigators and coordinators to experiment with the design of a section and refine the flow to an optimal state more efficiently, and with a better understanding of what they will be able to achieve by making each change. We believe this tool will also result in considerable time savings for the programmer, as the output can potentially be merged into the main instrument and, with a little “tweaking,” become the next wave’s rules.
<table> <thead> <tr> <th>Title</th> <th>Speech processing tools - An introduction to interoperability</th> </tr> </thead> <tbody> <tr> <td>Authors</td> <td>Christoph Draxler, Toomas Altosaar, Sadaoki Furui, Mark Liberman, Peter Wittenburg</td> </tr> <tr> <td>Citation</td> <td>Proc. INTERSPEECH 2011, , , pp. 3229-3232,</td> </tr> <tr> <td>Pub. date</td> <td>2011, 9</td> </tr> <tr> <td>Copyright</td> <td>(c) 2011 International Speech Communication Association, ISCA</td> </tr> <tr> <td>DOI</td> <td><a href="http://dx.doi.org/">http://dx.doi.org/</a></td> </tr> </tbody> </table> Powered by T2R2 (Tokyo Institute Research Repository) Speech Processing Tools – An Introduction to Interoperability Christoph Draxler¹, Toomas Altosaar², Sadaoki Furui³, Mark Liberman⁴, Peter Wittenburg⁵ ¹Institute of Phonetics and Speech Processing, LMU Munich, Germany ²Aalto University School of Science and Technology, Espoo, Finland ³Dept. of Computer Science, Tokyo Institute of Technology, Japan ⁴Dept. of Linguistics, University of Pennsylvania, Philadelphia PA, USA ⁵Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands draxler@phonetik.uni-muenchen.de, Toomas.Altosaar@hut.fi, furui@cs.titech.ac.jp, myl@cis.upenn.edu, Peter.Wittenburg@mpi.nl Abstract Research and development in the field of spoken language depends critically on the existence of software tools. A large range of excellent tools have been developed and are widely used today. Most tools were developed by individuals who recognized the need for a given tool, had the necessary conceptual and programming skills, and were deeply rooted in the application field, namely spoken language. Excellent tools are a prerequisite to research. However, tool developers rarely receive academic recognition for their efforts. Journals, conferences and funding agencies are interested in the results of the work on a research question while the tools developed to achieve these results are of less interest. 
This makes it difficult to publish articles on tools, and it is next to impossible to obtain funding for their development. The Interspeech 2011 special event on speech processing tools aims to provide a forum for tool developers to improve their academic visibility and thus enhance their motivation to continue developing the software needed by the community. This year, the focus is on interoperability: how can different tools be integrated into the workflow, data be exchanged between tools, queries work across tools, and a consistent user interface be achieved.

1. Introduction

Historically, the development of software tools for speech processing can be divided into three phases. In the pre-PC era, processing power, memory and network bandwidth were limited, and only very few institutions could afford computers. Furthermore, only trained operators could use the software, which in general was highly specialized. In this era, the mainframe was the dominant system architecture, and software was often designed to reuse existing modules and to share data since resources were limited.

The PC era saw an exponential increase in processing power and available memory, and a high degree of standardization for hardware, programming languages and graphical user interfaces. A typical desktop computer allowed processing hitherto unknown amounts of data, while standardization resulted in software being developed for large audiences. Researchers, technology developers and the general audience now had access to sophisticated software tools for processing speech, ranging from simple audio recording to annotation editors, signal processing, speech recognition and speech synthesis systems. As a side effect, the "old" ideals of tools sharing resources and data became less and less important.

In the current Internet era, corpus and data sizes have outgrown the capabilities of individual workstations.
Resources are now increasingly made available on the web, and web services provide the necessary processing power. This will create a fundamental change in the way research is performed: researchers will expect to have their data, as well as their favorite working environment and tools, accessible in a virtual workplace in a browser.

During the transition from the pre-PC era to the post-PC era, several software systems were developed for operating on discrete-time signals, and more specifically on speech. They represent real cases where software development for signals and speech was funded and reported through Ph.D. theses, reports, and deliverables that nearly every speech researcher is aware of, has used, or is still using today. Today's speech researchers may not be aware that early software tools played an integral role in speech research, with far-reaching effects. For example, the TIMIT corpus was processed and annotated on a software system called SPIRE [1] that ran on Symbolics Lisp machines and was freely available to universities via a license. SPIRE itself was influenced by earlier work by Gary Kopec, who formulated a representation for discrete-time signals in software [2]. Kopec extended his theoretical work to cover both signal representation and processing, as well as a system called SDB [3] that was used for the storage and management of signals and their properties. This latter work was used especially for speech processing research and applications. Funded work on other speech-specific software tools also existed, e.g., QuickSig [4], which was applied to speech corpus annotation and speech processing, as well as Stanis-LAX, which noted the need for advanced tools and architectures for speech processing environments [5].

1.1. Workflow

The typical phases of an end-to-end workflow in speech processing, along with the associated data types and a selection of well-known tools, are shown in fig. 1.
Clearly, proceeding through the workflow involves the use of many different tools, but even more it requires handling different types of data: signal data (audio, video, sensor), graphics data, program code, and plain, mark-up or styled text. Presently, every tool in the workflow uses its own native data format. Some tools provide import and export functions, but significant amounts of data conversion and reformatting are still necessary. A global data model for the speech processing workflow, based on the concept of a corpus, has been proposed by [6]. Such a data model will be immediately useful for web-based and collaborative work, but transformation costs, both in terms of converting data and in terms of adapting software to access data in the global data model, are high.

1.2. Tools workshops and conferences

In recent years there have been a number of initiatives to disseminate knowledge about existing tools, to share development efforts, and to bring together researchers and tool developers to define the requirements for future tools. These initiatives include, e.g., a workshop on standards for phonological corpora organized by U. Gut in Augsburg in 2009, the workshop on 'New Tools and Methods for Very-Large-Scale Phonetics Research' organized by the Phonetics Laboratory of UPenn in 2011, and others, such as the European CLARIN project [7]. As a result, informal cooperation between tool developers and discussion groups has been established: for example, the effort of the developers of ELAN, ANVIL and EXMARaLDA to define and implement a common data exchange format [8], the language archive technology (LAT) discussion groups hosted by the Max Planck Institute for Psycholinguistics in Nijmegen, and the newsletters and discussion forums of LDC, ELRA and other speech resource providers. However, contributions to these initiatives do not result in publications, let alone high-impact publications.
In the last 10 years, only one issue of 'Speech Communication' has been devoted to tools (Vol. 33, Numbers 1-2, 2001). The Romans knew what this means: *Quod non est in actis non est in mundo*. On the other hand, tools are mentioned in many publications. Table 1 summarizes the results of a simple Google search for a given tool in the body of web-accessible PDF documents of large speech-related conferences.

1.3. Interoperability

Encyclopedia Britannica defines interoperability in the context of computer science as the ability of the user of one member of a group of disparate systems (all having the same functionality) to work with any of the systems of the group with equal ease. The English Wikipedia features several definitions of interoperability, ranging from very general to domain-specific, e.g., software interoperability. In the context of speech processing tools, the following definition from the IEEE Glossary is useful: interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged. The Wikipedia article distinguishes syntactic from semantic interoperability and has a paragraph on software interoperability; it also emphasizes the importance of standardization.

2. Standardization

With large amounts of text becoming available on the Internet, and with the growing number of well-defined speech and text corpora, standardization has received increased attention. Several standards bodies have taken up the task of standardizing terminology, methods and tools for speech and text processing. These bodies include institutions such as the International Standards Organisation (ISO), the World Wide Web Consortium (W3C) and national standards bodies, interest groups such as the Text Encoding Initiative (TEI) and the ISOcat data category registry, and industry consortia such as the MPEG forum.

3.
Semantic interoperability

Among spoken language researchers and technology developers there is now a general consensus about the entities involved in speech processing. Speech is a *time-dependent physical signal*; it is processed using digital signal processing methods and algorithms, and it is described (transcribed, segmented, annotated, etc.) using textual representations. A speech corpus is a structured collection of speech, possibly derived signal data, and textual data [9]. A segment is a tuple consisting of an anchor, an optional extension, and a label. In time-dependent segments, the anchor is a time point and the extension is a duration. In time-independent segments, the anchor is a unique identifier. The label describes the segment contents, commonly in free-form or structured text. A tier holds segments of a common type. Within a tier, segments are ordered, e.g., according to their time points or a sequence index. Constraints govern the organization of segments in a tier: segments may overlap, there may be gaps, etc. For a given collection of speech and text data, there may exist many tiers. In general, tiers relate to each other by some relationship which can be named and which specifies quantitative constraints. The schema specifies the tiers and the relationships between the tiers. A schema not only allows checking the formal integrity of corpus data, but it also supports semantically meaningful cross-tier queries and may be used to build flexible query and visualization tools. The Annotation Graph formalism provides a formal framework for speech processing, notably the annotation of speech [10]. However, it lacks a schema description. Many speech processing tools implicitly implement a schema in which the tiers are independent of each other. Other tools have implicit or explicit 1:n relationships; only Emu has explicit 1:n and n:m relationships [11].

3.1. Data model overview

A data model is a formal framework describing a collection of data.
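The segment/tier model described above can be sketched in code. The following is a minimal illustration, not taken from any of the cited tools; all class and field names are hypothetical, and only the time-dependent case with a no-overlap constraint is shown.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Segment:
    """A segment: an anchor, an optional extension, and a label."""
    anchor: float               # time point (time-dependent tier)
    extension: Optional[float]  # duration, or None
    label: str

@dataclass
class Tier:
    """A tier holds segments of a common type, ordered by anchor."""
    name: str
    segments: list = field(default_factory=list)

    def add(self, seg: Segment) -> None:
        self.segments.append(seg)
        self.segments.sort(key=lambda s: s.anchor)

    def has_overlaps(self) -> bool:
        # Check the no-overlap constraint mentioned in the text.
        for a, b in zip(self.segments, self.segments[1:]):
            if a.extension is not None and a.anchor + a.extension > b.anchor:
                return True
        return False

words = Tier("words")
words.add(Segment(0.00, 0.35, "hello"))
words.add(Segment(0.35, 0.50, "world"))
print(words.has_overlaps())  # False: the segments abut but do not overlap
```

A schema, in this sketch, would be a further layer naming the tiers and the relationships (1:n, n:m) allowed between them, so that cross-tier queries can be validated before execution.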
The hierarchical data model organizes data in 1:n relationships: there is a single root element, only one path from the root to any given element, and the relationships are not named. The network data model allows more than one root element and named 1:n relationships; different paths are distinguished by name. The query languages in both data models are navigational, i.e., searching for a specific element is achieved by traversing the hierarchy or network. The relational data model organizes data in relational tables with attributes; relationships are expressed via the relational join operator, based on comparisons between attributes of relational tables, and in principle, relationships may exist between any two relational tables. The query language is declarative, i.e., queries are formulated using relational algebra or calculus, and the concept of navigation is alien to the relational model. The object-oriented data model distinguishes two basic types of relationship: is-a and uses. The is-a relationship organizes all data in a hierarchy (single inheritance) or a network (multiple inheritance); the uses relationship is named and may link arbitrary elements. Object-oriented query languages are navigational. Mathematically, the above data models can be seen as graphs with different directionality, quantity and naming constraints.

3.2. Data models for speech processing

The data types relevant for speech processing are highly structured: data streams, sequences, trees and network structures are common. Generally speaking, the relational data model is an abstract representation of data; any structure can be represented, but this inflates the number of relational tables and results in lengthy queries. However, due to the simplicity of the data model and the small number of basic data types, relational models of speech corpora will always be very similar, facilitating the exchange of data and the reuse of code.
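A relational model of a speech corpus, as discussed above, can be made concrete with a toy example. The schema below (tables `tier` and `segment`, and the containment query) is purely illustrative and not drawn from any existing tool; it shows how a cross-tier query becomes a declarative join.

```python
import sqlite3

# Toy relational model: one table per concept; cross-tier queries via joins.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tier(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE segment(
    tier_id INTEGER REFERENCES tier(id),
    start REAL, duration REAL, label TEXT);
""")
con.execute("INSERT INTO tier VALUES (1,'word'),(2,'phone')")
con.executemany("INSERT INTO segment VALUES (?,?,?,?)", [
    (1, 0.00, 0.35, "hello"),
    (2, 0.00, 0.10, "h"), (2, 0.10, 0.15, "eh"), (2, 0.25, 0.10, "l"),
])

# Cross-tier query: all phones temporally contained in the word "hello".
rows = con.execute("""
    SELECT p.label FROM segment w
    JOIN tier tw ON tw.id = w.tier_id AND tw.name = 'word'
    JOIN segment p ON p.start >= w.start
                  AND p.start + p.duration <= w.start + w.duration
    JOIN tier tp ON tp.id = p.tier_id AND tp.name = 'phone'
    WHERE w.label = 'hello'
    ORDER BY p.start
""").fetchall()
print([r[0] for r in rows])  # ['h', 'eh', 'l']
```

Note how the same two tables serve every tier; this uniformity is what makes relational corpus models similar across tools, at the price of verbose queries.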
The object-oriented data model is a natural choice for representing the richness of speech processing data. However, due to the unlimited possibilities of constructing complex data types out of other data types, no two object-oriented models of speech corpora will look the same, and hence the exchange of data and the reuse of code are heavily affected.

4. Syntactic interoperability

Syntactic interoperability must be divided into at least four levels: the basic encoding level, the data format level, the implementation level, and the programming interface or service level.

4.1. Encoding level

The encoding level comprises the digital representation of the basic data types: signal data and text data. Many different audio, video and sensor data formats have been defined; they are often tool- or platform-specific. For uncompressed data and open data formats, this does not pose a problem – in general, data can be converted without loss from one format to another. For text data, Unicode is becoming the de facto standard character set. However, the transition from older character sets and encodings to Unicode is still in progress, and unfortunately, some tools do not yet support Unicode. Although interoperability on the encoding level is feasible – in general, a round trip is possible – it often requires a lot of manual work and can be quite time-consuming.

4.2. Data format level

The data format level comprises the organisation of the data as it is being used by an application or a web service; it is what is commonly known as an application-dependent file format, e.g., a Praat TextGrid or an ELAN annotation file. Only a minority of the file formats in use by today's tools are formally defined with publicly available specifications. Some tools require files to be in a given storage location, others do not. None of the common tools stores its data in a DBMS or has an interface to a database system.
Since many different file formats have become de facto standards, most tools provide import and export functionality. However, most work on individual files or directories and cannot process entire corpora. Converting from one file format to another normally leads to loss of information, and in general, round-tripping is not possible.

4.3. Implementation level

The implementation level relates to the programming languages and libraries used to implement a given speech processing application. Naturally, technology advances: applications become available or software becomes obsolete. Interoperability is a moving target. Currently, the programming languages most commonly used for the implementation of speech processing tools are strongly typed and compiled languages like Java, C++ and C, and interpreted script languages like perl, tcl and python, as well as JavaScript. Besides these, application- or domain-specific languages are used, e.g., the built-in script language of Matlab, the statistics language R, or the database query language SQL. Every language comes with its own syntax, execution model and basic data types, and with its specific programming libraries. Some languages provide features to include precompiled modules, thereby allowing the reuse of existing code.

4.4. Programming interface or service level

Many program libraries are accessible via public interfaces. Wrapper classes take care of the conversions necessary on the data format level and thus enable the exchange of data. Such interfaces are common in high-level applications, e.g., spreadsheet programs and statistics packages. Carrying the concept of client-server computing one step further is cloud computing. Here, a user establishes a virtual workspace, where he or she has access to all the required data and tools. The tools are implemented as web services, and they are run by some service provider somewhere on the net.
For basic tasks such as text processing, for standardized workflows such as web link management or financial services, and especially for social networks, cloud computing has become very popular and successful. For language research, especially text corpus work, virtual workplaces have started to appear [12]. For speech processing, work on web services and the infrastructure necessary to establish virtual workspaces has begun [13, 7].

5. Users' view on interoperability

Interoperability has another facet, not covered by the definitions given above: the users' view. A user works with a given software tool for many reasons (suitability, availability, joy of use, familiarity, etc.), and by using a tool he or she wants to get work done. In an ideal world, the tool should reflect the users' knowledge, skills and best practices in the application domain, i.e., support the user by using the data, algorithms, workflow and terminology of the field. In reality, things are often the other way around: tools impose a specific world view upon the user, so that switching from one tool to the other means moving from one mini-environment to the next. On an abstract level, speech processing can be seen as searching for data according to some conditions and then manipulating this data. Query languages range from very general to highly application- or tool-specific, and queries can be written as text, constructed using a wizard, or formulated by interactive exploration of the data itself. In fact, currently every tool implements its own query language, and often this query language directly reflects the data structures of the tool. An alternative is to separate the surface form of the query from the underlying query evaluation mechanism. This underlying mechanism reflects the global domain data model and the formal properties of this model, e.g., relational tables or graphs, whereas the surface form is determined by the user interface, which is tool- or task-specific.
This approach encourages tool developers to provide more than one user interface for queries, or to develop application-independent query interfaces that can access different query evaluation mechanisms. Furthermore, this approach allows the user to select the query input method that best suits his or her needs, and even to switch between input modes.

6. Conclusion

The field of speech and language processing is alive and productive. There is a general consensus about the application domain; formal frameworks have been proposed which underpin and guide the development of tools and systems. Excellent tools have been developed and are widely available; in fact, the large number of tools mirrors the richness of the field. Interoperability is a key issue. This may be a bold statement, but semantic interoperability is not the problem; it is interoperability at the data format level, in combination with disagreement on the extent of a global data model. Thus, a common minimal core data model encompassing the concept of a corpus is needed, and this data model must be supported by all tools; [6, 8] are first examples of how this ambitious goal can be reached. In the future we need to go even further.

7. References
Network Applications: High-Performance Server Design (Proactive Async Servers; Operational Analysis; Multi-Servers) Y. Richard Yang http://zoo.cs.yale.edu/classes/cs433/ 10/11/2018

Outline - Admin and recap - High performance server - Thread design - Asynchronous design - Operational analysis - Multiple servers

Assignment Three (HTTP server) Part 1 checkpoint; Assignment Part 2 posted (there is one to-do place to be fixed today)

Recap: Multiplexed, Reactive I/O - A different approach for avoiding blocking: *peek* system state, issue function calls only for those that are *ready* **Basic abstractions** - Channel (source) - Selector - PCB

Designing a good FSM is key for a good non-blocking select design. There can be multiple types of FSMs: - **Staged**: first read request and then write response - **Mixed**: read and write mixed Choice depends on protocol and tolerance of complexity, e.g., - HTTP/1.0 channel may use staged - HTTP/1.1/2/Chat channel may use mixed

Outline - Admin and recap - High performance servers - Thread design - Per-request thread - Thread pool - Busy wait - Wait/notify - Asynchronous design - Overview - Nonblocking, selected servers--reactive programming - Proactive programming

Basic Idea: Asynchronous Initiation and Callback - Issue with only peeking: - Cannot handle initiation calls (e.g., read file, initiate a connection by a network client) - Idea: asynchronous initiation (e.g., aio_read) and program-specified completion handler (callback) - Also referred to as proactive (Proactor) nonblocking

Asynchronous Channel using Future/Completion Handler Java 7 introduces AsynchronousServerSocketChannel and AsynchronousSocketChannel beyond ServerSocketChannel and SocketChannel - accept, connect, read, write return Futures or have a callback.
https://docs.oracle.com/javase/7/docs/api/java/nio/channels/AsynchronousServerSocketChannel.html https://docs.oracle.com/javase/7/docs/api/java/nio/channels/AsynchronousSocketChannel.html ## Asynchronous I/O <table> <thead> <tr> <th>Asynchronous I/O</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>AsynchronousFileChannel</td> <td>An asynchronous channel for reading, writing, and manipulating a file</td> </tr> <tr> <td>AsynchronousSocketChannel</td> <td>An asynchronous channel to a stream-oriented connecting socket</td> </tr> <tr> <td>AsynchronousServerSocketChannel</td> <td>An asynchronous channel to a stream-oriented listening socket</td> </tr> <tr> <td>CompletionHandler</td> <td>A handler for consuming the result of an asynchronous operation</td> </tr> <tr> <td>AsynchronousChannelGroup</td> <td>A grouping of asynchronous channels for the purpose of resource sharing</td> </tr> </tbody> </table> [Link to Oracle documentation](https://docs.oracle.com/javase/8/docs/api/java/nio/channels/package-summary.html) ## Example Async Calls <table> <thead> <tr> <th>Method</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>abstract Future&lt;AsynchronousSocketChannel&gt;</code></td> <td><code>accept()</code>: Accepts a connection.</td> </tr> <tr> <td><code>abstract &lt;A&gt; void</code></td> <td><code>accept(A attachment, CompletionHandler&lt;AsynchronousSocketChannel,? super A&gt; handler)</code>: Accepts a connection.</td> </tr> <tr> <td><code>abstract Future&lt;Integer&gt;</code></td> <td><code>read(ByteBuffer dst)</code>: Reads a sequence of bytes from this channel into the given buffer.</td> </tr> <tr> <td><code>abstract &lt;A&gt; void</code></td> <td><code>read(ByteBuffer [] dsts, int offset, int length, long timeout, TimeUnit unit, A attachment, CompletionHandler&lt;Long,? 
super A&gt; handler)</code>: Reads a sequence of bytes from this channel into a subsequence of the given buffers.</td> </tr> </tbody> </table> [https://docs.oracle.com/javase/8/docs/api/java/nio/channels/AsynchronousServerSocketChannel.html](https://docs.oracle.com/javase/8/docs/api/java/nio/channels/AsynchronousServerSocketChannel.html) Using Future ```java SocketAddress address = new InetSocketAddress(args[0], port); AsynchronousSocketChannel client = AsynchronousSocketChannel.open(); Future<Void> connected = client.connect(address); ByteBuffer buffer = ByteBuffer.allocate(100); // wait for the connection to finish connected.get(); // read from the connection Future<Integer> future = client.read(buffer); // do other things... // wait for the read to finish... future.get(); // flip and drain the buffer buffer.flip(); WritableByteChannel out = Channels.newChannel(System.out); out.write(buffer); ``` Using CompletionHandler ```java class LineHandler implements CompletionHandler<Integer, ByteBuffer> { @Override public void completed(Integer result, ByteBuffer buffer) { buffer.flip(); WritableByteChannel out = Channels.newChannel(System.out); try { out.write(buffer); } catch (IOException ex) { System.err.println(ex); } } @Override public void failed(Throwable ex, ByteBuffer attachment) { System.err.println(ex.getMessage()); } } ByteBuffer buffer = ByteBuffer.allocate(100); CompletionHandler<Integer, ByteBuffer> handler = new LineHandler(); channel.read(buffer, buffer, handler); ``` Asynchronous Channel Implementation - Asynchronous is typically based on Thread pool. If you are curious on its implementation, please read https://docs.oracle.com/javase/8/docs/api/java/nio/channels/AsynchronousChannelGroup.html Summary: Event-Driven (Asynchronous) Programming - Advantages - Single address space for ease of sharing - No synchronization/thread overhead - Many examples: Google Chrome (libevent), Dropbox (libevent), nginx, click router, NOX controller, ... 
Problems of Event-Driven Server - Obscure control flow for programmers and tools - Difficult to engineer, modularize, and tune - Difficult for performance/failure isolation between FSMs Another view - Events obscure control flow - For programmers and tools ### Threads ```c
thread_main(int sock) {
    struct session s;
    accept_conn(sock, &s);
    read_request(&s);
    pin_cache(&s);
    write_response(&s);
    unpin(&s);
}

pin_cache(struct session *s) {
    pin(s);
    if( !in_cache(s) )
        read_file(s);
}
``` ### Events ```c
AcceptHandler(event e) {
    struct session *s = new_session(e);
    RequestHandler.enqueue(s);
}

RequestHandler(struct session *s) {
    ...;
    CacheHandler.enqueue(s);
}

CacheHandler(struct session *s) {
    pin(s);
    if( !in_cache(s) )
        ReadFileHandler.enqueue(s);
    else
        ResponseHandler.enqueue(s);
}
``` ... ```c
ExitHandler(struct session *s) {
    ...;
    unpin(s);
    free_session(s);
}
``` Summary: The High-Performance Network Servers Journey - Avoid blocking (so that we can reach bottleneck throughput) - introduce threads, async select, async callback - Limit unlimited thread overhead - Thread pool (share welcome, share Q) - Coordinating data access - synchronization (lock, condition, synchronized) - Coordinating behavior: avoid busy-wait - wait/notify; select FSM, Future/Listener - Extensibility of SW/robustness - language support/design using interfaces Beyond Java: Design Patterns - We have seen Java as an example - C++ and C# can be quite similar.
For C++ and general design patterns: HTTP Servers <table> <thead> <tr> <th>Ranking</th> <th>Technology</th> <th>Domains</th> <th>Market Share</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Apache HTTP Server</td> <td>13,593,009</td> <td>52.41%</td> </tr> <tr> <td>2</td> <td>nginx</td> <td>8,244,455</td> <td>31.79%</td> </tr> <tr> <td>3</td> <td>Microsoft IIS</td> <td>2,443,642</td> <td>9.42%</td> </tr> </tbody> </table> Summary: Server Software Architecture - Architectures - Multi threads - Asynchronous - Hybrid - Assigned reading: SEDA - Netty design Figure 5: Staged event-driven (SEDA) HTTP server: This is a structural representation of the SEDA-based Web server, described in detail in Section 5.1. The application is composed as a set of stages separated by queues. Edges represent the flow of events between stages. Each stage can be independently managed, and stages can be run in sequence or in parallel, or a combination of the two. The use of event queues allows each stage to be individually load-conditioned, for example, by thresholding its event queue. For simplicity, some event paths and stages have been elided from this figure. Recap: Best Server Design Limited Only by Resource Bottleneck Some Questions - When is CPU the bottleneck for scalability? - So that we need to add helper threads - How do we know that we are reaching the limit of scalability of a single machine? - These questions drive network server architecture design - Some basic performance analysis techniques are good to have Outline - Admin and recap - High performance server - Thread design - Asynchronous design - Operational analysis - Multiple servers Operational Analysis - Relationships that do not require any assumptions about the distribution of service times or inter-arrival times - Hence focus on measurements - Identified originally by Buzen (1976) and later extended by Denning and Buzen (1978). 
- We touch only some techniques/results - In particular, bottleneck analysis - More details: see linked reading

Under the Hood (An Example FSM) [Figure: a request arrives (arrival rate $\lambda$), visits service centers - Network, File I/O, Memory cache, CPU - and exits (throughput $\lambda$, until some center saturates).]

## Operational Analysis: Resource Demand of a Request <table> <thead> <tr> <th>Resource</th> <th>Visits</th> <th>Units of Resource Time Per Visit</th> </tr> </thead> <tbody> <tr> <td>CPU</td> <td>$V_{CPU}$</td> <td>$S_{CPU}$</td> </tr> <tr> <td>Network</td> <td>$V_{Net}$</td> <td>$S_{Net}$</td> </tr> <tr> <td>Disk</td> <td>$V_{Disk}$</td> <td>$S_{Disk}$</td> </tr> <tr> <td>Memory</td> <td>$V_{Mem}$</td> <td>$S_{Mem}$</td> </tr> </tbody> </table>

Operational Quantities - **T**: observation interval - **$A_i$**: # arrivals to device $i$ - **$B_i$**: busy time of device $i$ - **$C_i$**: # completions at device $i$ - **$i = 0$** denotes the system

**Arrival rate** \( \lambda_i = \frac{A_i}{T} \) **Throughput** \( X_i = \frac{C_i}{T} \) **Utilization** \( U_i = \frac{B_i}{T} \) **Mean service time** \( S_i = \frac{B_i}{C_i} \)

Utilization Law \[ U_i = \frac{B_i}{T} = \frac{C_i}{T} \frac{B_i}{C_i} = X_i S_i \] - The law is independent of any assumption on the arrival/service process. - Example: Suppose a NIC processes 125 pkts/sec, and each pkt takes 2 ms. What is the utilization of the network NIC?
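The utilization-law question above, and the bottleneck demands of Example 1 that follows, can be checked with a short script. The numbers are the ones used in the slides; treating 1 Mbyte as 10^6 bytes and 8 Kbytes as 8x10^3 bytes is an assumption that matches the slides' arithmetic.

```python
# Utilization law: U = X * S (no assumptions about arrival/service processes).
# Slide example: a NIC completes X = 125 pkts/sec, each taking S = 2 ms.
X = 125.0   # throughput (completions per second)
S = 0.002   # mean service time (seconds per completion)
U = X * S
print(f"NIC utilization: {U:.2f}")  # 0.25, i.e., the NIC is 25% busy

# Bottleneck analysis for Example 1: per-request demand D_i = V_i * S_i.
# CPU: 10 ms; network: 1 MB over 100 Mbps; disk: 50% miss * (1 MB / 8 KB reads) * 1 ms each.
D = {
    "CPU": 0.010,
    "Net": (1e6 * 8) / 100e6,           # = 0.08 s
    "Disk": 0.5 * (1e6 / 8e3) * 0.001,  # = 0.0625 s
}
bottleneck = max(D, key=D.get)
print(bottleneck, 1 / D[bottleneck])  # the network caps throughput at 12.5 requests/s
```

The device with the largest demand bounds system throughput at 1/D_max, exactly as the "Bottleneck vs System Throughput" slide states.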
Deriving Relationship Between R, U, and S for one Device - Assume flow balanced (arrival = throughput), Little's Law: \[ Q = \lambda R = XR \] - Assume PASTA (Poisson Arrivals See Time Averages, i.e., memory-less arrivals): a new request sees Q requests ahead of it, and FIFO \[ R = S + QS = S + XRS \] - According to the utilization law, \( U = XS \) \[ R = S + UR \quad \Rightarrow \quad R = \frac{S}{1-U} \]

Forced Flow Law Assume each request visits device $i$ $V_i$ times \[ \text{Throughput } X_i = \frac{C_i}{T} = \frac{C_i}{C_0} \frac{C_0}{T} = V_i X \] Define $D_i = V_i S_i$ as the total demand of a request on device $i$. The device with the highest $D_i$ has the highest utilization, and thus is called the **bottleneck**. Utilization $U_i = X_i S_i = V_i X S_i = X V_i S_i$

Bottleneck vs System Throughput Utilization $U_i = X V_i S_i \leq 1$ $\rightarrow X \leq \frac{1}{D_{\text{max}}}$

Example 1 - A request may need - 10 ms CPU execution time - 1 Mbytes network bw - 1 Mbytes file access, where - 50% hit in memory cache - Suppose network bw is 100 Mbps, disk I/O rate is 1 ms per 8 Kbytes (assuming the program reads 8 KB each time) - Where is the bottleneck?

Example 1 (cont.) - **CPU:** $D_{CPU} = 10 \text{ ms}$ (i.e., 100 requests/s) - **Network:** $D_{Net} = \frac{1 \text{ Mbytes}}{100 \text{ Mbps}} = 80 \text{ ms}$ (i.e., 12.5 requests/s) - **Disk I/O:** $D_{Disk} = 0.5 \times \frac{1 \text{ Mbytes}}{8 \text{ Kbytes}} \times 1 \text{ ms} = 62.5 \text{ ms}$ (i.e., 16 requests/s)

Example 2 - A request may need - 150 ms CPU execution time (e.g., dynamic content) - 1 Mbytes network bw - 1 Mbytes file access, where - 50% hit in memory cache - Suppose network bw is 100 Mbps, disk I/O rate is 1 ms per 8 Kbytes (assuming the program reads 8 KB each time) - Bottleneck: CPU -> use multiple threads to use more CPUs, if available, to avoid CPU as bottleneck

Interactive Response Time Law - **System setup** - Closed system with $N$ users (e.g., remote desktops) - Each user sends in a request; after the response and a think time, the user sends the next request - **Notation** - $Z = $ user think time, $R = $ response time - The total cycle time of a user request is $R+Z$. In duration $T$, each user generates $\frac{T}{R+Z}$ requests

Interactive Response Time Law - If $N$ users and flow balanced: System throughput $X = \text{total # req.}/T$ \[ X = \frac{N \frac{T}{R+Z}}{T} = \frac{N}{R+Z} \quad \Rightarrow \quad R = \frac{N}{X} - Z \]

Bottleneck Analysis \[ X(N) \leq \min \left\{ \frac{1}{D_{\max}}, \frac{N}{D+Z} \right\} \quad R(N) \geq \max \left\{ D, ND_{\max} - Z \right\} \] - Here $D$ is the sum of the $D_i$

Proof We know \[ X \leq \frac{1}{D_{\text{max}}} \quad \text{and} \quad R(N) \geq D \] Using the interactive response time law: \[ R = \frac{N}{X} - Z \quad \Rightarrow \quad R \geq ND_{\text{max}} - Z \] \[ X = \frac{N}{R+Z} \quad \Rightarrow \quad X \leq \frac{N}{D+Z} \]

Summary: Operational Laws - Utilization law: \( U = XS \) - Forced flow law: \( X_i = V_i X \) - Bottleneck device: largest \( D_i = V_i S_i \) - Little's Law: \( Q_i = X_i R_i \) - Bottleneck bound of interactive response (for the given closed model): \[ X(N) \leq \min\left\{ \frac{1}{D_{\text{max}}} , \frac{N}{D+Z} \right\} \quad R(N) \geq \max \{ D, ND_{\text{max}} - Z \} \]

In Practice: Common Bottlenecks - No more file descriptors - Sockets stuck in TIME_WAIT - High memory use (swapping) - CPU
overload - Interrupt (IRQ) overload

```c
while (true) {
    identify_and_fix_bottlenecks();
    drink();
    sleep();
    notice_new_bottleneck();
}
```

Summary: High-Perf. Network Server - Avoid blocking (so that we can reach bottleneck throughput) - Introduce threads, async io - Limit unlimited thread overhead - Thread pool - Shared variables - Synchronization (lock, synchronized, condition) - Avoid busy-wait - Wait/notify; FSM; asynchronous channel/Future/Handler - Extensibility/robustness - Language support/Design for interfaces - System modeling and measurements - Queueing analysis, operational analysis

Outline - Admin and recap - High performance server - Thread design - Asynchronous design - Operational analysis - Multiple servers

Why Multiple Servers? - Scale a single server that encounters a bottleneck - Scale a single server that has too large latency - Add fault tolerance to a single server - Match settings where resources may be naturally distributed at different machines (e.g., run a single copy of a database server due to a single license; access to resources from a third party) - Achieve modular software architecture (e.g., front end, business logic, and database)

FB Data Centers *POP = points of presence.

Discussion: Requirements in Designing Load-Balancing Multiple Servers - Provide naming abstraction - Optimize resource utilization/performance goal - Achieve fault tolerance - ...

Components of a Load-Balancing Multiple Servers System - Service/resource discovery (static, zookeeper, etcd, consul) - Health/state monitoring of servers/connecting networks - Load balancing mechanisms/algorithms - Also called a request routing system

Request Routing: Basic Architecture [Figure: a server selection algorithm takes requests, server state (load/capacity/content), and net state/capacity (path properties between servers/clients) as inputs, and notifies the client about the selection (direction mechanism).]

Request Routing Mechanisms - DNS based request routing - L4/network request routing - L7/application request routing

Example: FB Architecture Example: Wikipedia Architecture Example: Hybrid Request Routing [Figure: clusters in US East, US West, and Europe, each behind a load balancer, fronted by proxies and load-balancing servers; DNS name1 maps to IP1, DNS name2 to IP2, ..., IPn.]

Outline - Recap - Single network server - Multiple network servers - Why multiple servers - Overview - DNS based request routing

Basic DNS Indirection and Rotation [Figure: %dig +short cnn.com returns 157.166.226.25 and 157.166.226.26; the DNS server for cnn.com (157.166.255.18) rotates the returned IP addresses of cnn.com.]

CDN Using DNS (Akamai Architecture as an Example) - Content publisher (e.g., cnn) - provides base HTML documents - runs origin server(s); but delegates heavy-weight content (e.g., images) to CDN - Akamai runs - (~240,000) edge servers in 130 countries within 1700 networks - Claims 85% of Internet users are within a single "network hop" of an Akamai CDN server - customized DNS redirection servers to select edge servers based on - closeness to client browser, server load

Linking to Akamai - Originally, URL Akamaization of embedded content: e.g., `<IMG SRC= http://www.provider.com/image.gif >` changed to `<IMG SRC = http://a661.g.akamai.net/hash/image.gif>` Note that this DNS redirection unit is per customer, not per individual file. - URL Akamaization is becoming obsolete and supported mostly for legacy reasons

Exercise - Check any web page of cnn and find a page with an image - Find the URL - Use %dig +trace to see DNS load direction

Akamai Load-Balancing DNS Name - Akamai - e2466.dscg.akamaiedge.net (why two levels in the name?)
Two-Level Direction
- High-level DNS determines proximity and directs to a low-level DNS
  - With query dscg.akamaiedge.net and the client IP, directs to a region (low-level)
- Low-level DNS: each manages a close-by cluster of servers
  - With query e2466.dscg.akamaiedge.net and the client IP, directs to a specific server
- If the directed edge server does not have the requested content, it goes back to the origin server (source)

Local DNS Alg: Considerations
- Load on each edge server does not exceed its server capacity
- Maximize the caching state of each server
- Minimize the number of busy servers

Example Local DNS Alg:
- Details of Akamai algorithms are proprietary
- A bin-packing algorithm (column 12 of the Akamai patent) every T seconds:
  - Compute the load to each publisher k (called a serial number)
  - (Estimate the number of needed servers)
  - Sort the publishers by increasing load
  - For each publisher, compute a sequence of random numbers using a hash function
  - Assign the publisher to the first server that does not overload

Backup Slides

Experimental Study of Akamai Load Balancing
- Methodology
  - 2-month-long measurement
  - 140 PlanetLab nodes (clients)
    - 50 US and Canada, 35 Europe, 18 Asia, 8 South America, the rest randomly scattered
  - Every 20 sec, each client queries an appropriate CNAME for Yahoo, CNN, Fox News, NY Times, etc.
- See http://www.aqualab.cs.northwestern.edu/publications/Ajsu06DBA.pdf

Server Pool: to Yahoo
[Figure: web replica IDs vs. timestamp for target a943.x.a.yimg.com (Yahoo), as seen by Client 1 (Berkeley) and Client 2 (Purdue), day vs. night]

Server Diversity for Yahoo
- The majority of PlanetLab nodes see between 10 and 50 Akamai edge servers
- Nodes far away from Akamai hot-spots

Server Pool: Multiple Akamai-Hosted Sites
- Yahoo!
- Amazon
- AOL
- NYTimes
- Apple
- Monster
- FOX News
- MSN
- PCWorld
- FEMA
[Graph: number of Akamai web replicas seen by various clients, including cs.purdue.edu, comet.columbia.edu, cs.uoregon.edu, cs.umass.edu, and lbnl.nodes.planet-lab.org]

Load Balancing Dynamics
[Graph: CDF of inter-redirection time for Berkeley, Brazil, and Korea]

Redirection Effectiveness: Measurement Methodology
[Figure: a PlanetLab node compares the Akamai low-level DNS server's selection against the 9 best Akamai replica servers]

Do redirections reveal network conditions?
- Rank = r1 + r2 - 1
- 16 means perfect correlation
- 0 means poor correlation
- MIT (csail.mit.edu) and Amsterdam (cs.vu.nl) are excellent; Brazil (pop-ce.rnp.br) is poor
[Graph: percentage of time Akamai's selection is better than or equal to rank]

A system named Cartographer (written in Python) processes measurement data and configures the DNS maps of individual DNS servers (open-source tinydns)
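The bin-packing assignment from the "Example Local DNS Alg" slide can be sketched as below. This is a rough illustration only: the real Akamai algorithm is proprietary, and the hash-based per-publisher server ordering here stands in for the patent's "sequence of random numbers using a hash function".

```python
import hashlib

def assign_publishers(publisher_load: dict, num_servers: int, capacity: float) -> dict:
    """Assign each publisher's load to the first server (in a per-publisher
    pseudo-random order) whose remaining capacity can absorb it.
    Publishers that fit nowhere are simply skipped in this sketch."""
    remaining = [capacity] * num_servers
    assignment = {}
    # Sort publishers by increasing load, as in the slide.
    for pub, load in sorted(publisher_load.items(), key=lambda kv: kv[1]):
        # Deterministic pseudo-random server order derived from a hash,
        # so the same publisher maps to the same sequence every round.
        order = sorted(range(num_servers),
                       key=lambda s: hashlib.md5(f"{pub}:{s}".encode()).hexdigest())
        for s in order:
            if remaining[s] >= load:
                remaining[s] -= load
                assignment[pub] = s
                break
    return assignment

# Example: three publishers packed onto two servers of capacity 10.
print(assign_publishers({"cnn": 6.0, "yahoo": 3.0, "fox": 4.0}, 2, 10.0))
```

Running this every T seconds with fresh load estimates reproduces the periodic reassignment described in the slide; keeping the per-publisher order stable across rounds helps maximize each server's cache state.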
Source: http://zoo.cs.yale.edu/classes/cs433/cs433-2018-fall/lectures/application-server-design-async-op-analysis-mservers.pdf
How can I make my project more environmentally friendly? Philip Withnall

>> Hello, everyone. I'm excited to introduce our next presenter, Philip Withnall. He will be talking about how we can make our project more environmentally friendly. Good luck, Philip.

Philip Withnall: Thanks. So, yeah. Welcome. I'm going to talk to you about how you can make your project more environmentally friendly. The aims of this talk are: to help you to help the environment; to share knowledge about where environmental costs lie and how to avoid them; to establish an endpoint to this process, so, how environmentally friendly do you have to make your project, and at what point does it stop mattering; and to lay out questions which still need answers. Some of you may have been at Aditya's talk on the first day of the conference; I recommend looking at the recording. It was about helping users look at application power usage on their computer. My talk is about helping developers to reduce the environmental impact of a specific application that they work on. This is the same slide as in my talk on the environment last year, and it's still as relevant, I think; it provides a bit of context and motivation for the entire talk. Global greenhouse gas emissions need to decline by 45% by 2030, nine years' time, and then to net zero by 2050. At net zero, all greenhouse gas emissions are balanced by carbon removal from the environment, by growing trees or by using as-yet nonexistent technology to capture carbon from the air. I'm using carbon dioxide as a proxy for all greenhouse gases, and I use energy and carbon interchangeably, because for the most part they are: all energy production has a carbon cost of generation, and I will cover that shortly. The longer it takes for emissions to reach net zero, the higher the global average temperature will rise, kind of like coasting on a road: early reductions are better.
And typically, warming is expected to be greater than 5 degrees C on land and less than that on the oceans. So, we want to make and run software with less carbon, and to be able to reduce environmental impact, it needs to be measured. One approach to that comes from the product manufacturing industry: life cycle analysis, which accounts for the carbon emitted during manufacture (said to be embodied in the product), during use, and during disposal. It is done per functional unit, a well-defined term: the quantity of the product being analyzed, chosen to make the analysis well-defined, scalable, and comparable between different products. So, for example, a functional unit in one analysis might be a single car, produced, driven at 6 litres per 100 km for 300,000 kilometres, and then scrapped. A system boundary, another well-defined term, includes all the raw material inputs and all the processes which are involved, directly or indirectly, in manufacturing, transporting, using, and disposing of the functional unit. In the case of a car, this would be the raw material processing for the metals, plastics, rubber, and so on in the car; the manufacturing of the car itself; delivery to its owner; the oil extraction and processing for its fuel, and then the emissions from driving it; and then the emissions for energy recovery and material recovery from scrapping and recycling whatever you can out of the car at the end of its life. This kind of analysis is well understood, if not universally adopted, in the manufacturing industry; it's standardized as the ISO 14000 series. And just as a car's costs come from raw materials or from burning petrol, software's costs come from building and powering servers (so there is some raw material extraction going on), from powering networks for data transfer and providing the equipment for doing that, and from the marginal power usage of the software on an end user's computer, which is the additional power usage compared to if the user wasn't using that software.
One part of the life cycle of software which doesn't mimic a car's is disposal: disposing of software emits essentially nothing, though disposal of servers probably does. So, across that whole life cycle, most carbon emissions come from generating the energy needed to power hardware, and they depend on the carbon intensity of that power generation. So, for example, coal, everyone's favorite, is very carbon intensive: you burn coal, you release carbon dioxide. Solar or hydro are much less carbon intensive, but they still have emissions, because you need to build with concrete, or manufacture panels and build the factories. These carbon intensities factor in the entire supply chain of producing that energy. It's quite hard to get figures for the intensity of the electricity you're using, so I've used an average of 300 grams of CO2 equivalent per kilowatt-hour for most of this presentation. Other emissions come from embodied costs for the hardware itself: there's roughly 1.2 tonnes of CO2 to manufacture a 2019 Dell server, not including the cost of running it or disposing of it, and the embodied cost of a laptop is about a quarter of that, so about 300 kilos of CO2 just to build that laptop. I've recently bought insulation for my house, and when buying it, you can look at the data sheet beforehand: you can see what the lifetime carbon cost of making the insulation is, and then calculate the carbon savings from not leaking heat from your house into the outside environment. So I decided to compare insulation from the different manufacturers on the basis of carbon cost, and they do vary radically, by up to 2 orders of magnitude, I think? I would be happy to be corrected. But I know of no equivalent figures for the embodied carbon cost of software: hardware, but not software. So, this is somewhere, I think, we could improve the state of our field. People are going to eventually want to know the carbon cost of the software they procure, especially if they're a business or the government and operating at that kind of scale.
And we could provide that information, just like manufacturers of insulation do, and just like we already provide information about software licensing; people already make decisions on that basis. In order to do a rigorous analysis, there are various complications, like direct and indirect rebound effects and transformational changes, which would need to be accounted for. I've ignored them for now for the sake of making some progress, but hopefully more rigor can be added over time. So, let's try and do a rough life cycle analysis for software in GNOME. Here's the functional unit that we're going to use: one distribution tarball of a software release. This is the thing that we want to measure the lifetime costs of. And here's the system boundary, with some colors. The system boundary contains the processes which we are directly or indirectly responsible for throughout the lifetime of the functional unit. For some parts of the system, it might be easier to measure costs in aggregate: for example, the power consumption of GNOME servers, or the carbon emissions of the Foundation or conferences. But for other parts of the system, it might be easier to measure costs per functional unit: for example, the cost of all the CI pipelines you run in order to make one release of your software, or the costs of hackfests during your software development to get a certain feature done. So, as long as we're careful not to double-count costs, splitting things up like that should work out okay. Why is it important to look at the whole system rather than just running profiling tools on your app? Because looking at the whole system counts the project-wide costs, and gives an incentive to reduce them which wouldn't exist otherwise. If you only did profiling on your app, you would only be looking at these costs here. So, that said, let's firstly look at the costs here: the marginal ones.
The marginal costs of your app on a user's system are all of the resources it uses while running: compute time, bandwidth, that kind of thing. While the improvements in power usage you can make to your app are likely quite small, multiplied by the number of users of the app they are likely quite large. Based on the analysis last year, I estimate the marginal costs like this; they are likely the largest overall environmental impact. If you want to see more about that, take a look back at my talk from last year. There are various conventional tools for measuring and improving these marginal costs, but probably the first tool you should use is just to think about your software. What use cases is your app for? How does it serve them? And does it serve them in an energy-efficient way? So, for example, if your use case is allowing the user to listen to music, is it more energy efficient to stream the music from a server on the Internet all the time, or to store it locally? That's an example of the kind of question that you should be thinking about. Obviously there are other considerations to bear in mind, but it's all about finding a nice balance between them. Another example: suppose your use case is to process the user's data in some way, and the software requires a certain format. If the user has to spend half an hour manually reformatting the data before running the software, then the software has cost 30 minutes of extra computer use, and frustration, in energy terms. So, sometimes adding features, or making sure that the functionality is appropriate, is going to improve the environmental impact of your software. Other tools are available to pinpoint more specific energy consumption issues: frequency of CPU wakeups, network usage, disk I/O. They're the biggest consumers of energy in an app. As a rough guideline, one hour of using the CPU will emit about 6 grams of CO2, based on a 20-watt power draw.
One gigabyte of network traffic will cost you about 17 grams of CO2. You can explore this by using sysprof-cli on the command line, or the graphical equivalent. Here is Sysprof being used on an app: it basically plots various different measurements on a timeline, with time increasing along one axis, and different measurements of the CPU, the network usage, and the bandwidth. And there's some recent work in GLib and libsoup, which is unreviewed and unmerged but hopefully will go somewhere, on reporting I/O in GLib and HTTP requests in libsoup. systemd is another tool you can use: it supports accounting of the various resources which processes use. This is related to Benjamin's talk earlier. If you enable the accounting functionality by using this command, it will turn on all the accounting for all the units on a system; and with the recent work to use systemd for user sessions, it will get enabled for user session units as well. It will produce results like this if you run systemctl status on a service: an overview of the total network bandwidth that's been used, the disk I/O bandwidth, the amount of CPU time, and an indication of the runtime as well. So, you can look at systemctl status for your app and find out what it's been doing, at a higher level and over a slightly longer time period, like one whole session, which will give you an idea of where the big costs are. So, for example, geoclue: I'm not quite sure why it's downloaded 8 megs just to find the location, so maybe there's a bug there. At an even higher level, powertop shows the power usage of the various bits of hardware on your machine, including consumption caused by wakeups from your process, which add up quite easily to cause a lot of CPU power usage. These occur whenever your process wakes up to handle input, timer events, callbacks, or disk or network traffic; to reduce the power consumption, you should group them all together.
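The 6-grams-per-CPU-hour guideline above follows directly from the talk's other two figures (a 20-watt power draw and ~300 g CO2e/kWh grid intensity), and can be reproduced with simple arithmetic:

```python
GRID_INTENSITY_G_PER_KWH = 300  # average grid intensity used in the talk

def co2_grams_for_cpu(hours: float, watts: float = 20.0) -> float:
    """Emissions from running a CPU at a given power draw:
    watts * hours gives Wh, /1000 gives kWh, * intensity gives grams."""
    return watts * hours * GRID_INTENSITY_G_PER_KWH / 1000.0

print(co2_grams_for_cpu(1.0))  # 20 W for 1 h -> 0.02 kWh -> 6.0 g CO2e
```

The 17 g/GB figure for network traffic is an empirical estimate rather than something derivable from these inputs, so it is not recomputed here.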
And as per Aditya's talk at the start of the conference, we may eventually get a breakdown like powertop provides within GNOME Usage. So, moving now to look at the embodied costs of a release of your software, rather than the marginal costs. CI, continuous integration, is an ongoing cost of development, and its main carbon cost is CPU time, but network bandwidth adds a bit, depending on how you have things set up. Especially if every CI pipeline downloads the dependencies for your project from scratch, that will start to add up to quite a lot if you run your pipelines often. The formula here estimates the cost of CI for a project over whatever unit of time you set. So, you could ask: what was the average run time for a pipeline in the last month, and how much data was downloaded by each pipeline last month? Then multiply by the number of pipelines that happened last month. Doing all that, you get numbers like 4 kilos of CO2 out, which is what I worked out for GLib per month, and which I think is not too high. It's not zero, but it's reasonable. For comparison, the target emissions for one person per year are 1.4 tonnes, of which that would be 1.2% for doing all of the CI for GLib. The other bits of the system boundary are a bit harder to measure, and the work is still ongoing on measuring them; there's not enough information to report in this talk yet. Sorry. GUADEC this year is being measured, to provide a comparison against the carbon emissions from an in-person conference. We've got some data from the last two years of GUADEC, and hopefully we can collect more next year if we do an in-person GUADEC; then we can see how the different options compare, and where there are possibilities for savings without affecting the experience of the conference.
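The CI cost estimate just described might be sketched like this. The two coefficients are the rough guideline figures from earlier in the talk (6 g/CPU-hour at a 20 W draw, 17 g/GB of traffic); the example pipeline counts are hypothetical, not the real GLib numbers.

```python
CPU_G_PER_HOUR = 6.0   # g CO2e per CPU-hour (20 W at ~300 g/kWh)
NET_G_PER_GB = 17.0    # g CO2e per GB transferred (rough guideline)

def monthly_ci_co2(pipelines: int, avg_cpu_hours: float, avg_download_gb: float) -> float:
    """Estimated monthly CI emissions in grams of CO2e:
    per-pipeline compute cost plus per-pipeline download cost, times pipelines."""
    per_pipeline = avg_cpu_hours * CPU_G_PER_HOUR + avg_download_gb * NET_G_PER_GB
    return pipelines * per_pipeline

# Hypothetical example: 200 pipelines/month, 1.5 CPU-hours and 1 GB each.
print(monthly_ci_co2(200, 1.5, 1.0))  # -> 5200.0 g, i.e. ~5.2 kg/month
```

Note how the download term dominates here (17 g vs. 9 g per pipeline), which is why the pre-built-image and shallow-clone optimizations discussed later pay off.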
Thanks for getting the measurements set up for GUADEC this year; I'll be looking at the results and hopefully publishing something about them after the conference is over. So, once the measurement is done, improvements can happen. And at what point do we stop making improvements once we start? Do we get to a stage where an application consumes zero energy for doing everything? Obviously that's not possible. My current thoughts on this are that it's a bit of a competition: if two pieces of software provide the same functionality, rationally users and distributions will choose the one with the lower embodied and marginal costs. I suspect in many cases it won't be that clear cut, but that's the principle. These costs should be budgeted for by the user running the software, just like the carbon emissions from buying and driving a car should be by the driver; and producers can look at making software with lower embodied carbon emissions, just as car makers can. The improvements are the standard ones: allow the app to do its work more efficiently, use less CPU to do it, use less network through caching and through batching network requests together to reduce round trips, and do the same for disk I/O. CI pipelines are quite easy to improve. Unless your CI pipeline is run very infrequently, like a few times a week, you should pre-build a Docker image containing all of your dependencies and run the CI using that image. That avoids the network activity of downloading dependencies on each CI job, which is quite carbon-intensive, can also fail (and that happens quite often in development), and means that you don't have to wait for the downloads to happen. I've now implemented this on various projects; so far, it's sped up the pipeline by about a factor of 4. Using shallow clones when cloning a Git repository onto a CI runner will also reduce the network activity; that's a simple option you can set in GitLab, which I blogged about recently.
So, as I said, we're still in the process of measuring the overheads, such as the server infrastructure and the cost of the Foundation as an organization and as an employer. Once the measurements are done, some recommendations can be made on the basis of the data, and we can see how those costs get split up between the different projects within GNOME, and how they compare to the costs of the projects themselves. So, pulling it all together: GNOME apps, I think, should be labeled with their embodied carbon cost, just like they're labeled with a license. That cost is their share of the cost of the GNOME project and the Foundation overheads, plus their costs for continuous integration and hackfests, apportioned to each functional unit, i.e. each release of a piece of software. We don't have all the data for that yet, particularly around project-wide overheads, but we should collect it, and we should refine our analysis as more data comes in. Those embodied costs should then be reduced based on the data we measure: by optimizing the CI, by making hackfests carbon-neutral, by switching the Foundation servers to a carbon-neutral power supply; there are various options, and it depends on what the data says. And we should reduce the marginal costs of apps by making them faster, in all the normal ways that you optimize software. The key thing, I think, is to not have the user use the computer for longer than is necessary, because a computer has a certain base power cost just from being turned on, and the marginal power increase of the laptop or computer when it's working hard versus idle is a small fraction of that. So, the best thing to do is to not have the computer on in the first place. There are still many open questions: where is the data inaccurate, and what are the error margins? Things we don't know: how many users do we have? We still don't know that one. Can we collect better statistics about users' systems? Which apps are used the most?
Which app uses the CPU the most? We have no framework for that; it would be nice to have. There are other life cycle impacts, various standard ones, most of which don't apply to GNOME; we haven't looked at those at all. There's lots to do. If anyone has questions about any of these, please get in touch. And that's it: please go out there and see what you can do to improve your applications and your CI pipelines. Mostly I think the improvements will be in CPU usage and network usage, which I think we're not great on at the moment. Sometimes it might be adding or rearranging features to make better use of the user's time. So, please get in touch if you have ideas or feedback, or want to discuss things further, about application profiling or about the climate crisis in general and how that applies to GNOME. If you want to check my analysis, please check out the Git repository here; there's a bibliography in there with a number of references, and if you want to read just one of them, make sure it's that one. It's a 15-minute read, it's really easy, but it covers most of the major topics about climate change. And that's it. I'll take a look at the questions now; if you have any, please put them in the Etherpad which is linked in the shared notes. First question: have you calculated the carbon cost of something you've made? No, still a work in progress. Various numbers in the presentation are parts of the solution to work out the carbon cost of GLib, but I'm not there yet. There's still thinking to be done about how to attribute the cost of dependencies, which I don't have a good answer for yet. It's a work in progress, but we're getting there. Second question: are there models from other industries that would show what carbon emission labeling or reporting could look like that we can learn from? Not that I know of. There are standards for carbon emission labeling and reporting, and there are a lot of different options.
But nobody's yet done it successfully in a consumer-facing way; a lot of what's out there at the moment is for business-to-business sales. If you've got any ideas of things that do exist and work in certain contexts, let me know. There's also a lot of literature out there, and I haven't read it all; I have read quite a lot, but not all of it. Third question: do you think there is space to create harder policy guidelines down the road, or even just best-practice guidelines? I think there is. From my experience after the equivalent talk I did at GUADEC last year, people aren't fans of harder policy guidelines without data to back them up, which slows down a lot of policy improvement, but also means you don't make the wrong improvements. So, I would hope there is space to create harder policy guidelines down the road, but I think we should probably start with softer ones, or things we can be fairly confident about. Next question: if we label apps with the carbon cost, will it be easy to include the costs of system services, as in my GeoClue example, or should we also label the core of GNOME separately? I don't know. The labeling I'm thinking of is labeling the embodied cost of a release of a piece of software, not labeling the marginal costs of running it. So, with that said, I think the question is working on a slightly different basis from my thoughts around labeling. But there's possibly scope for labeling, or maybe providing some insight into power usage from certain apps as you run your computer, which is probably more what people want; that's what Aditya was talking about in the first talk of the conference: answering the question of, like, why is my fan running so hard? I didn't think I was doing anything really intensive on my laptop; where is the power going? Next question: could someone theoretically --

>> Thank you for your talk. And we are a little bit out of time.
Philip Withnall: I'll catch people on chat afterwards. Thank you.

>> Thank you very much. Well, thanks, everyone. You can check out the office hours with GitLab during the break. Good luck. [Enjoy your break!]
Localization issues in the design of a humanoid goalkeeper for the RoboCup SPL using BICA

Víctor Rodríguez, Francisco J. Rodríguez, and Vicente Matellán
Grupo de Robótica, Depto. de Ingeniería Mecánica, Informática y Aeroespacial, E.I.I.I, Universidad de León, León, Spain
Email: vrodrm01@estudiantes.unileon.es, fjrodl@unileon.es, vicente.matellan@unileon.es

Abstract—This article describes the localization issues faced during the implementation of a humanoid goalkeeper taking part in the RoboCup SPL standardized soccer robot competition. For this task, we have used BICA, a state-driven, component-based architecture created by our counterparts at the URJC, which allows a much easier behavior design process. The use of BICA lets us choose among different self-localization methods at configuration time, with the possibility of using none at run time if some type of error appears.

Keywords—RoboCup, localization, behavior programming, goalkeeper, Nao

I. INTRODUCTION

The worldwide-known RoboCup [7] competition was established in 1997 to promote research in the robotics field. It comprises several events, of which RoboCup Soccer is by far the most popular. To be precise, RoboCup Soccer has sub-events using different types of robots, ours being the Standard Platform League (SPL [8]), in which all teams must use the same robot model, currently the Nao humanoid. The RoboCup has gained much recognition because it represents a good opportunity for either testing existing robotic solutions (both software and hardware), or developing new ones in a controlled, but not limited, common environment. In our case, we play in SPL soccer. Matches in this competition take place in a well-defined environment, but it is still a complex one because, while playing, a total of 8 robots (4 per team) will be competing for the ball, trying to score, etc. As we will see later, the use of a good localization system is mandatory to implement winning strategies in the SPL.
Static objects (goals, lines, etc.) on the field may be used as landmarks for the localization system, and almost every team does so. But there are many factors, like unexpected collisions or visual obstruction by other robots, that could make the self-localization system fail, so robots should be able to fulfill their roles without it. The BICA architecture [5] was designed to abstract away many problems in the development of software control systems for the Nao by offering the programmer a library of independent, multipurpose components already implemented, letting him or her focus on writing a good behavior instead of reimplementing existing algorithms. This article describes BICA's multiple localization components and their use in designing a goalkeeper for the RoboCup SPL. We will show how, using BICA's facilities, the goalkeeper can be designed to use the self-localization methods or not, and even to decide during the game whether to use the localization information. As we will see in the next sections, all the components are implemented a priori and are available at run time. At a given moment the system can choose between them, which allows our robot to deploy the best actions in each situation. The rest of the paper is organized as follows: the next section describes the basis of the RoboCup regulations, the robot used and the BICA architecture; Section III is devoted to the localization components included in BICA, Section IV to the goalkeeper design, and the final section to the discussion of the approach.

II. CONCEPTS

A. The environment

All field specifications and match rules can be found in the rulebook published every year by RoboCup [4]. The football field where the robots play has fixed dimensions and layout; figure 1 shows the dimensions of the field.

Figure 1. Diagram to scale of the football field used in SPL (dimensions in mm), as seen in the official rulebook.
As can be seen, the field has a number of lines at specific locations that are used by the localization components to calculate the position of the robot on the field. The small choice of colors also makes it easy to distinguish between the different elements, like the goals. The ball itself is a Mylec orange street hockey ball, 65 mm in diameter, according to the rulebook.

B. The robot

The robot used by all the teams in the SPL league is Nao (figure 2), a humanoid robot built by the French company Aldebaran Robotics. The full list of features is available in [6], so the advanced technical details will not be discussed here. The robot has two cameras mounted on its head, but only one can be used at a given moment, and they offer a limited resolution as a trade-off for speed. One is located higher and offers a view of the field, and the other points lower, which is better for controlling the ball. Nao is also equipped with ultrasonic, obstacle-detecting sensors; speakers and microphones for team coordination (via an integrated speech synthesizer); and wireless capabilities, which are the only feasible method of communication while playing.

Figure 2. Nao robot depicted with both team markers, as seen in the official rulebook.

Each team is made up of 4 robots: a goalkeeper and three field players. Colored waistbands must be worn by the robots to help visual identification. All robots must obey the wireless commands of a state-machine GameController running on a PC controlled by a human referee, used to signal penalizations to robots, goals, etc.

C. BICA

We did not design our player from scratch. We have used an SDK named BICA that works as a proxy between the framework provided by Aldebaran and the final application. This SDK, the Behavior-based Iterative Component Architecture (BICA, [5]), abstracts away many low level details so users can concentrate on writing and debugging medium/high level robot behaviors.
BICA has been developed by the Robotics Research group of the Rey Juan Carlos University of Madrid (URJC), our partners in SpiTeam. BICA is a component- and state-based software package. It is divided into Components, each one providing a different service or feature. For instance, there is a component that takes care of ball detection [2], another for goal detection, several more for global localization, ball kicking, etc. Developers can also create their own components. These components can be loaded or discarded in real time, and BICA can dynamically assign processor power to the active ones. Thanks to this, power and CPU consumption are kept at a minimum, and every component is called as frequently as Nao's hardware is able to manage. Every cycle, all activated components are stepped in a controlled manner. This is implemented by calling a function in an iterative way; the name of this function is the same in every component: step().

Figure 3. BICA design diagram for a simple goalkeeper behavior. Big yellow circles are states, dark blue arrowed spheres are transitions. Big, dashed, purplish blue spheres represent BICA component dependencies.

Components can be organized into hierarchies to implement more sophisticated behaviors. When the robot needs to run a task made up of several components organized in such a hierarchy, the step() function in the higher component triggers a tree of calls to each step() function in the components below. Every component can also request the activation of another, non-active one. This way, BICA takes care of stepping dependencies before the code that requires them is stepped on its own. A BICA component is made up of states and transitions that connect states; that is, a BICA component is a finite state automaton. Each state consists of a piece of code run only when the component has been "stepped" and the state is active. Only one state may be active at any time.
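A minimal sketch of such a step()-driven component model follows. The class and method names here are illustrative, not BICA's actual API: a component holds named states, boolean-guarded transitions, and a list of dependencies that are stepped before its own active state runs.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of a BICA-style component: a finite state machine whose
// active state's code runs once per step(), after the components it
// depends on have been stepped. Illustrative names, not BICA's API.
class Component {
public:
    using Action = std::function<void()>;
    using Guard  = std::function<bool()>;

    void add_state(const std::string& name, Action code) {
        states_[name] = std::move(code);
        if (active_.empty()) active_ = name;   // first registered state starts active
    }
    // Evaluated while `from` is active; switches to `to` when the guard is true.
    void add_transition(const std::string& from, const std::string& to, Guard g) {
        transitions_.push_back({from, to, std::move(g)});
    }
    void depend_on(Component* c) { deps_.push_back(c); }

    void step() {
        for (Component* d : deps_) d->step();     // dependencies are stepped first
        for (auto& t : transitions_)              // evaluate outgoing transitions
            if (t.from == active_ && t.guard()) { active_ = t.to; break; }
        states_.at(active_)();                    // run the active state's code
    }
    const std::string& active() const { return active_; }

private:
    struct Transition { std::string from, to; Guard guard; };
    std::map<std::string, Action> states_;
    std::vector<Transition> transitions_;
    std::vector<Component*> deps_;
    std::string active_;
};
```

A goalkeeper component, for instance, could expose "Position" and "ClearBall" states with a ball-proximity guard as the transition between them.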
The code of the active state is the one executed every time the component is stepped. Transitions from the currently active state are evaluated at every step to decide whether to switch to another state or remain in the current one. Transitions are basically boolean decisions, which can be complex functions or just a single boolean variable. Components can be designed using the Visual Component Design (VICODE) tool (figure 3). As we can see, the left side shows a fixed palette of elements that lets us draw and interconnect states, transitions and dependencies. The editor also provides a way to input code directly, and to automatically set timers for time-controlled transitions. When saving a Component, VICODE automatically generates the corresponding C++ code from the diagram, creating the skeleton of C++ classes that can be completed by the designer. Human-written code is kept the next time the Component is edited. When the code is fully implemented, it is compiled, optimized for the robot CPU, and loaded onto the robot. Another tool, written in Java and named JManager, allows us to easily interact with the components running in the robot from our PC.

Figure 4. Coordinates given by BICA are team-independent. Position (0,0) always marks the center of the field. Increasing X-axis values get nearer the opponent's goal. Increasing Y-axis values are nearer to the right touchline.

III. LOCALIZATION SYSTEMS

The basic behavior of a goalkeeper is to prevent the ball from ending up inside the goal. To do this, the robot should try to stay under its own goal: it could move its body between both posts and take position in front of the ball. This seems simple, but some problems arise:

- What would happen if our robot were to be carried away from the goal? (e.g., grabbed by the referee after a foul, or pushed by another robot)
- Should the robot move away from the goal, to try to clear the ball in case it came near, but not close enough for a direct kick?
The first one is the worst by far. If our robot is not able to navigate through the field, its performance will be very poor. For instance, at the beginning of the match, both teams have 45 seconds to take their positions in the allowed areas. If they fail to do so, the referee places them manually on preselected spots, worse than the ones allowed by the rules if the robots are able to position themselves. This problem is known as the kidnapped robot problem. Ideally, the localization system should be able to estimate where our robot is at a given moment, and recover quickly after a "kidnapping". The second problem is related to the strategy we want to use in the game. If we want to play an offensive match but our localization system is too weak, we cannot think of kicks far away from our goal position; whereas if our localization system is well implemented, we can attempt other movements far from our goal. BICA has different implementations of localization algorithms, and we use them to get distances and bearings to all interesting elements during the game. To prevent confusion and minimize the code, all methods follow the same conventions concerning the coordinate system. A right-hand system is employed so that, regardless of the robot's team, decisions and tactics need not be tailored to each, as can be seen in figure 4. All localization components follow this coordinate system except the basic system, which only obtains a simple localization from the opponent's and its own goal positions. We also have to choose how the localization systems are launched. If the choice is to be made at run time, all the potentially desired localization components have to be loaded and launched on the robot. We find two problems with this approach. If the inner estimation error of each method is to be used to decide which one is employed, all the components have to be active, and for this reason the whole tree of calls (step()) is made for each component.
This means that keeping all components alive consumes a lot of processing power. Another problem appears when we stop and activate localization methods alternately: they have to start the estimation from scratch, which means that a lot of time is spent initializing components. Regarding the localization methods included in BICA, there are three different components available:

A. FMK

FMK [14] stands for "Fuzzy MarKov" and was the localization method originally developed in TeamChaos, the team our group belonged to before making up SpiTeam. In FMK, the field is represented by a grid $G$ so that $G(x, y)$ represents the probability of finding the robot at a given position $(x, y)$. The cells contain information on the probability that the robot is in that cell, and information on the most probable orientation range, which means that it is actually a $2\frac{1}{2}$D grid because only one orientation is represented. This information is represented by a fuzzy trapezoid (Fig. 6), defined by the tuple $(\theta, \delta, \alpha, h, b)$. Intuitively, if $h$ is low, the probability of being in this cell is low; if $h$ is high, it is very probable that the robot is in this position. If the trapezoid is wide, great uncertainty exists about the orientation of the robot. If the trapezoid is narrow, or even has a triangular form (a practically null top), the orientation uncertainty is so low that we can affirm that the robot's orientation is $\theta$.

Figure 6. Fuzzy trapezoid

B. n-EKF

The Extended Kalman Filter (EKF) is one of the most popular tools for state estimation in robotics [9]. It is a local localization method whose strength relies on its simplicity and its low computational complexity. However, this method is not able to self-locate the robot quickly when starting from a situation of total uncertainty.
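The fuzzy trapezoid of FMK can be sketched as a membership function over the robot's orientation. The interpretation of the tuple used below is an assumption for illustration only: θ as central orientation, δ as half-width of the flat top, α as the extra half-width of the sloped base, h as height (maximum certainty), and b as the membership floor outside the trapezoid.

```cpp
#include <cmath>

// Sketch of FMK's fuzzy trapezoid (theta, delta, alpha, h, b) as a
// membership function over orientation. Parameter semantics are an
// assumption, not taken from the FMK paper.
double trapezoid_membership(double angle, double theta, double delta,
                            double alpha, double h, double b) {
    constexpr double kTwoPi = 6.283185307179586;
    double d = std::fabs(std::remainder(angle - theta, kTwoPi)); // wrap to [0, pi]
    if (d <= delta) return h;              // inside the flat top: full certainty
    if (d >= delta + alpha) return b;      // outside the base: floor value
    double t = (d - delta) / alpha;        // linear slope between top and base
    return h + t * (b - h);
}
```

A narrow δ with high h then expresses the "we can affirm that the orientation is θ" case described above, while a wide base expresses high angular uncertainty.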
Neither is it able to recover from situations of high error in the estimation, nor from a manual change of the robot's position during the game. This is due to its monomodality, that is, it only maintains a single estimate of the position of the robot. Using the n-EKF [12], our team has tried to maintain several hypotheses, each one represented by an independent EKF. The number of EKF filters is not kept constant; it can be dynamically modified up to a maximum limit. Initially, there is no active EKF. Every time the global localization system seems reliable but there is no filter close enough to the position it estimates, a new EKF filter is created. The new EKF filter is initialized to the center of the cell where the global system locates the robot, and its uncertainty is set to the one associated with that cell.

C. 3D reconstruction

The third method available in BICA is based on the 3D reconstruction of the goals [15] through a color-based geometrical segmentation method. It uses an HSV color filter, an edge filter, and the Hough transform to detect the post and crossbar lines. Afterwards, the position of the robot with respect to the goal is calculated by exploiting 3D geometric properties.
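The hypothesis bookkeeping of the n-EKF described in subsection B can be sketched as follows. Class and parameter names (and the scalar stand-in for the full covariance) are illustrative assumptions, not the actual implementation in [12].

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of n-EKF bookkeeping: several EKF hypotheses are kept, and a
// new one is spawned when the global (grid) localization looks reliable
// but no existing filter is close to its estimate. Illustrative names.
struct Hypothesis {
    double x, y;          // position estimate
    double uncertainty;   // scalar stand-in for the full covariance
};

class MultiEKF {
public:
    MultiEKF(std::size_t max_filters, double gate_dist)
        : max_filters_(max_filters), gate_dist_(gate_dist) {}

    // Called with the global grid estimate (cell center + cell uncertainty).
    void maybe_spawn(double gx, double gy, double g_unc, bool grid_reliable) {
        if (!grid_reliable || filters_.size() >= max_filters_) return;
        for (const auto& f : filters_)
            if (std::hypot(f.x - gx, f.y - gy) < gate_dist_) return; // a close filter exists
        filters_.push_back({gx, gy, g_unc});   // new EKF seeded from the grid cell
    }
    std::size_t count() const { return filters_.size(); }

private:
    std::size_t max_filters_;
    double gate_dist_;
    std::vector<Hypothesis> filters_;
};
```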
There are several evaluations of the different localization methods for the RoboCup in the literature. For instance, in Hessel's bachelor thesis [19] a quick review of all localization methods used in RoboCup 2011 can be found, among them SpiTeam's. In [12] we can find an in-depth comparison between the methods available in BICA in the Four-Legged League, the ancestor of the current SPL league. Finally, regarding Kalman-based methods, a good review can be found in [18]. Our choice of the localization algorithm is usually made at design time. If we look at the diagram in figure 5, we have all the components necessary to deploy the goalkeeper behavior. Two of the dependencies (dashed, purplish blue spheres) represent the n-EKF (complex) and the SearchNet (basic) localization systems. We load only these ones to save energy and to avoid uncovered states where the robot cannot find its localization (for instance, if the goalkeeper is surrounded by robots).

IV. ROLE OF LOCALIZATION IN A ROBOCUP GOALIE

Localization ability plays a big role during RoboCup matches. Localization is even more "necessary" for the goalkeeper than for the other players; otherwise some of its actions would be constrained. Other players do not need to know their exact positions on the field to look for the ball and kick it towards the opponent goal.
But the goalie has to remain inside a critical area where a misplacement could cost the victory. The simplest solution would be to add extra transitions from every state, checking whether the distance to the goal is greater than a fixed value. Then, the robot should block the ball whenever it comes too near (another parameter to be tuned). The algorithm consists of drawing a line between the ball and the center of its own goal and following that line towards the ball, without leaving the area. If the ball gets close enough, the goalie has to kick it out towards the opponent goal. A riskier situation appears if the ball is close to the goalie, but not close enough for kicking, and outside the save area where the goalkeeper may go (the parameter controlling the maximum distance). Ignoring the ball could let an opponent approach it and score a goal; but if the robot walks out to clear it, the goal will be exposed. Time constraints are also critical: for instance, if the robot is allowed to go to the ball in the previous situation but accidentally moves the ball further away (e.g., with its feet), it can keep walking towards it, going far away from the goal. Even if it increases the overall complexity, we must make sure that our robot does not "insist" on carrying out an action when it is clear that it cannot be achieved, or that it has no effect. Some inverse hysteresis has to be considered when defining the states of the component. Finally, we have not discussed the navigation system. For the goalie we use a simple velocity control. We try to always face the ball, first with the head and then by aligning the body, so the navigation algorithm is straightforward. Inside the area the goalkeeper cannot be touched by the opponents, so we do not need to implement an avoidance behavior. There is also no problem of bumping into a teammate, because teammates are not allowed to enter their own area (and the referee will remove them if they do).
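The line-following positioning rule described above can be sketched in the team-independent coordinates of figure 4 ((0,0) at the field center, our goal at negative X). The function name, goal position and radius values are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the goalie positioning rule: stand on the line between the
// ball and the center of our own goal, never farther from the goal
// than `save_radius` (and never past the ball). Illustrative names.
struct Point { double x, y; };

Point goalie_target(Point ball, Point own_goal, double save_radius) {
    double dx = ball.x - own_goal.x, dy = ball.y - own_goal.y;
    double dist = std::hypot(dx, dy);
    if (dist < 1e-9) return own_goal;          // ball is on the goal center
    double r = std::min(save_radius, dist);    // clamp to the save area
    return {own_goal.x + dx / dist * r, own_goal.y + dy / dist * r};
}
```

With an own goal at, say, (-3000, 0) mm and the ball at the center spot, the target is the point 600 mm (an assumed radius) out from the goal along the ball line; the second, bigger clearing area would simply use a larger radius.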
We only have to worry about collisions when going out, but the goalie will only go out if it is the robot closest to the ball, so it does not have to worry about penalties.

A. Without localization

If no localization system is implemented, or the readings are not precise, our goalie has to work without one. This is actually feasible, but the results are less than ideal. If no visual element is present in Nao's field of view (goals, ball, lines...) at a given moment, the robot is considered lost. This can happen when it is looking in the wrong direction or when another player blocks its view. The first version of our goalkeeper, in 2009, was developed without any localization system [17] and was tested at RoboCup 2009. The robot implemented a simple system to remain under its own goal bar, keeping track of how many steps it had taken left and right. It also had reactive actions for saving goals when the ball was near the goal. The other actions of the goalie worked in the same way, except for kicking the ball: the only way to control the movements of the robot was either to count the number of steps, or the time elapsed after leaving the goal, using a timer and a direction.

B. Using localization

When a localization system is used and the error in the calculated position is small, the design of the goalie behavior is much simpler and allows creative play. Our goalkeeper components receive constant data about the robot's current position on the field. BICA has functions that let us specify a precise location the robot will try to move to. After manually adjusting its bearing (which is also given by the localization system), Nao can quickly place itself under the goal bar. This lets it look at the center of the field, so finding and tracking the ball is easier. The approach used in 2011 is to draw two virtual areas. The first one marks the limits that our robot should not pass while defending the goal.
The second one, a bigger area, is used when the ball comes too near and the robot gets out of the goal to clear it. If the localization data is reliable, the goalkeeper gets back in position after crossing either area. Other methods have also been tested on our team's platform, for instance one based on genetic algorithms [16]; it has not been included in the standard distribution of BICA due to its high computing power requirements.

D. Experimental evaluation of localization algorithms

In this paper we have not tried to describe the insights of the different localization algorithms, but rather the use of BICA to integrate them, and its use to implement a goalkeeper behavior.

V. CONCLUSION AND FURTHER WORK

This paper has tried to show the basic problems related to the autonomous self-localization of humanoids in the RoboCup competition. We have shown in particular how the goalkeeper behavior can be implemented, with or without localization, within the BICA framework, and how the use of localization is a must if a competitive behavior is desired. We have also summarized the localization methods included in the standard distribution of our software. Regarding further work, an empirical evaluation of the different methods provided by BICA has not yet been done in real game conditions. This requires the design of a rigorous methodology for evaluating localization in different situations; in particular, it requires offline tools for recording a match and reproducing the same conditions using a different localization component. In any case, since the information provided by the localization is used to make decisions, it is not possible to reproduce exactly the same conditions. We would also like to add different navigation skills, for instance walking backwards while facing the ball or, more generally, positional navigation while tracking the ball.
ACKNOWLEDGMENT

We want to express our gratitude for the work developed by the SpiTeam SPL Team, and especially to Dr. Francisco Martín for his work on localization, Dr. Carlos Agüero for his work on BICA, and Dr. José María Cañas for the general design of the architecture, as well as to the rest of our teammates for their contributions. This work has been partially supported by Cátedra Telefónica - Universidad de León (grant CTULE11-2).

REFERENCES

[18] Michael J. Quinlan and Richard H. Middleton, "Multiple Model Kalman Filters: A Localization Technique for RoboCup Soccer", in RoboCup 2009: Robot Soccer World Cup XIII, pp. 276-287.
Abstract

Purpose/Thesis: The aim of this paper is to introduce the concept that Open Data and Open APIs provided by Communication Service Providers, integrated in one end-user-oriented application, may considerably improve the process of communication between people and institutions.

Approach/Methods: Open Data is one of the key elements of the broad Internet ecosystem; other elements, such as open interfaces, open source, APIs, etc., are the assets that make the Internet environment robust, scalable, and extendable. The paper, based on case study analysis, presents selected applications integrating communication enablers in the form of Open APIs and Open Data sources.

Results and conclusions: The combination of Open Data and functions provided by telecommunications operators in the form of Open APIs significantly improves and facilitates the processes of communication between people and institutions.

Originality/Value: The author proposes to integrate Open Data with real-time communication functions provided by Communication Service Providers in the form of Open APIs. Open Data and Open APIs are effective tools for creating user-made environments that are convergent and coherent from the application source code point of view.

Keywords: Open Data. Open API. Service interfaces. Telecommunications operators.

Received: 21.03.2014. Revised: 8.05.2014. Accepted: 16.05.2014.

1. Introduction

Open Data is one of the symbols of our times. Its nature and application are best defined by DBpedia (Auer et al., 2007, 722): "Open data is the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control." Another definition was created by the Open Knowledge Foundation in 2005 (Defining Open Data, 2013): "Open data is data that can be freely used, shared and built-on by anyone, anywhere, for any purpose."
It seems conclusive that Open Data should be easily accessible to all citizens without any legal or technical limitations. What purpose does Open Data serve? First and foremost, it facilitates the transparency of public life. When combined with information from web portals, social networking sites and the facilities provided by Open APIs, Open Data enables Internet users to create new, innovative portals, applications and systems. The Open Knowledge Foundation portal defines three key features of openness (Defining Open Data, 2013):
- Availability and access – the data must be available to everyone in a convenient and modifiable form.
- Reuse and redistribution – the terms and conditions must permit reuse and redistribution of the data in machine-readable form.
- Universal participation – everyone must be able to use the data sets.

Open Data significantly influences three areas: open governance, smart cities and innovation. In open governance, Open Data showing politicians' activity gives citizens the opportunity to evaluate their representatives in the process of governance. Information about the results of parliamentary votes, absences and activity in the mass media can be very useful for the assessment of politicians. Smart cities are another large area of potential Open Data application. The smart city concept is focused on providing models and specific solutions based on innovative new technologies, new organizational and management models, as well as new legal regulations for the development of urban infrastructure, efficient resource usage, support of social interactions and the construction of social capital (Muraszkiewicz, 2014). Open Data is extremely important here, as it may be used to collect and expose information about: the activity of citizens on city portals, housing prices, traffic, safety, the number of students in schools, energy and water consumption, etc.
In the case of innovation, Open Data refers to the concept of Open Innovation by Professor Henry Chesbrough (Chesbrough, 2006). Open Data exposed through an API allows users to create innovations (e.g. new innovative applications and services) outside the place (organization) where the data is stored. In this paper the author focuses on examples of applications based on the concept of Open Data and Open APIs exposed by telecommunications (telecom) service providers – in particular on the features provided by Orange. It should be noted that the legal aspects of the selected applications discussed here have been limited to the case of the telecommunications market in Poland.

2. How to make Open Data accessible?

According to the definition quoted in the previous section, Open Data is assumed to be readily available for reuse and reprocessing. In order to meet these conditions, data providers are required to expose data in such a way as to make it:
- easily accessible for potential users,
- available to a number of users simultaneously,
- regularly updated,
- secure in terms of access, both for the supplier and the receiver.

It seems absolutely necessary for the entity providing the data to build a platform that is able to expose data in a safe and productive way. The first issue to be taken into consideration is the architecture of the platform, since the system may be built with either a centralized or a distributed architecture. The centralized option ensures the security of the Open Data. In that case all hardware and software elements of the platform may be installed in the data center of the institution exposing the data. On the other hand, it is a considerable challenge to make such an environment productive and fully available. Performance may be improved mainly through the enhancement of resources (that is, via additional servers, databases and network devices), which is an expensive solution. Another approach is a distributed system.
In this case the elements of the system may be situated in different places, with the various entities located in the so-called cloud. As a result, Open Data is available in different places in the networked environment, which on the one hand solves the problem of performance, but on the other hand makes users face the challenge of keeping the stored data up to date in a distributed environment. This requires the development of tools for synchronizing data with the primary source. Another matter to be considered is compliance with the standards and IT protocols used for the data exposure. In order to make data reusable and easy to use in web portals, applications and systems, programming interfaces known as APIs (Application Programming Interfaces) should act as data carriers. A data repository using an API can provide the end user with data represented via the most popular Internet protocols, such as HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol) or LDAP (Lightweight Directory Access Protocol). These protocols usually send text data. However, the user may also take advantage of less popular protocols (e.g. binary protocols) which are more difficult to implement but offer better security and compression of the transmitted information. Contemporary trends show that in the majority of cases the platforms exposing both Open APIs and Open Data use Web Services technologies based on the HTTP protocol. However, another issue arises at this point, namely that Web Services (WS) exist in two different models. One of them is strictly related to Service Oriented Architecture (SOA), in which web services take advantage of the SOAP protocol and service semantics written in a WSDL specification (Newcomer, 2003). A slightly simpler approach is proposed by the RESTful architectural style (Richardson et al., 2013), which is based on the Resource Oriented Architecture (ROA), where access to the service (e.g.
to the Open Data) is gained through URL (Uniform Resource Locator) resources and methods of the HTTP protocol (such as GET, POST, PUT, DELETE). Another topic demanding attention appears here, i.e. the format of the data itself as exposed by Web Services. Simple data can be displayed in a text format, e.g. Comma Separated Values (CSV), or in a key-value format, but also with the help of XML (Newcomer, 2003) and JSON¹ (Rockford, 2009) notations. When dealing with complex data types containing, for instance, graphics, the way such content is exposed by Web Services must allow for the simple download of such objects. Instances of such data types are Geographic Information System (GIS) records representing maps, which are provided via e.g. the Web Map Service (WMS) protocol. The application of the open protocols and standards presented in this section facilitates both the exposure of Open Data and the creation of applications using it. The availability of SDKs (Software Development Kits) accelerates and simplifies the job of programmers even further. SDK modules, available for particular programming environments (Java, MS Visual Studio, PHP, etc.), provide programmers with objects that may be processed by application classes and methods. Using objects of the Enterprise JavaBeans specification of the Java language is a very good example of this type of high-level programming.

¹ http://www.json.org

3. The sources of obtaining Open Data and potential problems

The most crucial sources of Open Data are governmental and educational institutions. Governmental institutions "effortlessly" acquire collections of citizen data, industrial data, statistical data, etc. Some of this data is sensitive under the Polish law on the Protection of Personal Data and cannot be made public. However, a considerable number of records (such as statistical data) may be published within the Open Data model.
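Returning to the data formats mentioned above, the following sketch serializes one and the same Open Data record as JSON and as CSV, using only the Python standard library (the record and its field names are invented purely for illustration):

```python
import csv
import io
import json

# A single, invented Open Data record (illustrative fields only).
record = {"city": "Warszawa", "year": 2013, "students": 214000}

def to_json(rec):
    """Key-value notation, as typically returned by a RESTful service."""
    return json.dumps(rec, sort_keys=True)

def to_csv(rec):
    """Flat text notation: a header line followed by one line of values."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(rec))
    writer.writeheader()
    writer.writerow(rec)
    return buf.getvalue()
```

Both functions carry the same content; JSON keeps the key next to every value, while CSV factors the keys out into a header line, which is why CSV remains popular for simple tabular Open Data.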
Examples of data repositories coming from Polish governmental sources are the Otwarty Budżet (Open Budget) and Sejmometr (Parliament-Meter) sites. Another substantial collection of Open Data is the data related to educational institutions. In 2004 the education ministers of all member nations of the Organisation for Economic Co-operation and Development (OECD) signed a document in which they declared that all data obtained as a result of the allocation of public funds must be publicly accessible. A significant number of projects and communities on the Web are dedicated to the open publication of data. Worth mentioning here are the chemistry-oriented Blue Obelisk group² and the Linked Science project³. It should also be pointed out that there are two other potential sources of data (perhaps still not treated as Open Data), that is, social networking sites and telecom operators. Social networking sites such as Facebook, Twitter or Google practically base their entire business model on acquiring data from Internet users and making it available to third parties (mainly in order to present contextual advertisements to the users). Another issue is the acquisition of data from the public, e.g. by applying the methods of crowdsourcing. From the end-user point of view it is essential to provide a convenient interface for entering data or making it available. These interfaces can be offered in the form of:
- websites,
- dedicated desktop applications,
- mobile applications.

The first two mentioned above include, for instance, portals enabling citizens to report problems with infrastructure, such as FixMyStreet⁴, FixMyTransport⁵, or the Polish initiative Naprawmy To!⁶ (Let'sFixIt!). The latter also offers a mobile application for Android.
Another interesting example of a mobile application used to capture and share information within the Open Data model is the application called Yanosik⁷. This system warns users about police patrols, speed cameras and traffic incidents, and all information can be entered into the system by the application users.

² http://www.blueobelisk.org/
³ http://linkedscience.org/
⁴ http://www.fixmystreet.com
⁵ http://www.fixmytransport.com
⁶ http://naprawmyto.pl/home
⁷ http://yanosik.pl/

4. Open API – the key to success

Programming interfaces have been well known in computer science for many years. The adjective "open" has already been used in the names of numerous computer interfaces – just to mention Microsoft's ODBC (Open Database Connectivity) or the OpenGL standard of graphics programming. The term "Open API" appeared for the first time in the Internet world of social networking and Web 2.0. The creators of the term meant to emphasize the ease of combining different platforms and systems based on Web Services. It is important to mention that Open APIs are published on the Internet and their specifications are available to all potential users. Why is Open API so crucial? First of all, it enables programmers to create interesting end-user applications of commercial value. An "interesting" application should have the following characteristics:
- combine Open APIs from various sources (mashup),
- use or gain Open Data,
- have an attractive graphic representation,
- have rich functionality,
- have low hardware requirements,
- in the case of mobile applications, consume little power in relation to battery drain,
- have a "reasonable" price.

Combining all of the features listed above, often even contradictory ones, in one application is not easy.
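The mashup idea can be sketched as a thin layer over two data sources. In the sketch below (Python; every URL, field name and function is invented for illustration, not taken from any real operator API), one function would fetch JSON from a hypothetical operator Open API or Open Data repository, while the combining step is kept as a pure function:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoints -- stand-ins for an operator Open API
# (terminal location) and an Open Data repository (bus stops).
OPERATOR_API = "https://api.operator.example/location/{msisdn}"
OPEN_DATA_API = "https://opendata.example/stops"

def get_json(url):
    """GET a resource over HTTP and decode its JSON body."""
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def nearest_stop(location, stops):
    """Pure mashup step: choose the stop closest to the located terminal
    (squared-degree distance is enough for a toy comparison)."""
    return min(
        stops,
        key=lambda s: (s["lat"] - location["lat"]) ** 2
                    + (s["lon"] - location["lon"]) ** 2,
    )
```

A real application would call `get_json(OPERATOR_API.format(msisdn=...))` to obtain `location` and `get_json(OPEN_DATA_API)` for `stops`, then present the result of `nearest_stop(...)` to the user.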
A large number of the free applications available on the web have not proved to be commercially successful, and few of them deserve to be called "killer applications". Telecom operators have also recognized the potential of Open APIs and made a number of features available to programmers on the Internet. Some of those functions are: receiving and sending SMS, MMS or USSD (Unstructured Supplementary Service Data) messages, locating a mobile terminal (mobile phone), checking its status (free, busy) or adding the amounts for goods and services purchased on the Web to the subscriber's account (payment API).

5. Examples of applications using Open APIs and Open Data

Orange Labs created a number of applications within the Open Middleware 2.0 Community project that integrate the functions of a telecom operator with an Open Data facility (Podziewski et al., 2012; Bogusz et al., 2012; Litwiniuk et al., 2012), as well as the concept of using an Open API in a variety of systems and applications (Kalitska et al., 2012; Legierski & Korbiel, 2011). This section describes several applications that use Open APIs and Open Data.

Emergency Button (Podziewski et al., 2012) – the idea of the Emergency Button application is based on the interaction between a senior and a caregiver. Seniors are people with health problems requiring special care, help or assistance. If they use mobile phones, they are given the opportunity to start the Emergency Button service (by pressing a key with a programmed USSD code or automatically – whenever the phone detects the user's fall). When the action is initiated, the caregiver receives a message (an SMS) containing the location of the senior, including his/her geographical coordinates and approximate address. The application uses the following APIs: Receive USSD, Send SMS and Terminal Location, as well as the Google Maps API.

**BusStop** (Litwiniuk et al., 2012) – this is an application focused on public transport.
The system provides the user with information on the timetables of selected bus lines, the routes of individual lines and the lines running in the user's vicinity. The service is based on the functionalities offered by mobile networks. The BusStop application uses the USSD channel, through which it is possible to provide the service. In the next step, the system locates the user and his nearest bus stop, or selects the timetable required by the user. In the final stage, the information requested by the user is sent to him via SMS. This application employs the following APIs: Receive USSD, Send SMS and Terminal Location, and the data from ZTM Warszawa (Warsaw Public Transport Authority).

**Telco 2.0 for UC** (Bogusz et al., 2012) – Unified Communications (UC) is a concept which has recently become popular on the business communication market. It combines all possible communication channels in one application and under one phone number. UC communication can be expanded with channels such as SMS, MMS and USSD. Telco 2.0 for UC enables the use of the operator's functions in the OpenScape UC system from Siemens. The typical user has an office VoIP phone connected to the OpenScape Voice software telecom switch using the SIP protocol. Such a subscriber also has access (using a computer and a web browser) to the OpenScape Web Client application, which enables the use of advanced Unified Communications functions such as conferencing, Instant Messaging conversations, phone management (CTI – Computer Telephony Integration), etc. The architecture of the Siemens OpenScape environment has been complemented with the Telco 2.0 for UC server, the task of which is, on the one hand, to establish and maintain a communication session (using Open APIs) with the service provider's Service Delivery Platform (SDP) and, on the other hand, to expose services to the subscribers of the Unified Communications system.
Telco 2.0 for UC allows users to send SMS and USSD messages, as well as to locate a mobile terminal. The application uses the Send SMS, Send USSD and Terminal Location APIs. Other Open APIs used in the system are Google Maps and the OpenScape UC SOA API.

**Social Game** – this is a mobile social networking city game. The main objective of this scheme was to build a platform which enables users to create their own scenarios for urban games. The scene of the game can be any place specified as its territory by the designer of the game. The player can be anyone with a mobile phone offering access to the Internet. The audience, passively using their laptops or desktops, transfer hints to the players. The idea of an urban social game is based on the use of information provided by the operator's platform, which exposes Open APIs. As a result, the city game has been complemented with information provided directly by the mobile network. This additional telecommunications aspect allows a mobile phone to be used as a terminal to control the game in which the users participate. Through USSD messages the users can be informed about events taking place in the game which are significant from their point of view, and they may inform the platform about their activities. Additionally, the system determines the player's location via the platform in order to handle the scenario. The application uses the Send SMS, Terminal Status, Terminal Location and Facebook APIs.

6. Telecom operator as a potential source of Open Data

Telecom operators store extensive amounts of data, both about their customers and about their telecommunications infrastructure. Some of it can be published and made available as Open Data. This type of data includes, for example, information about the architecture of the mobile network. Data such as the location of BTSs (Base Transceiver Stations) and information about mobile cells (Cell ID) are the subject of two crowdsourcing projects, BTS Search⁸ and Open Cell ID⁹.
The information concerning the infrastructure components of mobile operators is collected by Internet users and made available as Open Data. However, the telecom operator is able to expose the same set of data on the Internet in the form of an Open API. It should be emphasized that this data will obviously be more current than the information gathered by the Web community. The operator can even go a step further and offer more sophisticated and detailed data (this time, for a fee), including e.g. the location of the BTS, the Cell ID values, the Base Station Identity Code, the Neighbor Cell and the ID of the BCCH (Broadcast Control Channel). This data set can be useful for external entities building algorithms that determine more suitable locations for mobile stations and evaluate the quality of the operator's network. Another set of data that can be offered in the form of Open Data may represent aggregated information about telecommunications traffic between cities. When selling data, the provider can offer a more detailed set of information on, for instance, the telecommunications traffic observed from the BTSs. To sum up, it seems conclusive that telecom operators should not overlook the benefits of data sharing, both in the Open Data and in the paid data model, and should develop business models in which they can offer these types of data to their customers.

7. Conclusions

The use of Open API programming interfaces is a prerequisite for the effective use and re-use of Open Data. This is strictly related to the fact that the productive use of data basically means its employment in web applications and web portals. Only the exposure of data in the form of an API by the source system can keep it up to date and valid for the user. At the same time, it can secure the data and protect the integrity of data transmission.
In the exposure of Open Data and Open APIs in recent years, in particular in the Internet environment, it is possible to observe a trend towards Web services based on the RESTful style and data representation in the XML and JSON standards. Standards related to the use of SOA and the SOAP protocol, also on an upward trend, have been popular mostly in the private networks of companies and government organizations. The examples of applications created in Orange Labs clearly show that the combination (mashup) of Open APIs and Open Data is crucial for the functionality of innovative applications, and that Open Data itself, made available by means of Open APIs, is a key element of the network ecosystem.

⁸ http://btsearch.pl
⁹ http://www.opencellid.org/

Open Data – what they are and how to benefit from them

Abstract

Purpose/Thesis: The aim of this paper is to present the concept that Open Data and the functions provided by telecommunications operators in the form of Open APIs, integrated in an end-user application, may significantly improve the process of communication between people and institutions.

Approach/Methods: Open Data is one of the key elements of the Internet ecosystem. Other elements, such as open programming interfaces, open source and APIs, make the Internet environment reliable, scalable and extendable. The paper presents examples of applications integrating communication functions in the form of Open APIs with Open Data. The work is based on the case study method.

Results and conclusions: The main message is the conclusion that Open Data, combined with the functions provided by telecommunications operators in the form of Open APIs, significantly improves and facilitates the processes of communication between people and institutions.
Originality/Value: This paper proposes the integration of Open Data with real-time communication functions provided by telecommunications operators in the form of Open APIs. The examples of applications show that Open Data and Open APIs are simple but effective tools with which users can create application environments that are convergent and coherent at the source code level.

Keywords

SEBASTIAN GRABOWSKI – graduated from the Faculty of Electronics and Telecommunications at the Koszalin University of Technology, the Faculty of Management at the University of Warsaw and the Warsaw School of Social Psychology. His research focuses on Open Data, Open APIs, interactions between users of telecommunications networks and the Internet, as well as issues related to Open Government. For the past 12 years he has been involved in the telecommunications industry; he currently works at Orange Poland, where he is the Director of the Research and Development Center. He is a co-founder of the Open Middleware 2.0 Community – a program bringing together different communities of developers, researchers, technology suppliers and IT companies around the world of telecommunications and IT.

Contact the author:
Sebastian.Grabowski@orange.com
Orange Polska
Badania i Rozwój Sieci i Platform Usługowych
Centrum Badawczo-Rozwojowe
Obrzeżna 7
02-691 Warszawa
Modelling and Managing Knowledge through Dialogue: a Model of Communication-based Knowledge Management. Violaine Prince.

HAL Id: lirmm-00122846
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00122846
Submitted on 5 Jan 2007

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

MODELLING AND MANAGING KNOWLEDGE THROUGH DIALOGUE: A MODEL OF COMMUNICATION-BASED KNOWLEDGE MANAGEMENT

Violaine Prince
University of Montpellier and LIRMM-CNRS
161 Ada Street, 34392 Montpellier cedex 5, France
Email: prince@lirmm.fr

Keywords: Knowledge Management, knowledge sharing, multi-agent systems, heterogeneous agents.

Abstract: In this paper, we describe a model that relies on the following assumption: ontology negotiation and creation are necessary to make knowledge sharing and KM successful through communication. We mostly focus on the modifying process, i.e. dialogue, and we show that a dynamic modification of agents' knowledge bases can occur through message exchanges, messages being knowledge chunks to be mapped onto the agents' KBs. Dialogue takes account of both success and failure in mapping. We show that the same process helps repair its own anomalies. We describe an architecture for agents' knowledge exchange through dialogue. Finally, we draw conclusions about the benefits of introducing dialogue features in knowledge management.

1 INTRODUCTION

Knowledge has mutated from a personal expertise into a collective lore.
Thus, knowledge management (KM) and modelling contain social features and deal with societies of agents. As a society, agents interact, and the conventional aspects of interaction have attracted the attention of the KM community, from different points of view: (i) the deep relationship between knowledge and knowledge communication (Ravenscroft and Pilkington, 2000); (ii) the power of the communicative process as a knowledge modifier (Parson et al., 1998; Zhang et al., 2004). Although communication is an active component in KM, most studies have dealt with cases where agents were sufficiently close in type and knowledge to share common ontologies. In extensive KM systems, knowledge sharing is hampered by the lack of common ontologies between artificial agents. Integrating different designations for the same concepts has been tackled by (Williams, 2004) in a shared environment named DOGGIE (Distributed Ontology Gathering Group Integration Environment), allowing the negotiation of terminology among different ontologies. The author's approach is that of a 'peer-to-peer' situation that allows agents to share knowledge and learn. His aim was more to demonstrate how heterogeneity in representations could be overcome than to focus on the dynamic process that underlies it, that is, dialogue. Our approach follows the same main line of thought, but the added value is that we focus on the properties of dialogue as a process of incremental knowledge adjustment. Human dialogue occurs between interlocutors distinct in state, knowledge and intents. It happens when a need for knowledge occurs. It is used when discrepancies in representations appear. Applying dialogue requirements to KM is an issue we will address in section 2. Although difference is a mandatory component for the existence of dialogue, too great a distance between agents might not even give a dialogue the chance to occur.
Section 2 states the likely requirements for dialogue success and, consequently, those for adequate KM through interaction. Section 3 presents an architecture implementing the process between artificial agents, an instantiation of which, dedicated to agent teaching, has been presented in (Yousfi and Prince, 2005). Teaching is one of the tasks in which knowledge sharing is tracked best (Williams, 2000). This architecture has been evaluated through its applications: the teaching application, dedicated to conceptual knowledge revision, presently runs as a prototype. Another application, about the acquisition of risk management ontologies (Makki et al., 2006), is being developed according to both the model and the architecture described in this paper.

2 DIALOGUE AS A MEANS OF SHARING KNOWLEDGE

Agents are considered heterogeneous when they differ in nature or in a major attribute. In this paper we will restrict the definition of heterogeneity to cognitive artificial agents with the following properties: (i) different ontologies and knowledge bases (i.e. different world representations); (ii) different tasks within the system; (iii) possibly belonging to different applications that need to share knowledge, Web services or activity. Heterogeneity produces variations in building, sharing and communicating knowledge.

Theoretical Framework in Agents' Knowledge Sharing

Human beings favour dialogue as a major means of knowledge acquisition; each agent considers any fellow agent as a knowledge source 'triggered' through questioning. Information is acquired from the answer as an external possible hypothesis. This is the starting point of both an acquisition and a revision-based process, in which the external fact, the message, is subject to confrontation with the inner knowledge source of the requiring agent. It drives the latter to proceed to derivation by reasoning. The feedback observed in natural dialogue is that the knowledge source can be addressed again to confirm understanding.
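The acquisition-and-revision cycle just described can be illustrated with a toy sketch (Python; the class and its method names are ours, not taken from the paper's implementation): an answer is treated as an external hypothesis and confronted with the requiring agent's knowledge base, being integrated on mapping success and queued for a confirmation question on mapping failure.

```python
class Agent:
    """Toy cognitive agent holding a knowledge base of concept -> definition."""

    def __init__(self, kb):
        self.kb = dict(kb)        # the inner knowledge source
        self.to_confirm = []      # mapping failures awaiting a feedback question

    def ask(self, other, concept):
        """Query a fellow agent, then confront its answer with our own KB."""
        answer = other.kb.get(concept)
        if answer is None:
            return None                      # the source knows nothing either
        known = self.kb.get(concept)
        if known is None or known == answer:
            self.kb[concept] = answer        # mapping success: integrate
        else:
            self.to_confirm.append(concept)  # mapping failure: ask back later
        return answer
```

Entries in `to_confirm` stand for the feedback step of natural dialogue: the discrepancy is not silently overwritten but becomes the subject of a follow-up question to the knowledge source.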
Along with other researchers (Finin et al. 1997), (Zhang et al. 2004), we consider this process as translatable into the software world, and interesting as an economical means to acquire, mediate and share knowledge. The cognitive agents we are representing can be seen as entities going through the following cycle (Prince 1996): (i) capture, and symmetrically, edit data, where data is every trace on a medium; (ii) transform the captured data into information, as the output of an interpretative process on data; this result will be either stored as knowledge or discarded; (iii) keep and increase knowledge, which, in turn, is of various types: (a) stored information seen as useful; (b) operating 'manuals' to interpret data; (c) deriving modes to produce knowledge out of knowledge (reasoning modes); (d) strategies to organise and optimise data capture, information storage, and knowledge derivation. Knowledge is here by essence defined as explicit knowledge, the only variety that can be implemented. (iv) acquire bypasses to data interpretation and action on the information environment through time-saving procedures, e.g. developing know-how in the information field. This four-part model is called DIKH (for Data, Information, Knowledge and know-How). KM relies mostly on the two last components: "know-How" and "Know" are distinguished according to their properties, involving explicitness vs implicitness and transferability vs non-transferability. Since it is representable and conceptualisable, "Know", or "Knowledge" (K), has been the part most investigated by KM. "Know-how" has been claimed to be embedded in expert systems, but since it is implicit it cannot be easily described. Therefore, know-how has to be assumed as an important skill of cognitive agents, but not one to be further investigated, except in very restricted areas. Since KM is at stake, we are focusing on the K part of the model.
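As a minimal illustration of the DIKH cycle, the sketch below shows data being captured, interpreted into information, and kept as knowledge only when judged useful. All names (the interpretation function, the agent class, the toy usefulness criterion) are our own illustrative assumptions, not part of the model's formal definition:

```python
# Illustrative sketch of the DIKH cycle: capture data, interpret it into
# information, keep it as knowledge only if useful. All names are hypothetical.

def interpret(data):
    """Interpretative process on data: here, keep non-empty strings."""
    return data.strip() or None  # information, or nothing interpretable

class DIKHAgent:
    def __init__(self):
        self.knowledge = []  # stored information seen as useful

    def is_useful(self, information):
        # Toy usefulness criterion: novelty with respect to current knowledge.
        return information not in self.knowledge

    def capture(self, data):
        information = interpret(data)
        if information is not None and self.is_useful(information):
            self.knowledge.append(information)  # kept as knowledge
        # otherwise the information is discarded

agent = DIKHAgent()
agent.capture("  birds fly  ")
agent.capture("birds fly")   # duplicate: interpreted, then discarded
agent.capture("   ")         # no interpretable information
print(agent.knowledge)       # ['birds fly']
```

The know-how component of the model would correspond here to the `is_useful` shortcut: a procedure that bypasses full interpretation when deciding what to keep.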
DIKH assumes recursive modelling: part of knowledge derives from data, part of it derives from present knowledge, and part of it is an economical optimisation of its organisation. The K part puts in a nutshell several known elements of KM: (i) lexical knowledge represents ontological contents, relations and organisation; (ii) produced knowledge is the kept part of information (interpreted data); it might rejoin either lexical knowledge (new concepts to integrate in the ontology) or production knowledge (rules, general laws); (iii) production knowledge has been mostly investigated by research in AI: rules and reasoning, going further to meta-rules; strategies for organising KM are an important issue in cognitive agent modelling; (iv) the "know-how" part of knowledge, an originality of the DIKH model, builds upon profiling, preferences and presentation, as the agent's signature key. In single-application architectures, all agents tend to share the same "know-how", since the latter belongs to the architecture designer. In heterogeneous systems, many architectures might be confronted with each other. Web semantics and services have integrated this aspect: languages such as XML have been devoted to emphasising the know-how about knowledge presentation. The DIKH recursive modelling can easily represent the KM part of a rational natural agent (a person), as an extension from artificial-agent environments to human-system environments. Since "information" is a temporary status for knowledge, the model reduces to the representation given in figure 1.
A rational cognitive agent might be designed, from the static point of view, as: (i) a set of lexical skills: ontological knowledge, relationships between names and concepts, variables and their domains; (ii) the core of a reasoning engine: local axioms, strong beliefs that help derive other rules, and rules having a lesser status; (iii) last, elements of belief and knowledge that help optimise the engine; these are the adaptive modes derived from experience.

A Message/Knowledge Chunk Exchange Theory

A message (Jakobson and Halle 1956) can be defined as a formatted data set which: (i) is emitted as a sender's intention concerning its recipients; (ii) follows a protocol (conventions in format and exchanges); (iii) has a content; such that the whole (form, intention, protocol, contents) is supposed to modify the internal state of the recipient agent(s). Related to artificial agent communication, the message properties are the following: (i) presentation: formatting properties of the message, as a meta-format; (ii) content: in itself a complex system that can be decomposed into: 1. how content is formalised, i.e. (a) the elements selected in the chosen language to designate different items, and (b) the composition rules used for the message; 2. the semantic content of the message: lexical data meaning and formal compositions; 3. the informational content: what the recipient agent has been able to understand from the received message; 4. the intentional content: what the sender agent has wanted to transmit.

Definition: The formal structure of a message can be described as a ternary structure composed of: - Data: the lexical and syntactic items composing the message strata, equivalent to the rheme in Speech Act Theory (SAT) (Searle 1969). - Knowledge: which is itself decomposed into: 1. the necessary knowledge to encode/decode data (1); 2. the message semantic content as knowledge (2), equivalent to the topic in SAT; 3.
the knowledge to embed/derive the semantic content (intentional/informational content) (3). - Formulation (the adapted terminology for "presentation" in the formal structure): style and formalism are qualitative indicators.

The preceding definition bears a striking resemblance to the agents' K-models. Hence, the formal structure of a message and the K-model of the agent can be seen as related by a strong morphism. (i) Data in the message definition is provided by the lexical skills of both sender and recipient agents. (ii) The semantic content is the knowledge chunk exchanged between agents: it is used to enhance the recipient's lexical skills in ontological knowledge building or updating. (iii) The intentional versus informational contents of the message tackle the issue of confronting the knowledge production modes, or engines, of both locutors. (iv) Last, message formulation is the result of applying the sender's preferences about message exchanges, and it triggers the recipient's adaptive modes to accept the message, or to reject it if it is not properly designed. This explains why the exchange of messages is a natural mode of knowledge enhancement, if the message is structurally compatible with the K-model of an agent. Let $A_\mu$ be the K-model of agent $\mu$, and let $m$ be the formal structure of an incoming message. The question is: is $A_\mu \cup m$ a new possible state of $\mu$'s K-model? For this, we need an interpreted form of the message.

Definition: An interpreted form of a message is obtained through the following process: (i) applying decoding knowledge: unification algorithms and abductive rules are used to initiate this phase; (ii) semantic interpretation of the decoded form (informational content): deductive and inductive reasoning is used; (iii) if formulation is not a liability for interpretation, the informational content should be equal or close to the semantic content of the message.
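The ternary structure and its interpreted form can be sketched as a simple container. The field names and the toy decoding test below are illustrative assumptions of ours, not part of the formal definition:

```python
# Illustrative sketch of the ternary formal structure of a message
# (Data, Knowledge, Formulation); field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Message:
    data: list                 # lexical and syntactic items (the rheme)
    semantic_content: dict     # knowledge chunk exchanged (the topic)
    decoding_knowledge: dict   # knowledge needed to encode/decode the data
    formulation: str           # style/formalism, e.g. "XML" or "KQML"

def interpret_message(message, recipient_lexicon):
    """Interpreted form: decode the data, then derive the informational
    content. If formulation is not a liability, the informational content
    should be equal or close to the semantic content."""
    decoded = [d for d in message.data if d in recipient_lexicon]
    if len(decoded) < len(message.data):
        return None  # decoding failure: unknown lexical items
    return dict(message.semantic_content)  # informational content

m = Message(data=["risk", "ontology"],
            semantic_content={"risk": "is-a hazard concept"},
            decoding_knowledge={"language": "english"},
            formulation="KQML")
print(interpret_message(m, recipient_lexicon={"risk", "ontology", "agent"}))
```

When the recipient's lexicon lacks an item of the data, the interpreted form cannot be built, which is exactly the situation where a reformulation dialogue becomes necessary.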
Given the preceding results, a message, seen from the point of view of its formal structure, can be designed as a knowledge bridge between agents. Its purpose is to: (i) allow agents to update their knowledge through other agents' knowledge; (ii) fix knowledge discrepancies between agents. Let $m$ be the formal structure of the message to be exchanged between two agents $\mu$ and $\nu$. The three components of $m$, data, knowledge and formulation, can be designed as follows: (i) the data of $m$ belongs to the lexicon of $\mu$ as a sender, and should also belong to the lexicon of $\nu$; (ii) the knowledge in $m$ needs the corresponding items of $\mu$'s and $\nu$'s knowledge engines; however, if the message is supposed to increase the recipient's knowledge, this part also comprises knowledge that is either new, or whose extensions are new, to the recipient; (iii) last, the formulation is the formalism used for the bridge (language, or protocol); it requires adaptation from the sender to the recipient, and vice versa. Figure 2 shows a representation of the formal structure of a message as a bridge between cognitive agents.

Dialogue as Knowledge Adjustment Seeking

A theory of messages suggests dealing with the following cases: (i) the message has been misinterpreted, or not decoded at all, which is a failure in communication (an issue we will not tackle here); (ii) the message, although correctly interpreted, has roused contradictions and has thus failed to reach its goal; (iii) a combination of both cases, a common situation in natural dialogues. We will focus here on contradiction in knowledge or belief revision. Belief revision appears as a compulsory process when: (i) the recipient finds its knowledge contradicted by what the others know (this happens when launching the DetectConflict primitive described in the next section); (ii) the informational content the recipient has derived from the message seems irrelevant to the dialogue situation.
The first case is a pure KM problem. An agent needs to receive a message, i.e., a future knowledge chunk of its own K-model, and this leads to a major revision of its beliefs and lexical skills. Therefore a "why"-type of dialogue is initiated on the recipient's behalf, and revision can be undertaken as soon as the recipient is convinced of the quality of the received knowledge. This has been mostly investigated in (Parson et al. 1998) and (Zhang et al. 2004). They have shown that not only is the contradicted agent likely to change, but also its contradictor, since the latter needs to restructure its own KB in order to convince the other. Our theory reaches the same results, since the "why-questions", seen as messages from the former recipient, force the sender to use its skills in formulation and in knowledge derivation. In (Yousfi and Prince, 2005), we have shown how an artificial "teacher" agent modifies its KB during the process, at least at the student-model level; furthermore, while teaching, it might spot its own flaws in knowledge. The other case is the well-known problem of misunderstanding. Whether misunderstanding bursts out from decoding errors or mistaken reasoning, the fact is that the equality informational content (INFC) = semantic content (SC) = intentional content (INTC) is sometimes not achieved between formal agents. Until now, artificial agents have tended to abort interaction whenever it failed. However, since artificial agents need to be more robust, they have to be provided with functionalities helping them to pursue dialogue further.
Our theory provides some heuristics for approximating this double equality: (i) re-formulation dialogues tend to achieve the first equality, INFC = SC; by using other knowledge items, or other laws perhaps closer to its present K-model, the recipient agent might reach a state where it finally understands the message; (ii) every careful choice of formulation on the sender's behalf might help achieve the second equality, SC = INTC. Argumentation/explanation dialogues play an important role in trying to reach a good approximation. In conclusion, whenever agents need to interact, it is not a problem if interaction is not a successful one-shot process. Dialogue is an incremental process acting as a mutual adjustment mechanism that repairs both its own failures and the agents' mutual discrepancies. However time-consuming, dialogue is less costly than a wrong action based on false beliefs. It is thus very important in decision-making tasks where crucial stakes are involved.

3 A GENERAL ARCHITECTURE FOR KM THROUGH DIALOGUE

The general software architecture modelling a knowledge sharing and revision activity between two rational artificial agents is presented in figure 3. The components that have to be implemented are the agents' K-models (comprising the other agent's model), the communication primitives allowing message exchanges, and the dialogue strategies underlying the communication protocol. World and activity models are provided either from each agent's environment or activity, or from the applications and services they need to address or from which they are issued.
**Implementing Agents**

Implementing the K-model is an easy task, since it involves: (i) the local agent ontology and lexical skills; (ii) a set of Reasoning Primitives, an access to Dialogue Strategies (sharable between rational agents), and a set of local rules for knowledge and message construction; (iii) one or many representation formalisms with which the agent processes (e.g., XML for structure, KQML for communication acts (Finin et al. 1997), etc.).

Sharable Reasoning Primitives are the following:
1. **Add(K)**: adds knowledge to the K-model.
2. **Revise(K, O)**: triggers an algorithm trying to attach parts of K to the ontology, described in (Yousfi and Prince, 2005). It provides a flag REVISEF indicating success (or not) in attachment, and where it happens.
3. **DetectConflict(K)**: if REVISEF = false, detects conflictual attachments. Its result is the issuing of a CreateMessage(M, OtherAgent) (see next subsection).
Two other modal primitives are necessary for a situation where knowledge has to be shared: (i) **Wanted** data.

**Communication Primitives**

Three basic communication primitives are necessary: CreateMessage, AcceptMessage and RejectMessage. All deal with three variables: D standing for Data, K standing for Knowledge and F for Formulation. The first receives the lexical choice after the K part has been expressed in a logical form (the INTC, intentional content) and translated into the language instantiating the F part. As an example, we present CreateMessage below; the other primitives follow a similar description. RejectMessage returns the value of the part responsible for rejection (language, or unknown data), whereas AcceptMessage triggers a matching and revision procedure within the recipient agent's K-model, which might in turn end up with another CreateMessage primitive. These primitives also use the basic functions Sendto and Ack (for acknowledge), which deal directly with interfacing with the other agent.
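A minimal sketch of how the three communication primitives might interact follows. Only the primitive names and the D/K/F variables come from the architecture; the function bodies, the set of known formulations and the naive merge standing in for Revise are simplifying assumptions of ours:

```python
# Simplified sketch of the communication primitives; D = data,
# K = knowledge, F = formulation. A real agent would also call
# Sendto/Ack to interface with the other agent (not shown).

KNOWN_FORMULATIONS = {"KQML", "XML"}  # assumed protocol set

def create_message(D, K, F):
    """Bundle the three parts into a message structure."""
    return {"D": D, "K": K, "F": F}

def reject_message(msg, lexicon):
    """Return the part responsible for rejection, or None if acceptable."""
    if msg["F"] not in KNOWN_FORMULATIONS:
        return "F"  # unknown language/protocol
    if any(item not in lexicon for item in msg["D"]):
        return "D"  # unknown data
    return None

def accept_message(msg, k_model, lexicon):
    """Trigger matching and revision within the recipient's K-model;
    this might in turn end up issuing another CreateMessage (not shown)."""
    reason = reject_message(msg, lexicon)
    if reason is not None:
        return ("rejected", reason)
    k_model.update(msg["K"])  # naive stand-in for Revise(K, O): just merge
    return ("accepted", None)

k_model = {"hazard": "known concept"}
msg = create_message(D=["risk"], K={"risk": "is-a hazard"}, F="KQML")
print(accept_message(msg, k_model, lexicon={"risk", "hazard"}))
```

Note that rejection reports which of the three parts failed, mirroring the description of RejectMessage above.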
Dialogue Strategies: Message-level and Scripts

Message Roles: The exchange of communication primitives follows dialogue strategies available to every artificial agent. A strategy is related to the agent's goal and the satisfaction of its needs. It can be typed in order to extract information from the other agent, or to make it perform an action or a task. Thus, every message plays a 'role' in a dialogue instantiation, according to a strategy. Roles have been labelled after the Speech Act Theory illocutionary functions (Searle 1969) or according to the functional roles theory (as in (Yousfi and Prince, 2005)), and depend on the task or activity type. At the message level, roles are the materialisation of Dialogue Strategies. For instance, the most used speech acts in agent modelling and communication are performative (i.e., the message runs an applet) or directive (the message is a command to the other agent). The functional roles we have used, mostly in agents learning, are ask-for-knowledge, ask-for-explanation, give-knowledge, give-explanation, assert-satisfaction and assert-unsatisfaction. Those were modals applied to the CreateMessage primitive and transmitted with the message. In this paper, we present a generalisation of the architecture and components, and one can notice that the role R is sent as an argument of the Sendto command.

Dialogue Strategies from expectations: When an agent issues a message with a given role, it expects in turn a reply with a compatible role.
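The expectation mechanism can be sketched as a compatibility table between roles. The functional role names are those used in our agents-learning application; the specific pairings below are illustrative assumptions:

```python
# Illustrative expectation table: a message sent with role R expects a
# reply whose role belongs to COMPATIBLE[R]. Pairings are assumptions.

COMPATIBLE = {
    "ask-for-knowledge":   {"give-knowledge"},
    "ask-for-explanation": {"give-explanation"},
    "give-knowledge":      {"assert-satisfaction", "assert-unsatisfaction",
                            "ask-for-explanation"},
}

def reply_is_compatible(sent_role, reply_role):
    """Check whether a reply role satisfies the sender's expectation."""
    return reply_role in COMPATIBLE.get(sent_role, set())

print(reply_is_compatible("ask-for-knowledge", "give-knowledge"))    # True
print(reply_is_compatible("ask-for-knowledge", "give-explanation"))  # False
```

An incompatible reply is precisely what would trigger the adjustment or failure-handling scripts described next.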
The adjustment script available to agents follows these guidelines (on failure, it shifts to the other script):

    TransformIn(M, O, A)                -- the message is transformed into its parts and matched with ontology and axioms
    Revise(O, A)                        -- reasoning is applied on the added elements
    if Wanted(Kn) then
        R ← 'ok'
        CreateMessage(M, OtherAgent, L) -- the agent has found the wanted knowledge and asserts its satisfaction
    else
        R ← need                        -- the agent sets the role to its need
        Call Adjustment                 -- and calls the strategy recursively

Let us note that there is a timeout associated with recursive calls, i.e., if no reply is given, or if the dialogue enters an endless loop, then the dialogue strategy component stops communication. Unfortunately, we have no room here to present the FailureCommunication script, but let us say that it deals with reformulation (shifting languages) and with explanation roles and strategies in message exchanges.

4 CONCLUSION

The model presented here, and some elements of its architecture, have been instantiated in a learning environment for cognitive artificial agents. It is sufficiently general to be implemented within different applications and activities, as long as they need an advanced communication framework for knowledge sharing, revision and communication. The originality of the model lies in modelling the dynamic process in KM, i.e., dialogue, as the crucial component in knowledge revision, and not only in considering the static dimension of KM. What has not been explicitly detailed here is that the same theory applies to human-computer interaction and to collective vs individual agents' KM. The issue dealt with goes much further than artificial agent programming. But what we have shown here is that, even restricted to a formal and decidable framework, the theory takes into account knowledge conflict and provides it with solutions inspired by natural agents' behaviour.

REFERENCES

J. Makki, A.M. Alquier and V.
Prince. Learning risk ontologies from both corpora and cognitive agents. LIRMM-CNRS research report, Montpellier, France, 2006.
1 Use UCSF Chimera packages in any Python 2.7 interpreter
2 Projects using PyChimera
2.1 Quickstart
2.2 Installation
2.3 FAQ & Known issues
2.4 For developers
3 Indices and tables
4 Acknowledgments
5 Citation

# Use UCSF Chimera packages in any Python 2.7 interpreter

With PyChimera you can...

- Run scripts depending on chimera from the CLI with `pychimera script.py`.
- Enable `import chimera` in interactive coding sessions outside UCSF Chimera, including IPython and Jupyter Notebooks.
- Launch a standard UCSF Chimera instance, with the benefit of importing all your conda or virtualenv packages, with `pychimera --gui`.

# Projects using PyChimera

- GaudiMM
- InsiliChem Tangram
- @jrjhealey’s chimera scripts & bioinfo-tools

## 2.1 Quickstart

PyChimera lets you use the full UCSF Chimera codebase in any Python 2.7 project.

### 2.1.1 Why?

UCSF Chimera is an extensible molecular visualization tool with a vast collection of modules gathered over years of development. These tools allow you to perform serious molecular modelling jobs even without Python knowledge (they also offer a very versatile text interface with custom commands). However, all this code is only available if you use the bundled Python 2.7 interpreter to run your scripts, like this:

```
chimera --nogui --script "path/to/my/script.py arg1 arg2 ..."
```

Or, using the GUI command line toolbar, by running `runscript path/to/my/script.py arg1 arg2 ...`. This is less desirable than simply running `python script.py arg1 arg2 ...`. With PyChimera, you can run `pychimera script.py` and forget about whether you are using the UCSF Chimera interpreter or not. It just works. Additionally, it offers some more features:

- Interactive sessions in the Python interpreter where you can just run `import chimera`, like it should be.
- Full compatibility with IPython and Jupyter Notebooks (read Extra packages).
- A simple API to run your Chimera-dependent scripts with any Python interpreter.
- Conda recipes for UCSF Chimera and UCSF Chimera headless, for automated testing and deployment.

If you want more details, be sure to check the paper (already submitted; once published, it will be linked here).

### 2.1.2 Usage

Run `pychimera -h` for quick help. Basically:

**Running code**

To execute a script:

```
pychimera script.py
```

To launch a module that uses UCSF Chimera internally:

```
pychimera -m this
```

To execute any Python statement:

```
pychimera -c 'import chimera'
```

To know which UCSF Chimera instance is being loaded:

```
pychimera --path
```

**Interactive sessions**

To start an interactive Python session with importable UCSF Chimera modules:

```
pychimera                   # start the standard Python interpreter
pychimera -i some_file.py   # run a script and stay in the standard Python interpreter
pychimera -im module        # same, but with a Python module
pychimera -ic "string"      # same, but with a command
pychimera ipython           # launch IPython interpreter
pychimera notebook          # launch IPython notebook
```

To launch the UCSF Chimera GUI with custom packages (check InsiliChem Plume as an example!):

```
pychimera --gui
```

### 2.1.3 Multiplatform compatibility

UCSF Chimera is available for Linux, Mac OS X and Windows. PyChimera does its best to provide the same compatibility for all three platforms, and each release is continuously tested in Travis CI (Linux, Mac OS X) and AppVeyor (Windows). Despite our efforts, some features might not be as polished on some platforms.
The table below summarizes the current state of implementation:

| Features              | Linux |
| --------------------- | ----- |
| `pychimera`           | Y     |
| `pychimera script.py` | Y     |
| `pychimera ipython`   | Y     |
| `pychimera notebook`  | Y     |
| `pychimera --gui`     | Y     |
| `pychimera -c`        | Y     |
| `pychimera -i`        | Y     |
| `pychimera -m`        | Y     |

\* In Windows, use `python -m pychimera`, not `pychimera`.
\*\* Only with `qtconsole` installed.

PyChimera has been successfully installed and tested in the following 64-bit systems:

- **Linux**
  - Arch Linux with UCSF Chimera 1.10, 1.11, 1.12
  - Ubuntu 14.04 with UCSF Chimera 1.10, 1.11, 1.12 (Travis CI’s)
  - CentOS 6.3 with UCSF Chimera 1.11.2
- **Mac OS X**
  - 10.11 El Capitan with UCSF Chimera 1.12
  - 10.10 (Xcode 6.4) with UCSF Chimera 1.12 (Travis CI’s)
- **Windows**
  - 7 SP1 with UCSF Chimera 1.12
  - Windows Server 2012 R2 with UCSF Chimera 1.12 (AppVeyor’s)

## 2.2 Installation

PyChimera is a wrapper around UCSF Chimera, so it makes sense that you have to install it beforehand. Additionally, to reduce compatibility problems, PyChimera is best used within a `conda` environment.

### 2.2.1 Recommended steps

1. **Install Miniconda**

Anaconda is a Python distribution that bundles all the scientific packages you will ever need. It also provides a fancy package manager to update and install more if you need them. A stripped-down version of Anaconda that only includes the package manager (`conda`) and the essential Python packages, called Miniconda, is also available, and it’s the one we will use here. Go to the Miniconda webpage and download the installer for your platform. For the sake of simplicity, choose the Python 2.7 version.
You can use the Python 3.6 version if you really want, but you will need to create a Python 2.7 environment later. Once you have downloaded the installer, run it. Depending on the platform, you have to do it differently.

**Linux and Mac OS X**

Just run `bash ~/Downloads/Miniconda*.sh` and follow the wizard. Make sure to answer yes when the installer asks about modifying your `.bashrc`. When you are done, close the terminal and reopen it to apply the changes.

**Note:** The installer requires `chimera` to be in `$PATH`. In Linux, you can symlink `$CHIMERA_INSTALL_DIR/bin/chimera` to any location in your `$PATH` (e.g., `~/.local/bin`). In OS X, make sure to use the binary located in `/Applications/Chimera.app/Contents/Resources/bin/chimera`. There are alternative locations in the same Chimera.app directory that might not work!

**Windows**

Double-click on the `.exe` and follow the steps. Make sure to add Miniconda to your `%PATH%` when you are asked about it.

### 2. Install UCSF Chimera

If you haven’t already, install UCSF Chimera. If you already have it installed, skip to step 3, but make sure you satisfy the requirements detailed in step 4. PyChimera has been tested on UCSF Chimera 1.10 and above on Linux, Mac OS X and Windows. See *Multiplatform compatibility* for more details.

Go to the UCSF Chimera download page and get the latest installer for your platform (1.12 at the time of writing). You probably want the 64-bit version. Once you have it on disk, run it. Depending on the platform, this is performed differently:

**Linux**

You need to run these commands in a terminal:

```bash
cd ~/Downloads   # or wherever the installer is in your system
chmod +x chimera*.bin
./chimera*.bin
```

At the final step, you will be asked to create a symlink in one of the places specified in your `$PATH`. Choose the one that refers to the Miniconda installation.

**Windows**

Just double-click on the `.exe` and follow the steps of the installation wizard.

### 3.
Install PyChimera

With `conda` and UCSF Chimera in your system, this step is really easy!

```bash
conda install -c insilichem pychimera
```

If you installed Miniconda with Python 3.6, or want to use a separate environment, use these instead:

```bash
conda create -n pychimera -c insilichem python=2.7 pychimera
source activate pychimera  # Linux and Mac OS X
activate pychimera         # Windows only
```

### 4. Make sure chimera can be found

If you followed all the steps, you should be able to run `pychimera --path` and obtain the UCSF Chimera installation directory as a result. However, if you used a different installation path and did not symlink the binary to somewhere in your `$PATH`, `pychimera` won’t be able to locate it. There’s a workaround for this! Just set an environment variable called `CHIMERADIR` pointing to the UCSF Chimera installation directory. In Linux and Mac OS X, this can be done in your `.bashrc` or equivalent:

```bash
export CHIMERADIR="~/.local/UCSF-Chimera"  # point to the actual location in your system
```

In Windows, you have to search “Environment variables” in the Start menu and create a new user environment variable in the popup dialog. Remember, the variable name is `CHIMERADIR` and the value should be something like `C:\Program Files\Chimera 1.12`.

### 2.2.2 Alternative procedure

If you don’t want (or can’t) use `conda`, you can also install PyChimera with `pip`:

```bash
pip install pychimera
```

or directly from source:

```bash
# With git
git clone https://github.com/insilichem/pychimera.git
cd pychimera
python setup.py install

# With wget
wget https://github.com/insilichem/pychimera/archive/master.zip
unzip pychimera*.zip
cd pychimera-master
python setup.py install
```

While this should work in an ideal environment, it will probably have some rough edges due to the libraries installed in your system being different from the ones provided by UCSF Chimera. The pychimera conda package has been fine-tuned to work with the correct versions, so use it if possible.
Otherwise, refer to the conda recipe to identify the correct versions.

### 2.2.3 Extra packages

So far, you have a barebones pychimera installation. If you want to make use of all the Jupyter compatibility features, you will need to install some extra packages. Namely:

- IPython support: `ipython` (in Windows, `qtconsole` is also required).
- Notebook support: `jupyter`, `notebook`.
- Interactive molecule depiction: `nglview`.

This is easily installed with `conda`:

```bash
## First, activate your environment if necessary
# source activate pychimera
## In Windows, it would be:
# activate pychimera

conda install ipython jupyter notebook

## In Windows, you will also need:
conda install qtconsole

## For interactive visualization in the notebook:
conda install -c bioconda nglview
## might need:
# jupyter-nbextension enable nglview --py --sys-prefix
```

## 2.3 FAQ & Known issues

### 2.3.1 Numpy problems

UCSF Chimera bundles its own distribution of some popular packages, like numpy, and those are loaded before your environment’s packages for compatibility reasons. Be warned if you use specific versions for your project, because you can face strange bugs if you don’t take this into account. On some platforms (Linux), this can be worked around with some work on the precedence of paths in `sys.path`, but on others (OS X) it is not as easy. The easiest and most robust way to fix this is by upgrading UCSF Chimera’s numpy:

```
pip install --upgrade numpy -t "$(pychimera --path)/lib/python2.7/site-packages"
```

Take into account that this action will prevent outdated extensions from working again. As far as we know, these are affected, but there might be more (please report them so we can update this list!):

- MMTK-dependent extensions: MD, Energy minimization
- AutoDock-dependent extensions: AutoDock Vina

If you use those often enough, you should keep a separate, unmodified UCSF Chimera installation.
### 2.3.2 PyChimera GUI has ugly fonts

The Anaconda-provided `tk` package is not built with truetype support (for several reasons; read here and here). Chimera does ship its own build (with correct font support), but since PyChimera loads its own Python interpreter, it ends up being replaced with conda's one. All conda environments are created with `tk` by default, so if the fonts really bother you, you can uninstall it with `conda remove --force tk` and PyChimera will fall back to the system's one if needed.

### 2.3.3 Setuptools problems

If you are using a development version of pychimera, Chimera's setuptools will complain about the versioning scheme (i.e. `pychimera==0.1.11+6.gc2e1fbb.dirty`). As before, the fix is to upgrade the package. You might have to remove it manually beforehand, though:

```
rm -r $(pychimera --path)/lib/python2.7/site-packages/setuptools-3.1*
pip install --upgrade setuptools -t "$(pychimera --path)/lib/python2.7/site-packages"
```

### 2.3.4 Chimera reports problems with libgfxinfo.so and pcrecpp

The error traceback ends with:

```
libgfxinfo.so: undefined symbol: _ZN7pcrecpp2RE4InitERKSsPKNS_10RE_OptionsE
```

This is due to an incompatibility between Chimera's `pcre` libraries and those loaded by PyChimera. Depending on how you installed PyChimera, these will be:

- Installed with conda (or with pip but inside a conda environment): the libraries will correspond to the `pcre` package in the conda environment. To make sure it works, you would probably have to downgrade to version 8.39 with `conda install pcre=8.39`.
- Installed with pip (outside a conda environment): the loaded library will be the system's one. If you can afford to downgrade to version 8.39 system-wide, do it. You probably cannot, so the best option is to create a conda environment to execute PyChimera properly: `conda create -n pychimera -c insilichem pychimera`.

## 2.4 For developers

### 2.4.1 Quick API

PyChimera provides access to UCSF Chimera's modules from any Python 2.x interpreter.
This is achieved in two steps:

1. `patch_environ()` patches environment variables with proper paths (packages and libraries). Since the original `sys.path` is exported to `PYTHONPATH`, you can use all your virtualenv/conda packages with Chimera. This call restarts Python to inject a new `os.environ` with `os.execve`.
2. `enable_chimera()` initializes Chimera. This is done through Chimera's own routines (`chimeraInit`).

As a result, if you want to use PyChimera in your developments, you only need to execute these lines at the beginning of the script. For example, PyChimera is used programmatically in the GaudiMM CLI entry point.

```python
import pychimera
pychimera.patch_environ()
pychimera.enable_chimera()
```

Calling `patch_environ()` will result in the interpreter being restarted to inject all UCSF Chimera libraries; take that into account in the logic of your program. This is why you should probably add the lines at the very beginning of the script. Alternatively, you can leave those lines out and have your users execute the script with `pychimera` instead of `python`. Up to you, but usually you will prefer to hide the technical details...

### 2.4.2 Alternative methods

PyChimera also offers its interface through `python -m`. This has not been thoroughly tested, so it may not work perfectly. Add `-i` for interactive mode:

```
python -[i]m pychimera [-m another_module | -c "string" | script.py | ipython | ... | notebook]
```

You can also try to launch it from IPython, but, again, some things may not work. If you want to run a script with IPython and then inspect the results (`-i` flag), your best bet is to run `pychimera ipython` and then call `%run path/to/file.py` inside the interpreter.

### 2.4.3 How does it work?

When you run `patch_environ`, we try to locate a valid UCSF Chimera installation in the system. This is performed with three alternative strategies:

1. Check if a `CHIMERADIR` variable is set.
This is normally set by the user when the automated strategies can't work due to the system configuration. If the path is valid, it is used as the UCSF Chimera installation directory. Else, try strategy #2.

2. Check if an executable called `chimera` is somewhere in `PATH`. This is done with `distutils.spawn.find_executable`. If successful, figure out the UCSF Chimera installation directory from the file path after resolving any possible symlinks.
3. If `chimera` is not in `PATH`, try to find the installation directory in the default locations (`~/.local` or `/opt` for Linux, `/Applications` for Mac OS X, `C:\Program Files` for Windows).

Once we have located a valid UCSF Chimera, we find the needed libraries and Python modules to patch `LD_LIBRARY_PATH`, `PYTHONPATH` and other environment variables, as specified in Chimera's own shell launcher (Linux/OS X) and cpp launcher (Windows). In this step, any additional packages and libraries installed in a conda environment or virtualenv are also injected. For all this to work, the interpreter is restarted.

After the restart, `enable_chimera` is called, which runs the UCSF Chimera initialization routines contained in `chimeraInit.py`. Depending on the CLI options, we then run a script, run IPython/Notebook or start the GUI.

CHAPTER 4

Acknowledgments

Largely based on ideas by Greg Couch at chimera-users.

PyChimera is scientific software, funded by public research grants (Spanish MINECO's project CTQ2014-54071-P, Generalitat de Catalunya's project 2014SGR989 and research grant 2017FI_B2_00168, COST Action CM1306). If you make use of PyChimera in scientific publications, please cite it. It will help measure the impact of our research and secure future funding!
```latex @article{pychimera2018, author = {Rodríguez-Guerra Pedregal, Jaime and Maréchal, Jean-Didier}, title = {PyChimera: use UCSF Chimera modules in any Python 2.7 project}, journal = {Bioinformatics}, volume = {34}, number = {10}, pages = {1784-1785}, year = {2018}, doi = {10.1093/bioinformatics/bty021}, URL = {http://dx.doi.org/10.1093/bioinformatics/bty021}, eprint = {http://oup.backfile/content_public/journal/bioinformatics/34/10/10.1093_bioinformatics_bty021/1/bty021.pdf} } ```
Generating high-performance arithmetic operators for FPGAs

Florent de Dinechin, Cristian Klein, Bogdan Pasca

To cite this version: Florent de Dinechin, Cristian Klein, Bogdan Pasca. Generating high-performance arithmetic operators for FPGAs. 2008. ensl-00321209

HAL Id: ensl-00321209
https://ens-lyon.hal.science/ensl-00321209
Preprint submitted on 12 Sep 2008

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Generating high-performance arithmetic operators for FPGAs

Florent de Dinechin, Cristian Klein and Bogdan Pasca*
LIP (CNRS/INRIA/ENS-Lyon/UCBL), Université de Lyon
École Normale Supérieure de Lyon
46 allée d'Italie, 69364 Lyon cedex

Abstract

This article addresses the development of complex, heavily parameterized and flexible operators to be used in FPGA-based floating-point accelerators. Languages such as VHDL or Verilog are not ideally suited for this task. The main problem is the automation of tasks such as parameter-directed or target-directed architectural optimization, pipeline optimization, and generation of relevant test benches. This article introduces FloPoCo, an open object-oriented software framework designed to address these issues. Written in C++, it inputs operator specifications, a target FPGA and an objective frequency, and outputs synthesisable VHDL fine-tuned for this FPGA at this frequency. Its design choices are discussed and validated on various operators.
## 1 Arithmetic operator design

### 1.1 Floating-point and FPGAs

FPGA-based coprocessors are available from a variety of vendors, and it is natural to try and use them for accelerating floating-point (FP) applications. On floating-point matrix multiplication, their floating-point performance slightly surpasses that of a contemporary processor [6], using tens of operators on the FPGA to compensate for their much slower frequency (almost one order of magnitude). However, FPGAs are no match for GPUs here. For other FP operations that are performed in software in a processor (for instance all the elementary functions such as exp, log, trigonometric...) there is much more speed-up potential: one may design a dedicated pipelined architecture on an FPGA that outperforms the corresponding processor code by one order of magnitude while consuming a fraction of the FPGA resources [4]. Implementing the same architecture in a processor would be wasted silicon, since even the logarithm is a relatively rare function in typical processor workloads. For the same reason, GPUs have hardware acceleration for a limited set of functions and in single precision only. In an FPGA, you pay the price of this architecture only if your application needs it.

Besides, operators can also be specialized in FPGAs. For example, a squarer theoretically requires half the logic of a multiplier. A floating-point multiplication by the constant 2.0 boils down to adding one to the exponent (a 12-bit addition in double precision), and shouldn't use a full-blown FP multiplier as it does in a processor. Actually it is possible to build an optimized architecture for any multiplication by a constant [2].

*This work was partly supported by the XtremeData university programme, the ANR EVAFlo project and the Egide Brâncuşi programme 14914RL.

Finally, operators can be fused on an FPGA: for example, the Euclidean norm \( \sqrt{x^2 + y^2} \) can be implemented more efficiently than by linking
two squarers, one adder and one square root operator. There are many more opportunities for floating-point on FPGAs [3]. The object of the FloPoCo project¹ is to study and develop such FPGA-specific Floating-Point Cores.

### 1.2 From libraries to generators

FloPoCo is not a library but a generator of operators. Indeed, it is the successor to FPLibrary, a library of operators written in VHDL. Many parts of FPLibrary were actually generated by as many ad-hoc programs, and FloPoCo started as an attempt to bring all these programs into a unified framework.

A first reason is that it is not possible, for instance, to write by hand, directly in VHDL or Verilog, an optimized multiplier by a constant for each of an infinite number of constants. However, this task is easy to automate in a program that inputs the constant.

Another reason is the need for flexibility. Whether the best operator is a slow and small one or a faster but larger one depends on the context. FPGAs also allow flexibility in precision: arithmetic cores are parameterized by the bit-widths of their inputs and outputs. Flexibility also makes it possible to optimize for different hardware targets, with different LUT structure, memory and DSP features, etc. Thus, the more flexible an operator, the more future-proof it is.

Finally, for complex operators such as elementary function evaluators, the optimal design is the result of a costly design-space exploration, which is best performed by a computer.

VHDL and Verilog are good for describing a library of operators optimized for a given context, but the more flexibility and the more design-space exploration one wants, the more difficult it gets. It is natural to write operator generators instead. A generator inputs user specifications, performs any relevant architectural exploration and construction (sometimes down to pre-placement), and outputs the architecture in a synthesizable format.
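The constant-multiplier argument above can be made concrete with a toy generator. The sketch below (Python for brevity; FloPoCo itself is C++, and real generators use smarter recodings [2]) turns a constant into a shift-and-add plan, the kind of per-constant structure no hand-written VHDL library could enumerate:

```python
def shift_add_plan(constant):
    """Return the list of shift amounts such that
    x * constant == sum(x << s for s in plan).
    One adder input per set bit of the constant; real generators use
    recodings such as CSD to reduce the adder count."""
    return [s for s in range(constant.bit_length()) if (constant >> s) & 1]

def apply_plan(x, plan):
    """Software reference evaluation of the generated shift-and-add structure."""
    return sum(x << s for s in plan)
```

A VHDL-emitting generator would then print one adder tree per constant, e.g. `shift_add_plan(26353589)` for the 24-bit constant used later in section 3.3.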
To our knowledge, this approach was pioneered by Xilinx with their core generator tool². An architecture generator needs a back-end to actually implement the resulting circuit. The most elegant solution is to write an operator generator as an overlay on a software-based HDL such as SystemC, JBits, HandelC or JHDL (among many others). The advantages are a preexisting abstraction of a circuit, and simple integration with a one-step compilation process. The inconvenience is that most of these languages are still relatively little-known and restricted in the FPGAs they support. Basing FloPoCo on a vendor generator would be an option, but would mean restricting it to one FPGA family.

FloPoCo therefore took a less elegant, but more universal route. The generator is written in a mainstream programming language (we chose C++), and it outputs operators in a mainstream HDL (we chose standard synthesizable VHDL). Thus, the FloPoCo generator is portable, and the generated operators can be integrated into most projects, simulated using mainstream simulators, and synthesized for any FPGA using the vendor back-end tools. Section 2.2 will show how they can nevertheless be optimized for a given FPGA target. The inconvenience of this approach is that we had to develop a framework, instead of reusing one. Section 2 describes this framework and the way it evolved in a very practical and bottom-up way.

### 1.3 The arithmetic context

It is important to understand that this framework was developed only with arithmetic operators in view. An arithmetic operator is the implementation of a mathematical function, and this underlying mathematical nature is exploited pervasively in FloPoCo. For instance, an operator may be combinational or pipelined, but will usually involve no feedback loop or state machine (the only current exception is an accumulator).
With this restriction, we are able to implement a simple, efficient and automatic approach to pipelining (see section 3) and testbench generation (see section 4). As another example, when generating test benches, relevant test patterns may be defined by function analysis, and the expected output is defined as a mathematical function of the input, composed with a well-defined rounding function [1]. These are only a few examples. The design-space exploration for complex operators is based on automated error analysis [4], which is also specific to the arithmetic context.

¹ www.ens-lyon.fr/LIP/Arenaire/Ware/FloPoCo/
² We would welcome any feedback on early architecture generators.

FloPoCo is not only a generator framework, it is also a generator of arithmetic cores using this framework. It currently offers about 20 operators, from simple ones such as shifters or integer adders to very complex ones such as floating-point exp and log. This article is not about these operators, but will be illustrated by actual examples of already implemented operators. FloPoCo is distributed under the LGPL, and interested readers are welcome to try it, use it and improve it.

## 2 The FloPoCo framework

The FloPoCo generator inputs (currently on the command line) a list of operator specifications, internally builds a list of Operator objects (some of which may be sub-components of the specified operators), then outputs the corresponding VHDL.

### 2.1 Operators

The core class of FloPoCo is `Operator`. From the circuit point of view, an `Operator` corresponds to a VHDL entity, but again, with restrictions and extensions specific to the arithmetic context. All the operators of FloPoCo extend this class. The main method of an `Operator` is `outputVHDL()`, which outputs the VHDL code of an operator. To implement this virtual method for an operator, one may simply embed some existing VHDL code in the C++ code, which makes the VHDL code simpler and easier to debug.
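To illustrate the pattern (sketched in Python for brevity; the real framework is C++), a toy `Operator` analogue whose constructor records attributes and whose VHDL-printing method consumes them might look like this. All names are illustrative, not FloPoCo's actual API:

```python
class Operator:
    """Toy analogue of FloPoCo's Operator class: the constructor records
    ports (and would do design-space exploration), and output_vhdl()
    emits the entity declaration from those attributes."""
    def __init__(self, name):
        self.name = name
        self.ports = []  # (name, direction, width) tuples

    def add_port(self, pname, direction, width):
        self.ports.append((pname, direction, width))

    def output_vhdl(self):
        decls = ";\n".join(
            "    {} : {} std_logic_vector({} downto 0)".format(p, d, w - 1)
            for p, d, w in self.ports)
        return "entity {} is\n  port (\n{}\n  );\nend entity;".format(
            self.name, decls)

class IntAdder(Operator):
    """A hypothetical width-parameterized adder entity."""
    def __init__(self, width):
        Operator.__init__(self, "IntAdder_{}".format(width))
        self.add_port("X", "in", width)
        self.add_port("Y", "in", width)
        self.add_port("R", "out", width)
```

Calling `IntAdder(32).output_vhdl()` prints a 32-bit entity; the same constructor serves any width, which is the point of a generator over a fixed library.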
In addition, the `Operator` class provides many helper methods which relieve the designer of repetitive or error-prone work, for example entity and component declaration, signal declaration, signal registering, etc. In short, `Operator` provides black-box functionality for known VHDL recipes, but otherwise requires manual output of VHDL code. This approach allowed us to quickly backport existing generators. More importantly, we may tinker with the framework without having to rework existing cores.

Design-space exploration, if any, is done in the operator constructor. The input specification (input and output widths, etc.) and the deployment and performance constraints (e.g. VirtexIV, 300MHz) are analysed, and operator attributes are set to be used later by `outputVHDL()`. For instance, the constructor of an integer constant multiplier internally builds and pipelines a directed acyclic graph (see Figure 1) with several labels on each node [2]. `Operator` also defines other virtual methods for the purpose of pipelining and testbench generation. These will be considered in due course.

### 2.2 Targets

The `Target` class abstracts the features of actual FPGA chips. Classes representing real FPGA chips extend this class (we currently have classes for two very different FPGAs, Xilinx VirtexIV and Altera StratixII). The idea is to declare abstract methods in `Target`, which are implemented in its subclasses, so that the same generator code fits all the targets[4]. For this purpose, a `Target` is given as an argument to the constructor of an operator – it also receives an objective frequency; this will be detailed in section 3.

The methods provided by the `Target` class can be semantically split into two categories:

- **Architecture-related methods** provide information about the architecture of the FPGA and are used in architectural exploration. For instance, `lutInputs()` returns the number of inputs of the FPGA's LUTs.
- **Delay-related methods** provide approximate information about the delays of signals traveling through the FPGA. For example, `adderDelay(int n)` returns the delay of an n-bit addition. These methods will be used for automatic pipelining, see section 3.

Some of these methods have an architecture-related dual, for example `suggestAdderSize(double delay)`, which returns the size of an adder that will have the required delay.

[3] There is a `Signal` class, but it currently has no real signal semantics (it doesn't check for pending signals, short circuits, etc.); it just makes writing `outputVHDL()` easier, and we will not detail it any further.
[4] Of course, it is also possible to have a big if that runs completely different code depending on the target.

The difficulty here is to find the right abstraction level for `Target`. On one hand, we do not hope to provide an exhaustive and detailed description of all the existing – and future – FPGAs. On the other hand, we do not need to: vendor tools are very good at fitting a design to a given target, and we should rely on them. The complexity of exploiting the details of the target should be left to the back-end tools.

To understand how we approach this issue in FloPoCo, consider the example of integer multiplication support in FPGAs. Early FPGAs were logic-only, then came hard multipliers embedded in the FPGA fabric, then these multipliers were promoted to DSP blocks with the addition of accumulators. Current DSP blocks are very complex and differ widely from FPGA to FPGA. Some contain 18x18 multipliers, some 18x24, some 36x36 which can be split into several 18x18 or 9x9 multipliers, with subtle restrictions. All contain internal register levels, and some an accumulator. What is the best way to abstract this complexity and variety, in a way both concise and generally useful?
The current – probably not definitive – answer is a method called `suggestSubmultSize()` which takes the objective frequency as an argument and returns the input sizes of the largest (possibly rectangular) sub-multiplier that, when written as a `*` in VHDL, runs at the objective frequency on this target. This abstract method will be implemented very differently in subclasses of `Target` that describe actual FPGAs, depending on the availability or not of DSP blocks and on the capabilities of these blocks. With this simple interface, we are able to generate large floating-point multipliers (built by assembling several DSP blocks) whose performance and resource consumption almost match (and sometimes even surpass) the vendor-generated ones, while being more flexible.

For the design-space exploration of future operators, we will need other methods, for example a method that inputs a multiplier size and returns an estimation of its area. And of course, the question of target abstraction will remain an open one forever, as new FPGA features keep appearing and new operators present new problems.

## 3 Automatic pipelining

Pipelining a given arithmetic operator for a given context is relatively easy, if tedious. What is important is to evaluate the number of pipeline levels and their approximate location, but the details are best left to the back-end tools, which will actually place the registers after logic optimization as part of the technology mapping step. Recent tools apply, to various extents, retiming techniques (moving registers around to improve the critical path without changing the functionality of the circuit) [5]. This is also best done after technology mapping.

### 3.1 Frequency-directed pipelining

Currently, FloPoCo implements frequency-directed pipelining using variations on the following generic algorithm, which is simple (greedy) and linear in the size of the operator.
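A hedged Python sketch of such a greedy accumulate-and-cut pass (the function name is made up, and delay estimations would come from the `Target` class; FloPoCo implements this per operator, in C++):

```python
def greedy_pipeline(component_delays, objective_frequency_hz):
    """Greedy register insertion over a chain of combinational delays
    (in seconds): accumulate the critical path and insert a register
    level whenever adding the next component would exceed the target
    period 1/f.  Returns (pipeline_depth, register positions).
    Illustrative sketch only, not FloPoCo's actual code."""
    period = 1.0 / objective_frequency_hz
    depth, acc, cuts = 0, 0.0, []
    for i, delay in enumerate(component_delays):
        if acc > 0 and acc + delay > period:
            cuts.append(i)   # register inserted before component i
            depth += 1
            acc = 0.0
        acc += delay
    return depth, cuts
```

By construction, raising the objective frequency can only add register levels, which matches the monotonicity property discussed below.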
The constructor, working from the output to the input (or the other way round, depending on what is expected to give the best results), accumulates estimations of the critical path delay (provided by the `Target` class), and inserts register levels when the current critical path exceeds the reciprocal of the objective frequency. When this process is finished, the constructor sets the `pipelineDepth` attribute of the `Operator` class. When an operator instantiates a sub-component (e.g. `FPMultiplier` instantiates `IntMultiplier`), it may obtain its `pipelineDepth`, which allows it to delay signals accordingly – this works recursively.

FloPoCo does not provide any special framework for the computation of critical paths or the distribution of registers: an ad-hoc program needs to be written by hand for each operator to implement the previous generic algorithm. This is about as tedious as pipelining VHDL code, but needs to be done only once. In addition, FloPoCo has several facilities for inserting multiple delays on signals, which help prevent bugs due to lack of synchronization. The testbench generation framework also takes the pipeline depth of the unit under test into account automatically.

### 3.2 Discussion

This part of the framework is still being improved. It is not clear yet whether we want it to eventually end up as a generic retiming engine using a generic abstraction of an arithmetic circuit. Firstly, it would be a tremendous amount of work to get it right, and we choose to focus our efforts on operator development. Secondly, it is not obvious that it would be useful. The ad-hoc, per-operator approach is more flexible and more powerful in the design exploration phase, for instance. Critical path delay estimations are necessarily inaccurate: actual values can only be obtained after placement and routing of the complete application in which the operator takes place, something which is out of the scope of FloPoCo.
No guarantee is therefore given that the operator will actually function at the objective frequency. What is easy to ensure, however, is that when the objective frequency is raised, the number of pipeline stages increases and the critical path delay does not increase.

The real question is, do we need to actually place registers, which is the bulk of the work? If all the back-ends implemented retiming [5] efficiently (which is not the case yet), we would just have to evaluate the pipeline depth as a function of the frequency (a comparatively simple task) and place the corresponding number of register levels at the output of the operator, leaving to the tools the task of pushing them in. Still, retiming is a costly iterative process and, considering its local nature, it should be sped up by a good initial placement. We have little practical experience on this subject and would appreciate feedback.

### 3.3 A detailed example

The command

```
flopoco -frequency=200 -target=VirtexIV IntConstMult 24 26353589
```

builds a multiplier of a 24-bit integer by the constant 26353589 (the first 24 bits of \( \pi \)), with objective frequency 200 MHz, for a VirtexIV. Figure 1 shows the obtained architecture. Note that some adders are much larger than the others. With `-frequency=100`, only the two last levels of registers (marked with a *) are generated: the lower levels are grouped in a single pipeline stage.

Table 1 provides some synthesis results for Xilinx VirtexIV using ISE 9.1, for three values of the objective frequency. In this example, the reported frequency is much better than the specified one.
This is because the automatic pipeline program for this operator works at the adder level, not at the bit level (splitting the adders into sub-adders).

Figure 1: Multiplier by 26353589 pipelined for 200MHz.

| -frequency | latency | frequency | area (slices) |
|------------|---------|-----------|---------------|
| 100        | 2       | 146 MHz   | 176           |
| 200        | 4       | 264 MHz   | 199           |
| 300        | 5       | 373 MHz   | 218           |

Table 1: Synthesis results for pipelined operators.

The current framework would allow for bit-level pipelining for these operators, but it would be much more complex and we currently feel this effort is not justified. We however pipeline adders when they are too large to reach the objective frequency, because we can use for that the existing pipelined `IntAdder` operator.

## 4 Test case generation

In order to make sure that no bugs leak into the VHDL code generated for each arithmetic operator, FloPoCo was also designed to automate test case generation. Due to the large number of parameters that can be customised, writing test benches by hand for each operator is not an option. Test benches are pieces of VHDL code (possibly with a few associated data files) which run in a VHDL simulator, give certain inputs (test cases) to the arithmetic operators and test the correctness of the outputs. Test cases are operator-specific and are generated by doing the equivalent computation in software. We deliberately avoid duplicating the hardware algorithms in software in order to avoid introducing the same bugs. Instead, we rely on well-tested libraries such as GMP and MPFR.

Small operators can be exhaustively tested, but it becomes impossible to exhaustively test larger ones, in particular double-precision ones. Instead, our strategy consists in maximising the number of data paths and signal combinations that are tested.
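One way to maximise useful coverage, sketched here in Python (illustrative only; FloPoCo's generators are C++ and use GMP/MPFR for expected outputs), is to bias random exponents so that the interesting datapaths are exercised, e.g. drawing floating-point adder operands whose exponents are close:

```python
import random

def close_exponent_pair(rng, max_delta=3, exp_min=-1022, exp_max=1023, frac_bits=52):
    """Draw two (exponent, fraction) pairs for a double-like format whose
    exponents differ by at most max_delta, so an FP adder test exercises
    significand alignment and rounding rather than the trivial
    'return the larger operand' path.  Names and parameters are
    illustrative, not FloPoCo's API."""
    e1 = rng.randint(exp_min, exp_max)
    e2 = min(max(e1 + rng.randint(-max_delta, max_delta), exp_min), exp_max)
    return (e1, rng.getrandbits(frac_bits)), (e2, rng.getrandbits(frac_bits))
```

Uniform 64-bit random inputs, by contrast, would mostly hit the large-exponent-difference path, as discussed next.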
A FloPoCo test bench basically generates two types of test cases. The first is random test cases. The `Operator` class is able to generate uniform random inputs, but this should in most cases be overridden in an operator-dependent way that focuses the test on the interesting domains of input. For instance, a double-precision exponential returns $+\infty$ for all inputs larger than 1024 and returns 0 for all inputs smaller than $-1024$. In other terms, the most interesting test domain for this function is when the input exponent is between $-10$ and $10$, a tiny fraction of the full double-precision exponent domain ($-1024$ to $1023$). Generating random 64-bit integers and using them as floating-point inputs would mean testing mostly the overflow/underflow logic, which is a tiny part of the operator. Similarly, in a floating-point adder, if the difference between the exponents of the two operands is large, the adder will simply return the biggest of the two, and this is the most probable situation when taking two random operands. Here it is better to generate random cases where the two operands have close exponents.

In addition to random test cases, there are also mandatory test cases which test specific situations that a random test would have little chance to hit. Again, these test cases are operator-specific.

## 5 Conclusion

This article introduced FloPoCo, an open-source software framework for generating high-quality, highly parameterized, pipelined and flexible operators for FPGAs. This framework evolved (and still evolves) in a deliberately bottom-up and practical way focussed on the needs of arithmetic cores. It uses a low-level, highly flexible `printf`-based approach to VHDL generation. In its current state, FloPoCo is already an invaluable booster for the core focus of our research, which is to develop new arithmetic operators.
In the future, a wider range of design objectives may be added: optimizing for power, for a given pipeline depth, for memory, etc. In addition, we will refine and extend the FPGA model as new needs appear, for instance to model the internal memory resources. FloPoCo's automatic pipeline framework can in principle be used to build larger computation pipelines composed of many operators, automatically delaying signals to match the pipelines of the various operators, even when the designer changes the precision of some operators to optimize resource usage. We also intend to explore this possibility.
Incremental Learning in Real Time Data Processing

Knowles Atchison, Jr.

Spring 2012

Johns Hopkins University

Design and Analysis of Algorithms Research Paper

May 7, 2012

Abstract

In today's oversaturated information environment, it has become increasingly necessary to process data and arrive at decisions immediately, without the data being at rest or retrieved from secondary storage. In the brief moment new information is available, it must be assimilated, its worth ascertained, and the result applied to the decision-making capabilities of the software in question. This is a paradigm shift from traditional batch processing and learning algorithms. With continuous data flow, the challenge is how to transform the vast amount of raw stream data into information and knowledge representation and accumulate experience over time to support future decision-making [Haibo et al. 2011]. This paper investigates the concept of learning in a real time data environment and compares and contrasts contemporary solutions proposed for the problem domain.

1 Introduction

Incremental learning has two base scenarios: either the processing node has training data and/or prior knowledge of the desired solution space, or it starts tabula rasa. This slight distinction plays a large role in the choice of algorithm and ultimately the function of the software. Regardless, once the data begins to flow, processing follows the same basic outline:

- Data is received
- Data is added to the learning model/framework
- Usable information is ascertained and assimilated into the model
- Some action is taken based on the prior step
- Repeat

Algorithmic differences can vary from underlying data structures to decision behavior. Regardless of the underlying framework, an incremental learning algorithm must meet the following requirements [Zhao et al.
2010]:

- It should learn from new data
- It should not require access to the original data used for training (if applicable)
- It should retain previously acquired knowledge

1.1 The Case for Frameworks

The vast majority of published work on incremental learning focuses less on the algorithm at hand and more on the framework in which that algorithm exists. This is especially true in real time processing situations, where the flow of data, the ordering of steps, and data outflow are generally independent of the inner workings of the processing element itself. However, the learning portion is not necessarily plug and play, and thus there is a strong correlation between the framework and the learning algorithm/processing that takes place within it.

1.2 Contributions

The goal of this paper is to investigate and analyze solutions that attempt to make real time data processing systems "smarter" via incremental learning. Commercially available platforms and frameworks are also discussed to shed light on current approaches to this problem area. In the following sections, the paper discusses modern problems in which incremental learning is deployed and the effects therein. Lastly, some future areas of research are introduced.

2 Text Document Classification

A problem instance of interest is text document classification. Text mining is becoming increasingly important to industry as it provides a means to discover patterns and trends in natural language and analyzes that information for use in application processing [Zhihang et al. 2007]. It sits in the middle ground of incremental learning problems since it can operate both with and without training data. This presents several viable options from which to attack the problem; most research examples use a variation on neural networks as their modeling basis.

2.1 Text Classification Training

One option is to use a set of training data to initialize the system and set it up for future processing.
Applications may have a large amount of such data available, and new training data often becomes available over time [Zhihang et al. 2007]. The Incremental Learning of Text Classification (ILTC) framework proposed to address this problem functions in a slightly different way than traditional online learning. In a normal online learning situation, new data allows the system to adjust its hypothesis and act accordingly. ILTC operates by learning from the new data without reexamining the old training data, while retaining previously acquired knowledge. The figure below outlines the framework; the initial training set allows the system to begin operation under certain known conditions, which are then adapted as new information becomes available.

Algorithm 1 Neural Network

**Initialization**

1. A list of initial terms and phrases is given as the first training set: \( Tr = \{ t_0, \ldots, t_{m_0} \} \), where \( m_0 \) is the total number of terms.
2. Each term has an associated local and global weight.
3. The weights and terms form a feature vector that is used for classification.

**Incremental Learning**

1. Data (new documents) comes in and the weights of the current feature vector are adjusted: document \( d = \overline{q} = \{ q_0, \ldots, q_{m_0} \} \), where \( q_i \) is the frequency of the \( i \)-th term.
2. The old data is not reevaluated at any point in time; only the current assessment is adjusted.
3. The output of the neural network is a matrix \( M \in R^{m_0 \times k_0} \), where \( k_0 \) is the total number of text classes and \( m_0 \) is the total number of terms. Columns are feature vectors for a given text class, and rows are feature vectors of the \( i \)-th term within the term list.

**Application to Neural Network**

1. Each node in the network represents a text class.
2. When new information is available, the network must add nodes to account for this information.
3. Output may modify feature vectors.
\[ \bar{x} = \{x_0, \ldots, x_{m_0}\}, \quad x_i = q_i \times g_i, \]

where \( g_i \) is the global weight of the \( i^{\text{th}} \) term and \( q_i \) is its frequency.

4. Classification of the document is the main output. A secondary output is a new set of training information that is incorporated into the incremental learning in the same way the initial training set was assimilated.

Of particular importance is step two of the initialization portion of the above algorithm. A word may carry significant meaning within a phrase even though it appears less frequently than other terms in that phrase. In the example of technical manuals, a verb carries more contextual meaning even though it is found fewer times than other words, e.g. check memory leak, check and advise [Zhihang et al. 2007]. The incremental learning portion of the algorithm can handle nearly all cases where the incoming information is not "new", but rather a permutation of already known information. The neural network comes into play when previously unknown data is given as input, and in instances where data is not in the set of known items but falls within similar text classes/features. Below is a diagram of the aforementioned neural network:

![Diagram of Incremental Neural Network Model](image)

Figure 2: Incremental Neural Network Model [Zhihang et al. 2007]

The two major components of this approach are the incremental learning of text features (key terms and phrases) and the incremental neural network model. ILTC illustrates the ability to train a new system, acquire new knowledge, and assimilate it all without needing to refer to the original training data set or forget information seen previously.

**2.1.1 Training Result Analysis**

The data sample for the experiment is a collection of diagnostic automobile documents that contain phrases from customers such as "there was a noise heard in the engine" and "engine runs rough" [Zhihang et al. 2007].
As a precursor to the computational results of the analysis: when we read the prior two sentences, our brain picks up on the relationship of the word "engine" to the rest of the sentence. The significance is present in our minds via sentence structure, word quantity, and common experience. The most important of these is common experience, as we know that the engine is a crucial piece of the car, and thus more weight is given to it when discussing things wrong with our automobiles. On the other hand, sentences and descriptions are often grammatically incorrect and may be entirely inaccurate as to the root cause of the mechanical problem (but that is outside the domain scope of this algorithm).

Starting with data set one, D1, training classifier $F_1$ was generated. Then, for D2 through D4, classifiers $F_2$ through $F_4$ were generated. The nine categories of text documents were generated/classified via a process called "Learning From Text" [Huang et al. 2006]. It is as follows:

Algorithm 2 Learning From Text

**Initialization**

1. Assume classification categories $C_1, C_2, \ldots, C_n$ with training documents $T_1, T_2, \ldots, T_n$, where $T_i$ contains the documents belonging to category $i$, $i = 1, \ldots, n$.
2. Determine a frequency threshold.

**Document Indexing**

1. Generate a term list from the terms in $T_1, T_2, \ldots, T_n$ as $T_r = T_1 \cup T_2 \cup \ldots \cup T_n$.
2. Filter terms:
   a. Remove stop words that will not be useful in classification (to, from, etc.).
   b. Merge morphological variants of a word into the same root (leak, leaks, leaking).
   c. Track word frequency and note terms that exceed or fall under the desired threshold.

**Matrix Generation**

Generate the Term-Category Weight (TCW) matrix [Huang et al.
2006]:

TCW is an $M \times N$ matrix, where:

- $M$ = number of terms in the term index
- $N$ = number of diagnostic categories

$$TCW_{ij} = tf_{ij} \cdot G_i,$$ the product of local and global weights, where:

- $tf_{ij}$ is the frequency of the $i$-th term in the term index occurring in $T_j$ for diagnostic category $j$
- $G_i = \log_2 \left[ \frac{ndocs}{df_i} \right] + 1$, where $ndocs$ is the number of documents in the entire training set and $df_i$ is the total number of documents in $T_i$ that contain term $i$

Each subsequent data group contains the same nine classes (recall that these are the categories data can fall into, represented by nodes in the neural network), but each data set has new terms for the algorithm to learn from. While the methodology for generating $C_1, C_2, \ldots, C_n$ is provided, the author never directly defines the categories or data sets. The data sets are arranged as follows:

<table> <thead> <tr> <th></th> <th>D1</th> <th>D2</th> <th>D3</th> <th>D4</th> </tr> </thead> <tbody> <tr> <td>Number of Samples</td> <td>600</td> <td>600</td> <td>600</td> <td>600</td> </tr> <tr> <td>Number of new terms</td> <td>346</td> <td>123</td> <td>59</td> <td>139</td> </tr> </tbody> </table>

Table 1: Data used in incremental experiments [Zhihang et al. 2007]

2.1.1.2 Initial Data Set

D1 is compared against Perceptron, a supervised learning algorithm and linear classifier: it makes predictions based on a linear predictor function that combines a set of weights with the feature vector describing a given input. In the context of this problem it is similar in that it uses the weights of words within a phrase (the feature vector) to determine how to classify the text.
Perceptron:

\[ f(x) = \begin{cases} 1 & \text{if } w \cdot x + b > 0 \\ 0 & \text{otherwise} \end{cases} \]

where:

- \( w \) is a vector of real-valued weights
- \( w \cdot x \) is the dot product (a weighted sum)
- \( b \) is an optional bias term

It is worth noting that the authors do not expound on the decision making process leading up to the choice of the Perceptron as the comparison tool for the initial training/classification of data. The results are as follows:

![Figure 3: Performance of ILTC on data set D1 with comparison to the standard Perceptron learning. [Zhihang et al. 2007]](image)

While it appears that this method is preferable, it is unfortunate that the classes C1 through C9 are not explicitly defined in the experiment, and thus the revelation that ILTC outperformed in all classes except C5 raises more questions than it answers. For example, what in the data of C5 allows a simple linear classifier to produce more correct output than a new framework specifically designed for this purpose? Can data be hand crafted to give better initial results for certain algorithms? How was correctness measured? Regardless, the takeaway from part one of the experiment is that a proper choice of framework and algorithm allows for a better initial result, which lays the groundwork for the remainder of the procedure.

2.1.1.3 Incremental Learning Results

"Three iterations of incremental learning were conducted on data sets D2, D3 and D4. During each iteration, the feature space is expanded, which is implemented subsequently in the expansion of the input nodes in the neural network system: 123 new input nodes were added during the incremental learning from D2, 59 new input nodes were added during the incremental learning from D3, and 139 new terms from D4" [Zhihang et al. 2007].
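For concreteness, the Perceptron baseline above can be sketched in a few lines of Python (the training loop and toy data here are invented for illustration; the paper does not give code):

```python
def perceptron_predict(w, x, b=0.0):
    """f(x) = 1 if w . x + b > 0 else 0, the linear classifier used
    as the comparison baseline (a minimal sketch)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def perceptron_train(samples, n_features, epochs=10, lr=1.0):
    # Classic perceptron update: nudge weights toward misclassified points.
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - perceptron_predict(w, x, b)
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Linearly separable toy data: class 1 when the first feature dominates.
data = [([2.0, 0.0], 1), ([0.0, 2.0], 0), ([3.0, 1.0], 1), ([1.0, 3.0], 0)]
w, b = perceptron_train(data, n_features=2)
assert all(perceptron_predict(w, x, b) == y for x, y in data)
```

On linearly separable data such as this, the update rule is guaranteed to converge; on the overlapping word-frequency features of real documents it need not, which is one reason a richer framework can outperform it.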
In this portion of the experiment, the competing algorithm is a stochastic weight update (SWU) scheme, a neural network that also implements incremental learning but does not retain previously acquired knowledge. The results are as follows:

![Figure 4: Performance Comparison of ILTC and SWU [Zhihang et al. 2007]](image)

Again the ILTC framework outperforms its counterpart; however, there are several issues with this portion of the experiment. First and foremost, SWU does not retain prior information. In this manner, comparing the two is akin to comparing apples and oranges. ILTC will be more correct, but SWU may outperform it computationally, since it is doing "less work". ILTC strikes the balance between new and old information learned, which is arguably one of the most important characteristics of an incremental learning system doing real time data processing.

2.2 Text Classification sans Training

At the other end of this spectrum is a setup that does not require an initial training set. This situation reflects a real world scenario where the initial training data may be cost prohibitive or otherwise unobtainable. One solution for the lack of labeled data is to develop algorithms that can learn from a small number of labeled examples augmented with a large number of unlabeled examples [Liu et al. 2011]. Since there is a lack of pre-labeled data, the key to bootstrapping into a useful system is clustering, specifically a highly modified version called the fuzzy partition clustering method (FPCM) [Liu et al. 2011]. FPCM divides the dataset via basic features of the initial sample data (i.e.
simple feature vectors of the document):

Given a document set \( X = \{x_1, x_2, \ldots, x_n\} \), sample \( x_i \) is a feature vector \( p(x_i) = (x_{i1}, x_{i2}, \ldots, x_{is}) \), where each component \( x_{ij} \) \( (1 \leq j \leq s) \) is a basic feature. Samples are divided into fuzzy subsets via some comparability measurement (a notable contrast to the Fuzzy C-Means method of achieving similar results). A sample illustration:

![Figure 5: Visualization of Fuzzy Clustering](image)

Based on the frequency and distribution of certain words within a sample data file, three clusters of "similar" documents have been partitioned without the need for initial training data. The sample data, in essence, becomes training data: the fuzzy clustering algorithm is substituted for pre-labeled, known training data. With the training aspect complete, the next key part of the system is the underlying model in which knowledge is stored. In this instance a naïve Bayesian model is used as the framework for incremental learning. In contrast to the neural network presented in section 2.1, the naïve Bayesian model applies the keep-it-simple principle. Rather than building up an overly complicated system with a set of predefined data, a simple classifier allows for a binary {0, 1} decision process similar to Perceptron. This moves the decision making capabilities from a relationship-oriented structure to one of mere probability.

Algorithm 3 Bayesian Classifier

**Initial Sample Data Clustering**

1. Select representative points $v_1, v_2, \ldots, v_c$ as initial centroids based on some criterion (the larger circles in Figure 5).
2. Determine fuzzy membership based on the distance from each element to the centroid.
3. Select samples close to the centroids as now-labeled samples/training sets $U_A$: $U_A = \{x_i \mid h_k < \omega\}$, where $\omega$ is the threshold; the remaining elements form set $U_B$.

**Build Bayesian Classifier**

1. Train with $U_A$.
2.
Classify unlabeled documents from set $U_B$:

\[ s \leftarrow 1, \text{ where } s \text{ is the size of } U_B \]
\[ x \leftarrow x_s, \quad x_s \in U_B \]
\[ \min \left\{ \sum_{j=1}^{c} L_j(x) \cdot P(c_j \mid x) \right\} \]

Here $x$ belongs to class $c_i$ but was classified in $c_j$; $P$ is the probability of $x$ being clustered into $c_i$, and $L_j$ is the loss of $x$ if moved to cluster $j$.

**Runtime Execution**

The classifier is determined by the posterior probability. It attempts to determine the mutual information relationship between known and new data, evaluating the feature vector (incoming data) against the classifier set (known data) [Liu et al. 2011]:

\[ X = \{x_1, x_2, \ldots, x_n\}, \quad C = \{c_1, c_2, \ldots, c_n\} \]
\[ \text{foreach } x_j \text{ in } X: \quad s[j] = QMEI(c_j, x_j) \]

Rank $s[\,]$ and export to the classifier. The probability density is

\[ p(x_i) = \sum_{i=1}^{n} Gauss(x_i - x_j, \sigma I) \]

and the mutual information relationship is

\[ QMEI(c, x_j) = \log \frac{\left(\sum_{i=1}^{n} \int p(c_i, x_j)^2 \, dx_j\right)\left(\sum_{i=1}^{n} \int p(c_i)^2 p(x_j)^2 \, dx_j\right)}{\left(\sum_{i=1}^{n} \int p(c_i, x_j)\, p(c_i)\, p(x_j) \, dx_j\right)^2} \]

More responsibility is placed on the initial labeling of sample data, as it drives the remainder of the algorithm during execution. If the "seed" is not done correctly, the rest is ultimately irrelevant.

2.2.1 Results Analysis

The data set for this experiment was a collection of 4,800 web pages, pre-categorized into six categories of 800 documents each. The Bayesian classifier is combined with FPCM and two other clustering methods for comparison. The results can be seen in the figure below:

Figure 6: The proposed comprehensive classifier is compared with ISODATA clustering and fuzzy C-average [Liu et al. 2011]

We can immediately see that the data sets used in the experiment are well defined and can easily be recreated. Additionally, the comparison of the proposed classifier is done in combination with other external clustering methodologies.
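To make the "mere probability" idea concrete, here is a minimal multinomial naive Bayes text classifier in Python with Laplace smoothing (a hypothetical simplification, not the paper's FPCM-seeded system). Note that `train()` can be called repeatedly on new documents without ever revisiting old ones, matching the incremental learning requirements listed in the introduction:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Minimal multinomial naive Bayes with Laplace smoothing."""

    def __init__(self):
        self.term_counts = defaultdict(Counter)   # class -> term counts
        self.doc_counts = Counter()               # class -> number of docs
        self.vocab = set()

    def train(self, label, terms):
        # Incremental: only counts are updated; old documents are not re-read.
        self.term_counts[label].update(terms)
        self.doc_counts[label] += 1
        self.vocab.update(terms)

    def classify(self, terms):
        best, best_lp = None, -math.inf
        total_docs = sum(self.doc_counts.values())
        for c, counts in self.term_counts.items():
            # log posterior is proportional to log prior + sum of log likelihoods
            lp = math.log(self.doc_counts[c] / total_docs)
            denom = sum(counts.values()) + len(self.vocab)   # Laplace smoothing
            for t in terms:
                lp += math.log((counts[t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = NaiveBayesText()
nb.train("auto", ["engine", "noise", "leak"])
nb.train("sports", ["score", "goal", "match"])
assert nb.classify(["engine", "rough", "noise"]) == "auto"
```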
As the above diagram illustrates, the new classifier clearly has an advantage in certain text categories. Interestingly enough, the category in which it is outperformed is military affairs, whose jargon is often populated with acronyms and shorthand that require prior knowledge to comprehend correctly. In this context, the lack of a predefined training set most likely worked against FPCM.

3 Stream Computing

Most tools in place for “big data” today exist as batch processing systems [Satzger et al. 2011]. MapReduce, Hadoop, et al. operate on the data as a final set for the given run, since the problem is divided out as sub-problems. While highly efficient, this fails to solve similar problems where data is constantly arriving over time. The fundamental difference between the two approaches can be illustrated with the following diagram:

![Diagram of Batch Processing vs. Stream Computing]

It comes down to data at rest versus data in motion. How does a system not only receive data, but provide meaningful analysis before said data is gone? Two solutions to this issue are the Elastic Stream Computing framework and IBM’s InfoSphere.

3.1 Correlation to Incremental Learning

While not all stream computing frameworks implement learning algorithms, stream computing as a concept is closely tied to machine learning in general. When data arrives over a period of time, eliminating the ability to process a single batch as one unit, stream computing combines the flexibility and power of cloud computing with the opportunity to implement incremental learning solutions within the framework.

### 3.2 Elastic Stream Computing

Written in Erlang, Elastic Stream Computing (ESC) is a stream computing engine designed for real-time data computation [Satzger et al. 2011].
It is similar to MapReduce in that it uses key/value pairs, but diverges in its design for distributed online processing of event streams with unknown or varying rates of data. The overall data structure of ESC is a directed acyclic graph (DAG). The DAG defines the data flow through the system, and vertices represent operations to be performed on the data at that given point in time. The ideal environment in which to set up ESC is the cloud. This provides the elasticity of the name, since the platform itself has zero control over data rates and should scale up and down when appropriate. The diagram below outlines the overall architecture of ESC. Note the similarities between this diagram and the neural network put in place to solve the text classification problem in section 2.1.

![Figure 8: ESC Platform Overview](image)

Figure 8: ESC Platform Overview [Satzger et al. 2011]

While the description of programmatic options at vertices is brief at best, the authors [Satzger et al. 2011] do leave the door open for interpretation. At these critical points within the framework, incremental learning algorithms would be of excellent use given the DAG nature of ESC.

### 3.3 InfoSphere

Every day, consumers and businesses generate data at a pace that would have seemed unprecedented just a few years ago. Each hour, retail mammoth Wal-Mart handles more than one million customer transactions, feeding databases estimated at over 2.5 petabytes in size [IBM 2011]. InfoSphere is a framework to provide real-time analytics on data that is in motion. Akin to ESC, it places an additional abstraction atop the streaming framework by allowing programmers to write in its Streams Processing Language (SPL). Aggregation of data sources coupled with the flexibility and power of functional programming paradigms is a welcoming environment for machine learning, incremental learning in particular.
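The DAG-of-operators idea common to both platforms can be sketched in a few lines. This is a toy of our own, not ESC's or InfoSphere's API: each vertex applies an operation to key/value events and forwards the results to downstream vertices, and a vertex may keep state between events, which is exactly where an incremental learner would sit.

```python
class Vertex:
    """A node in the processing DAG: applies fn to each (key, value)
    event and forwards whatever fn yields to downstream vertices."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def to(self, *vertices):
        self.downstream.extend(vertices)
        return self

    def emit(self, key, value):
        for out_key, out_value in self.fn(key, value):
            for v in self.downstream:
                v.emit(out_key, out_value)

def running_mean():
    # A stateful operator: maintains an incremental mean as events
    # arrive, never holding the whole stream -- data in motion rather
    # than data at rest.
    state = {"n": 0, "mean": 0.0}
    def fn(key, value):
        state["n"] += 1
        state["mean"] += (value - state["mean"]) / state["n"]
        yield key, state["mean"]
    return fn
```

Wiring `Vertex(running_mean()).to(sink)` gives a two-node graph in which the intermediate vertex updates its estimate on every event, illustrating why an algorithm that mirrors the container's topology is attractive.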
### 3.4 Practical Usage

In the two aforementioned platforms, the usage of the classification algorithms from section 2 would not necessarily be universally plug-and-play. For example, in IBM’s InfoSphere, we could have a process that does not need to be initially trained, while creating and retaining information seen previously. In this instance, the Bayesian classifier would be a better fit. Additionally, external constraints weigh heavily on design choices, such as the ability to create and maintain a set of initial training data with which to feed a system.

A framework’s architecture may also lend itself to one particular algorithm. In the instance of ESC, the network topology of the processing nodes is, in and of itself, a neural network. It would follow that the use of an algorithm that closely mimics the container’s format would be of great use and possibly easier to implement, considering how often economic factors weigh in on the decision-making process.

4 Future Research

The few scenarios we have touched upon in this paper have not even scratched the surface of what is out there, both in academia and the workplace. As readily evidenced in the discussion of text classification in section 2, the permutations for a given problem domain are numerous: training data, underlying data structure, and so on. The concept of incremental learning has been applied in situations from traffic flow to e-commerce. As cloud computing continues to take off, we expect incremental learning solutions within cloud/streaming frameworks to increase rapidly, as companies bring new solutions into the marketplace. While the ground has been thoroughly trodden, the depth leaves something to be desired and is a possible avenue of exploration going forward.
5 Conclusions In this paper, we discussed the concept of incremental learning, laid out the basic qualifications of an incremental learning algorithm, examined the results of each algorithm’s experiments, and explored text classification as a problem area in which incremental learning could be applied. We saw how the existence or lack of training data can affect the underlying data structures for a given solution. Current frameworks and commercially available solutions were also presented, highlighting some of the ways incremental learning can be implemented in the real world. 7 References Published Books and Papers WWW References
Towards an internationalized and localized TEI

Sebastian Rahtz, Oxford University

Abstract

The Text Encoding Initiative Guidelines have been widely adopted by projects and institutions in many countries in Europe, the Americas, and Asia, and are used for encoding texts in dozens of languages. However, the Guidelines are written in English, the examples are largely drawn from English literature, and even the names of the elements are abbreviated English words. We need to make sure that the TEI and its Guidelines are internationalized and localized so that they are accessible in all parts of the world. The paper describes how the TEI project can develop internationally, including:

- A review of why localisation and internationalisation matter
- A discussion of how the TEI architecture can be leveraged to support internationalised versions
- The application of the W3C ITS guidelines to the TEI work
- Practical results from a pilot project, and future translation plans
- The tools needed to make use of an internationalised TEI
- The steps towards ontologies in the TEI

1. TEI, internationalisation, and localisation

The Text Encoding Initiative Guidelines [TEI] have been widely adopted by projects and institutions in many countries in Europe, North America, and Asia, and are used for encoding texts in dozens of languages. For example, the projects listed at http://www.tei-c.org/Applications/ have examples of work involving Chinese, Danish, Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, Latin, Norwegian, Serbian, Spanish, Welsh, and some African languages; but given that the Guidelines are c. 1400 pages of fairly dense technical English, it is possible that only the more dedicated scholars get involved.
It may be useful to distinguish between what we might call ‘traditional’ or documentary approaches to translation, which focus on translating the descriptive prose of the Guidelines as a document, and ‘formal’ approaches which focus instead on translating the individual components (examples, element and attribute names, technical descriptions) in a way that enables these components to be used within the formal structures of the TEI as a technical standard. While the first approach may be very useful, the results are more difficult to maintain over the long term and are also more difficult to produce, since they cannot be accomplished in discrete chunks. The latter approach is the one we propose here, since it is more easily maintainable (only the affected elements need to be updated when changes are made to the Guidelines) and can be more easily undertaken in a distributed fashion by collaborative groups. Some translation work has already been undertaken: • There have already been six ‘traditional’ translations of the TEI Lite (http://www.tei-c.org/Lite/) documentation into other languages. These have not covered translation of the element names or technical reference documentation. They are in wide use, however, and have created a need for more extensive translations of the Guidelines themselves. • The French Groupe d’experts n° 8 within CN 357 (Commission de normalisation «Modélisation, production et accès aux documents») of the CG 46 (Commission générale «Information et documentation») at AFNOR has an interest in TEI translation. Amongst other goals, this group intends to translate the definitions of the TEI elements and attributes into French. So far, they have worked in a ‘traditional way’ on some chapters of the P4 and P5 versions. Dissemination of the resulting French version of these chapters is very limited. 
• Some ‘formal’ work has also been undertaken on translating element and attribute names; Alejandro Bia (for his background work see eg [BIA]) and Arno Mittelbach have prepared translation sets for Catalan, Spanish, and German. This work is integrated into the Roma (http://www.tei-c.org.uk/Roma/) application, allowing users to create tailored schemas in one of the supported languages.

Translation of documentation is only part of the issue. We need to make sure that the TEI and its Guidelines are internationalized and localized so that they are accessible in all parts of the world. The W3C define these processes as follows:

Internationalization (I18N)
Internationalization is the process of generalizing a product so that it can handle multiple languages and cultural conventions without the need for redesign. Internationalization takes place at the level of program design and document development.

Localization (L10N)
Localization is the process of taking a product and making it linguistically and culturally appropriate to a given target locale (country/region and language) where it will be used.

[http://www.w3.org/TR/itsreq/#intro_definitions]

Localization primarily concerns examples in the TEI context. There are over 1100 formal examples (that is to say, syntactically complete and valid) scattered through the text of the TEI Guidelines, and another 775 in the formal definitions of elements; nearly all are in English. This is usually acceptable for examples like this:

Lexicography has shown little sign of being affected by the work of followers of J.R. Firth, probably best summarized in his slogan, <cit><quote>You shall know a word by the company it keeps.</quote><ref>(Firth, 1957)</ref></cit>

which is in the field of discourse of many scholars, but many others require considerably greater familiarity with Anglo-Saxon culture. Even Shakespeare:

<sp>
 <speaker>First Servant</speaker>
 <ab>O, I am slain! My lord, you have one eye left</ab>
 <ab>To see some mischief on him.
O!</ab>
</sp>
<stage>Dies</stage>
<sp>
 <speaker>CORNWALL</speaker>
 <ab>Lest it see more, prevent it. Out, vile jelly!</ab>
 <ab>Where is thy lustre now?</ab>
</sp>
<sp>
 <speaker>GLOUCESTER</speaker>
 <ab>All dark and comfortless. Where's my son Edmund?</ab>
 <ab>Edmund, enkindle all the sparks of nature,</ab>
 <ab>To quit this horrid act.</ab>
</sp>

is not easy, while older English is even harder:

<lg>
 <l>Sire Thopas was a doghty swayn;</l>
 <l>White was his face as payndemayn,</l>
 <l>His lippes rede as rose;</l>
 <l>His rode is lyk scarlet in grayn,</l>
 <l>And I yow telle in good certayn,</l>
 <l>He hadde a semely nose.</l>
</lg>

It will be countered that the words of these examples do not matter much, since all that is required is to appreciate the markup constructs being used (most people will recall that Shakespeare wrote plays, and this is all that matters). However, sometimes the point of the markup is not obvious, as in this example:

Next morning a boy in that dormitory confided to his bosom friend, a <distinct type="psSlang">fag</distinct> of Macrea's, that there was trouble in their midst which King <distinct type="archaic">would faine</distinct> keep secret.

Here there is the English word ‘psSlang’ (expandable to ‘public school slang’) for the type attribute of <distinct> to consider, where the value of ‘fag’ gives little help. When the general context itself is clear, and the English text perhaps easy to translate, the names of the elements may stand in the way of easy comprehension. Thus:

<persName key="EGBR1">
 <roleName type="office">Governor</roleName>
 <forename sort="2">Edmund</forename>
 <forename full="init" sort="3">G.</forename>
 <addName type="nick">Jerry</addName>
 <addName type="epithet">Moonbeam</addName>
 <surname sort="1">Brown</surname>
 <genName full="abb">Jr</genName>.
</persName>

can only really be taken advantage of by someone who

1. appreciates the cultural context of ‘forename’ and ‘surname’
2.
can mentally expand ‘nick’ to ‘nickname’ (and knows what a nickname is)
3. can appreciate whether a ‘Governor Edmund G. Jerry Moonbeam Brown Jr.’ is a politician, a kind of food, or a new dance

The user of the Guidelines may accordingly prefer to:

1. read ‘contiene un único documento TEI, compuesto de una cabecera TEI (TEI header) y un cuerpo de texto (text), aislado o como parte de un elemento corpusTei (teiCorpus)’ instead of ‘contains a single TEI-conformant document, comprising a TEI header and a text, either in isolation or as part of a teiCorpus element.’ in the documentation
2. use element names of <líneaDirección>, <ligneAdresse>, <linDireccio> or <AdresseZeile> instead of <addrLine>
3. see examples from daily life, as in:

`<Adresse> <AdresseZeile>Herrn Jürgen Jemandem</AdresseZeile> <AdresseZeile>Computer+Software GmbH</AdresseZeile> <AdresseZeile>Albrecht-Thäär-Straße 22</AdresseZeile> <AdresseZeile>48147 Münster</AdresseZeile> <AdresseZeile>GERMANY</AdresseZeile> </Adresse>`

(thanks to [http://www.columbia.edu/kermitt/postal.html#germany](http://www.columbia.edu/kermitt/postal.html#germany) for the example).

We will consider later how these translated element names can be reconciled with the English names. It should be noted that element name translation by itself is quick and useful, but not necessarily the most effective way to proceed. For example, many of the element names are in an abbreviated form of English (eg <respStmt>) which are not easy to translate sensibly. Furthermore, unless the reference descriptions are also translated, the element names by themselves do not give a clear idea of what the element is for. Using <infoResp> instead of <respStmt> is not as helpful as translating the description ‘supplies a statement of responsibility for someone responsible for the intellectual content of a text, edition, recording, or series, where the specialized elements for authors, editors, etc. do not suffice or do not apply.’

2.
TEI architecture support for the I18N and L10N process

2.1. Unicode

The first priority in internationalizing the TEI is to ensure clean support for character sets throughout the system. With this in mind, the P5 revision of the TEI made substantial changes in its dealings with characters. As the W3C ([http://www.w3.org/International/](http://www.w3.org/International/)) recommend, in the TEI scheme:

- Unicode is the only supported character encoding scheme. This means that entities for characters are deprecated, and the recommended daily use is for UTF-8 encoded text, as in

```
<persName xml:lang="el-grc">Φ λ . Θ ἀ λ ὸ γ</persName>
```

- there is a clean mechanism to use non-Unicode characters
- all appropriate text content models are set to allow a mixture of CDATA and <g> (where <g> is a reference to a non-Unicode character)
- all elements have an attribute xml:lang to record the language used
- there are no places where an attribute is used to hold pure text

A non-Unicode character can be defined using the <glyph> element in the TEI header. In the following example, we define a new character and assign it to a position in the Unicode Private Use Area (PUA); we also provide a standardized form as a fallback:

```
<glyph xml:id="z103">
 <glyphName>LATIN LETTER Z WITH TWO STROKES</glyphName>
 <mapping type="standardized">Z</mapping>
 <mapping type="PUA">U+E304</mapping>
</glyph>
```

This can now be referred to using the <g> element, as in

```
<g ref="#z103"/>
```

At this point we will expect the processing application to work out what to do (either show the PUA character, if it can, or the standardized form). Other facilities in the <charDesc> element allow the user to provide an image file which has a picture of the character. It is also possible to override what appears in the text by using markup like this

```
<g ref="#z103">z</g>
```

where the content of the <g> element can be used immediately without any lookup.
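The fallback lookup an application performs when it meets a <g> reference can be sketched as follows. This is a toy resolver over Python's ElementTree, not part of any TEI toolchain; it prefers inline content, then falls back to the glyph's standardized mapping, mirroring the two forms of markup shown above.

```python
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def standardized_mappings(char_decl):
    # Index the <glyph> declarations: xml:id -> standardized mapping text.
    table = {}
    for glyph in char_decl.iter(f"{{{TEI}}}glyph"):
        for mapping in glyph.iter(f"{{{TEI}}}mapping"):
            if mapping.get("type") == "standardized":
                table[glyph.get(XML_ID)] = mapping.text
    return table

def resolve_g(g, table):
    # <g ref="#z103">z</g> supplies its own content directly; an empty
    # <g ref="#z103"/> falls back to the referenced glyph's
    # standardized form.
    if g.text:
        return g.text
    return table.get(g.get("ref", "").lstrip("#"), "")
```

A real application would additionally consult the PUA mapping when it can render private-use characters; the standardized form is only the last resort the text guarantees.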
Where a character is simply a relatively unimportant variant on a Unicode character, the user does not need to define a point in the PUA, but can simply use <charDesc> to describe the variation.

### 2.2. TEI literate programming

The TEI is written in a high-level markup language for specifying XML schemas and their documentation. This language is an XML vocabulary known as ODD (One Document Does it all), and is one of the TEI modules. This provides a literate programming language for production and documentation of any XML schema, with four important characteristics:

1. The element and attribute sets making up the schema are formally specified using a special XML vocabulary.
2. The specification language also includes support for macros (like DTD entities, or schema patterns), a hierarchical class system for attributes and elements, and the creation of pre-defined groups of elements known as modules.
3. Content models for elements and attributes are written using an embedded RELAXNG XML notation, but tools are available to generate schemas in any of RELAXNG, DTD language, or W3C schema.
4. Documentation describing the supported elements, attributes, value lists etc is managed along with their specification, together with use cases, examples, and other supporting material.

The expectation is that many people wish to use only a subset of the TEI, so the TEI's 22 modules (containing 500 elements) can be combined together and customized as desired using the ODD language, to produce a schema suitable for use by a project. Customization may include tightening the constraints on existing elements, removing unused elements, and even adding new elements or attributes (though this will make the text not portable). The ODD language has allowance for translating element names, attribute names, and descriptions, and for preserving information to allow canonicalisation.
The technical documentation elements (<gloss> and <desc>) for TEI elements and attributes etc can be specified multiple times, in different languages, distinguished by the standard xml:lang attribute. There is also a container (<equiv>) to specify the relationship of an element, attribute or value to standardised schemes. Each definition of a new primary object (element or attribute) has associated description and examples. A complete example of a definition is as follows:

```xml
<elementSpec module="header" ident="taxonomy">
 <gloss>taxonomy</gloss>
 <desc>defines a typology used to classify texts either implicitly, by means of a bibliographic citation, or explicitly by a structured taxonomy.</desc>
 <content>
  <rng:choice>
   <rng:oneOrMore>
    <rng:ref name="category"/>
   </rng:oneOrMore>
   <rng:group>
    <rng:ref name="model.biblLike"/>
    <rng:zeroOrMore>
     <rng:ref name="category"/>
    </rng:zeroOrMore>
   </rng:group>
  </rng:choice>
 </content>
 <exemplum>
  <egXML>
   <taxonomy xml:id="tax.b">
    <bibl>Brown Corpus</bibl>
    <category xml:id="tax.b.a">
     <catDesc>Press Reportage</catDesc>
    </category>
    <category xml:id="tax.b.a1">
     <catDesc>Daily</catDesc>
    </category>
    <category xml:id="tax.b.a2">
     <catDesc>Sunday</catDesc>
    </category>
   </taxonomy>
  </egXML>
 </exemplum>
</elementSpec>
```

The important things to note here are that the content model for the element is expressed in RELAXNG, which references other elements only by the names of classes to which they belong; and that the worked example is well-formed XML embedded in its own namespace. This specification may be processed to produce a DTD, a RELAXNG schema, an XSD schema, or documentation in various forms.

The objects identified by the ident attribute in the TEI can be given an alternate name by use of the <altIdent> element; so the example above could be rewritten as

```xml
<elementSpec module="header" ident="taxonomy">
 <altIdent xml:lang="fr">taxonomie</altIdent>
 ...
</elementSpec>
```

providing a French name for the element.
How does this work in the schema, where other elements might refer to ‘taxonomy’? The normal schema, using RELAXNG compact syntax, has the definition

```
taxonomy =
  ## (taxonomy) defines a typology used to classify texts either
  ## implicitly, by means of a bibliographic citation,
  ## or explicitly by a structured taxonomy.
  element taxonomy { taxonomy.content, taxonomy.attributes }
taxonomy.content = category+ | (model.biblLike, category*)
taxonomy.attributes = att.global.attributes, empty
```

in which the element <taxonomy> is defined by the containing pattern ‘taxonomy’; it is the pattern name which other elements use, not the element name. If the schema were translated into Greek, it would look like this:

```
taxonomy = element ταξινομία { taxonomy.content, taxonomy.attributes }
...
```

where the pattern name remains the same. This type of schema markup is generated by the TEI tools, picking up the information from <altIdent>. The descriptions work in the same way. We can expand the TEI source to add French translations alongside the English originals, and the appropriate text can be passed to the generated schemas or documentation:

```xml
<elementSpec module="header" ident="taxonomy">
 <altIdent xml:lang="fr">taxinomie</altIdent>
 <gloss>taxonomy</gloss>
 <gloss xml:lang="fr">Taxinomie</gloss>
 <desc>defines a typology used to classify texts either implicitly, by means of a bibliographic citation, or explicitly by a structured taxonomy.</desc>
 <desc xml:lang="fr">L’élément Taxinomie <gi>taxonomy</gi> définit une typologie employée pour classer des textes soit implicitement au moyen d’une citation bibliographique, soit explicitement au moyen d’une taxinomie structurée.</desc>
 ....
</elementSpec>
```

[We thank Pierre Yves Duchemin for these translations.]

What does a translated schema look like in practice?
If we take a Spanish play, and translate the element names to Spanish (thanks to Alejandro Bia for this work), a text like this will be much more familiar-looking to encoders in Spanish-speaking countries:

```xml
<cuerpo>
 <div1 tipo="part">
  <div2 tipo="act">
   <encabezado tipo="main">Jornada primera</encabezado>
   <div3 tipo="scene">
    <encabezado tipo="main">Cuadro único</encabezado>
    <acotacion formato="centered">
     <resaltado formato="bold">(Salen REBOLLEDO, CHISPA y soldados.)</resaltado>
    </acotacion>
    <dialogo>
    </dialogo>
   </div3>
  </div2>
 </div1>
</cuerpo>
```

This file will not work with normal TEI publishing tools, or be suitable for archiving, but it is straightforward to write a transformation (e.g. in XSL) which reads the TEI source with the element names and `<altIdent>` information, and puts the text back to canonical form.

### 2.3. TEI applications

TEI applications, as well as the texts themselves, need internationalised interfaces. For example, an application which turns TEI XML into HTML for web display, and provides a heading such as ‘Contents’ when it meets `<divGen type="toc"/>`, will have to provide appropriate translations.
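The canonicalizing transformation described above can be sketched as a recursive rename over the parsed tree. This is our own illustration in Python rather than the project's XSL, and the Spanish-to-canonical pairs are inferred from the play extract above; they are not an official mapping.

```python
import xml.etree.ElementTree as ET

# Illustrative mapping only: localized name -> canonical TEI name,
# covering the elements and attributes used in the play extract.
TO_CANONICAL = {
    "cuerpo": "body", "encabezado": "head", "acotacion": "stage",
    "resaltado": "hi", "tipo": "type", "formato": "rend",
}

def canonicalize(elem):
    # Rename localized element and attribute names in place, recursively;
    # text content is left untouched, so the document round-trips.
    elem.tag = TO_CANONICAL.get(elem.tag, elem.tag)
    elem.attrib = {TO_CANONICAL.get(k, k): v for k, v in elem.attrib.items()}
    for child in elem:
        canonicalize(child)
    return elem
```

In a production setting the mapping would be generated from the `<altIdent>` declarations in the ODD source rather than hard-coded, so that schema and transformation stay in step.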
The TEI XSL family maintained by Sebastian Rahtz, for example (http://www.tei-c.org/Stylesheets/teic/), can operate in many languages:

| ISO language code | Text |
|---|---|
| en | Contents |
| de | Inhalt |
| ro | Cuprins |
| fr | Contenu |
| pt | Índice geral |
| es | Contenidos |
| slv | Vsebina |
| sv | Innehåll |
| sr | Sadržaj |
| pl | Spis treści |
| hi | Mula Shabda |
| nl | Inhoud |
| tr | İçerik |
| el | Περιεχόμενα |

[Bulgarian, Chinese, Japanese, Russian, and Thai also available; they are omitted here to avoid printing problems.]

### 2.4. TEI schema-making tools

The ODD language files need to be processed to produce schemas in the chosen language. This is done by a set of XSLT scripts, which can either be run on a command-line, or as a web service called Roma (http://www.tei-c.org.uk/Roma/). This currently has support for varying the languages of its interface, but must also allow for supporting the following output schemes:

- canonical: English names, descriptions in English
- local descriptions: English names, descriptions in chosen language
- local names: names designed to make sense to a speaker of the chosen language, descriptions in English
- fully localized: both names and descriptions in chosen language

This work is in progress; while the underlying XSLT supports the generation of documentation in different languages, the web interface has still to be implemented.

2.5.
The application of the W3C ITS guidelines to TEI work

An Internationalisation Tag Set working group (under the chairmanship of Yves Savourel, Enlaso) is writing a Recommendation (if it is accepted) for the World Wide Web Consortium about markup which encodes information for translators and localisers. The current state can be found at http://www.w3.org/International/its (this document is itself written using the TEI ODD language). The ITS consists of a set of elements and attributes for annotating a text with information for further processing, covering

**Internationalization:**
- Markup for bidirectional text
- Ruby annotation
- Language identification

and **Localization:**
- Translatability of content
- The localization process in general
- Terminology markup

It is intended that the ITS annotation elements be added at several stages. The simplest is at the content authoring stage, by technical writers, developers of authoring systems, localizers or translators. In addition, specialist terminologists might annotate a text with terminological information, or localization engineers and translators may add information.

The primary ITS notion is that information about elements and attributes can be supplied

- in a document schema
- in an external rules file
- in a rule section in an instance file
- attached to instance elements

where the information consists of a set of data categories. On an instance element, for example, the following attributes may be attached:

- translate: should this object be translated?
- locInfo: is there some localisation hint?
- locInfoType: what type of hint is it?
- term: does this object describe a technical term?
- termRef: where is the term defined?
- dir: what is the text direction?
- rubyText: is there some Ruby annotation?
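The selector-driven part of this machinery can be sketched mechanically. Below is a toy evaluator for translateRule-style rules, written over Python's ElementTree rather than as a conforming ITS processor: rules apply in document order so later, more specific selectors override earlier ones, and a local its:translate attribute on an element wins over every rule.

```python
import xml.etree.ElementTree as ET

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
ITS_TRANSLATE = "{http://www.w3.org/2005/11/its}translate"

def translatable(root, rules, default=True):
    # rules: list of (xpath, bool) pairs. Start from the default, let
    # each rule overwrite the flag for every element its selector
    # matches, then apply local its:translate overrides last.
    flags = {el: default for el in root.iter()}
    for xpath, translate in rules:
        for el in root.findall(xpath, NS):
            flags[el] = translate
    for el in root.iter():
        local = el.get(ITS_TRANSLATE)
        if local is not None:
            flags[el] = local == "yes"
    return flags
```

ElementTree supports only a subset of XPath, so the selectors here use the `.//tei:name` form; a full ITS processor would accept arbitrary XPath expressions.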
A complete example of a TEI text marked up with a combination of ITS rules and ITS local markup looks like this:

```xml
<TEI xmlns:tei="http://www.tei-c.org/ns/1.0"
     xmlns:its="http://www.w3.org/2005/11/its">
 <teiHeader>
  <its:rules>
   <its:translateRule translate="no" selector="//tei:body/tei:p"/>
  </its:rules>
 </teiHeader>
 <text>
  <body>
   <p>Hello <hi>world</hi></p>
   <p its:translate="yes">translate me</p>
  </body>
 </text>
</TEI>
```

where the ITS rules say that `<p>` elements should not normally be translated, but the second `<p>` has an explicit override. If we take a TEI ODD document, we can express the relationship between the structural elements and the documentation elements with the following ITS rules, which say that the default is to translate nothing, but give a set of elements which are to be translated:

```xml
<its:rules xmlns:tei="http://www.tei-c.org/ns/1.0">
 <its:translateRule translate="no" selector="//tei:*"/>
 <its:translateRule translate="yes" selector="//tei:desc"/>
 <its:translateRule translate="yes" selector="//tei:gloss"/>
 <its:translateRule translate="yes" selector="//tei:valDesc"/>
 <its:translateRule translate="yes" selector="//tei:p[@rend='dataDesc']"/>
 <its:translateRule translate="yes" selector="//tei:remarks"/>
</its:rules>
```

Using this information, an ITS tool can show graphically which elements need a translated equivalent (those shown in green in Figure 1, Example of ITS implementation).

Figure 1. Example of ITS implementation

For the purposes of the formal translation procedure advocated by this paper, the ITS procedure provides a good framework.

3.
Results so far

We present here some examples showing work completed so far:

```xml
<elementSpec module="corpus" id="PERSON" usage="opt" ident="person">
  <equiv/>
  <gloss/>
  <desc>言語活動の関係者(1件1名)</desc>
  <content>
    <rng:choice>
      <rng:oneOrMore>
        <rng:ref name="p"/>
      </rng:oneOrMore>
      <rng:zeroOrMore>
        <rng:ref name="tei.demographic"/>
      </rng:zeroOrMore>
    </rng:choice>
  </content>
  <attList>
    <attDef ident="role" usage="opt">
      <equiv/>
      <desc>当該関係者の言語活動における役割</desc>
    </attDef>
  </attList>
</elementSpec>
```

```xml
<egXML>
  <person sex="f" age="42">
    <p>女性、教養あり、1950年1月12日英国シュロプシャ生まれ、不明、フランス語を流暢に話す、社会経済状態:中</p>
  </person>
</egXML>
```

Figure 2. Example of translated ODD

1. `<elementSpec>` person

describes a single participant in a language interaction.

Declaration

```
element person
{
   att.global.attributes,
   attribute role { text },
   attribute sex { text },
   attribute age { text },
   ( p | tei.demographic )
}
```

Attributes (in addition to global attributes)

- role specifies the role of this participant in the group.
- sex specifies the sex of the participant. Legal values are:
  - male
  - female
  - unknown or inapplicable
- age specifies the age group to which the participant belongs.

Example

```xml
<person sex="f" age="42">
  <p>Female informant, well educated, born in Shropshire UK, 12 Jan 1950,
     of unknown occupation. Speaks French fluently. Socio-Economic status B2.</p>
</person>
```

May contain a prose description organized as paragraphs, or any sequence of demographic elements in any combination.

Figure 3. Example of reference documentation
Figure 4. Example of reference documentation in Japanese

Figure 5. Example of reference documentation in Bulgarian

Figure 6. Interface translation in Bulgarian

Figure 7. Reference documentation in Japanese, with German annotation

Figure 8. TEI Guidelines in French

4. Future directions

The TEI Consortium is working with TEI scholars to advance I18N and L10N in various languages (listed in Appendix C). We hope to work on French, Spanish, German, Chinese and Japanese in 2006, and to produce translated element and attribute names, translated <desc> and <gloss> texts, and a mechanism to allow users to easily take advantage of the work.
The scale of work involved is not impossible to contemplate. The TEI contains:

- 494 elements
- 489 attributes
- 1203 <desc> elements, 106666 characters
- 1177 <gloss> elements, 32385 characters

The work needed for each language is to:

- translate descriptive prose to other languages
- translate technical documentation components (note that this includes gloss for fixed attribute lists)
- translate examples
- localize examples
- add W3C ITS information
- translate the processing workflow tools

The infrastructure challenges are not inconsiderable. We need, at least:

- an infrastructure to allow translators to submit material, and get prompt feedback
- integration of the translations into the P5 source
- a way of ensuring that translations are flagged as decayed when the English original changes, and that translators are notified
- management of multi-language examples

By the end of 2006, we expect to be well on the way to meeting these goals.

Appendix A Acknowledgements

The first steps in formalized internationalization of the TEI (as opposed to the translations of the Lite document) were made by Alejandro Bia, to whom many thanks are due. Translation examples in this paper come from Pierre Yves Duchemin (French), Marcus Bingenheimer (Chinese), Arno Mittelbach (German) and Alejandro Bia (Spanish). Veronika Lux and Julia Flanders co-wrote some of the explanations of TEI I18N.
Appendix C TEI internationalisation partners

The following people and bodies have agreed to coordinate their respective languages:

<table>
<thead>
<tr>
<th>Language</th>
<th>Name</th>
<th>Institution/Institute</th>
</tr>
</thead>
<tbody>
<tr><td>Chinese</td><td>Marcus Bingenheimer</td><td>Chung-hwa Institute of Buddhist Studies, Taipei</td></tr>
<tr><td>Dutch</td><td>Bert Van Elsacker</td><td>-</td></tr>
<tr><td>French</td><td>Laurent Romary</td><td>Nancy</td></tr>
<tr><td>French</td><td>Veronika Lux</td><td>Nancy</td></tr>
<tr><td>German</td><td>Christian Wittern</td><td>Institute for Research in Humanities, Kyoto University</td></tr>
<tr><td>German</td><td>Werner Wegstein</td><td>Wuerzburg University</td></tr>
<tr><td>Hindi</td><td>Paul Richards</td><td>UGS (The PLM Company), http://www.ugs.com/</td></tr>
<tr><td>Hungarian</td><td>Király Péter</td><td>-</td></tr>
<tr><td>Italian</td><td>Fabio Ciotti</td><td>University of Roma</td></tr>
<tr><td>Japanese</td><td>OHYA Kazushi</td><td>Tsurumi University, Yokohama</td></tr>
<tr><td>Norwegian</td><td>Øyvind Eide</td><td>-</td></tr>
<tr><td>Polish</td><td>Radoslaw Moszczynski</td><td>Warsaw University</td></tr>
<tr><td>Portuguese</td><td>Leonor Barroca</td><td>Open University</td></tr>
<tr><td>Romanian</td><td>Dan Matei</td><td>CIMEC - Institutul de Memorie Culturala, România</td></tr>
<tr><td>Serbian</td><td>dr Cvetana Krstev</td><td>-</td></tr>
<tr><td>Slovenian</td><td>Tomaž Erjavec, Matija Ogrin</td><td>Dept. of Knowledge Technologies, Jozef Stefan Institute, Slovenia</td></tr>
<tr><td>Spanish</td><td>Manuel Sánchez</td><td>Miguel de Cervantes Digital Library</td></tr>
<tr><td>Swedish</td><td>Matt Zimmerman</td><td>NYU</td></tr>
<tr><td>Tibetan</td><td>Linda Patrik, Tensin Namdak</td><td><a href="http://www.nitartha.org">www.nitartha.org</a></td></tr>
</tbody>
</table>
Types and Static Semantic Analysis

Stephen A. Edwards, Columbia University, Fall 2015

- Types in C
- Types of Type Systems
- Static Semantic Analysis
- Scope
- A Static Semantic Analyzer

Types

A restriction on the possible interpretations of a segment of memory or other program construct. Two uses:

**Safety:** avoids data being treated as something it isn't
**Optimization:** eliminates certain runtime decisions

Types in C

Basic C Types

C was designed for efficiency: basic types are whatever is most efficient for the target processor. On a (32-bit) ARM processor,

```c
char c;           /* 8-bit binary */
short d;          /* 16-bit two's-complement binary */
unsigned short e; /* 16-bit binary */
int a;            /* 32-bit two's-complement binary */
unsigned int b;   /* 32-bit binary */
float f;          /* 32-bit IEEE 754 floating-point */
double g;         /* 64-bit IEEE 754 floating-point */
```

Pointers and Arrays

A pointer contains a memory address. Arrays in C are implemented with arithmetic on pointers. A pointer can create an *alias* to a variable:

```c
int a;
int *b = &a; /* "pointer to integer b is the address of a" */
int *c = &a; /* c also points to a */

*b = 5;      /* sets a to 5 */
*c = 42;     /* sets a to 42 */
printf("%d %d %d\n", a, *b, *c); /* prints 42 42 42 */
```

Pointers Enable Pass-by-Reference

```c
void swap(int x, int y)
{
  int temp;
  temp = x;
  x = y;
  y = temp;
}
```

Does this work? Nope: x and y are copies of the caller's arguments, so only the copies are swapped.

```c
void swap(int *px, int *py)
{
  int temp;
  temp = *px;  /* read data at px */
  *px = *py;   /* copy data from py to px */
  *py = temp;  /* write saved data to py */
}

void main()
{
  int a = 1, b = 2;
  /* Pass addresses of a and b */
  swap(&a, &b); /* a = 2 and b = 1 */
}
```

Arrays and Pointers

```c
int a[10];        /* elements a[0] to a[9] */
int *pa = &a[0];  /* pa points to the first element */
pa = pa + 1;      /* now pa = &a[1] */
pa = a + 5;       /* array name decays to a pointer: pa = &a[5] */
```

`a[i]` is equivalent to `*(a + i)`

Multi-Dimensional Arrays

```c
int monthdays[2][12] = {
  { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 },
  { 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 } };
```

`monthdays[i][j]` is at address `monthdays + 12 * i + j`

Structures: each field has own storage

```c
struct box {
  int x, y, h, w;
  char *name;
};
```

Unions: fields share same memory

```c
union token {
  int i;
  double d;
  char *s;
};
```

Structs can be used like the objects of C++, Java, et al. Group and restrict what can be stored in an object, but not what operations they permit.

```c
struct poly { ... };

struct poly *poly_create();
void poly_destroy(struct poly *p);
void poly_draw(struct poly *p);
void poly_move(struct poly *p, int x, int y);
int poly_area(struct poly *p);
```

A struct holds all of its fields at once. A union holds only one of its fields at any time (the last written).
```c
union token {
  int i;
  float f;
  char *string;
};

union token t;
t.i = 10;
t.f = 3.14159;       /* overwrites t.i */
char *s = t.string;  /* returns gibberish */
```

Applications of Variant Records

A primitive form of polymorphism:

```c
struct poly {
  int type;
  int x, y;
  union {
    int radius;
    int size;
    float angle;
  } d;
};

void draw(struct poly *shape)
{
  switch (shape->type) {
  case CIRCLE: /* use shape->d.radius */
  case SQUARE: /* use shape->d.size */
  case LINE:   /* use shape->d.angle */
  }
}
```

Name vs. Structural Equivalence

```c
struct f { int x, y; } foo = { 0, 1 };
struct b { int x, y; } bar;

bar = foo;
```

Is this legal in C? Should it be?

C's Declarations and Declarators

Declaration: list of specifiers followed by a comma-separated list of declarators.

```c
static unsigned int (*f[10])(int, char *);
```

Here the basic type is `unsigned int`; the rest is the declarator. The declarator's notation matches that of an expression: use it to return the basic type. Largely regarded as the worst syntactic aspect of C: both pre-fix (pointers) and post-fix operators (arrays, functions).

Types of Type Systems

Strongly-typed Languages

Strongly-typed: no run-time type clashes (detected or not). C is definitely not strongly-typed:

```c
float g;
union { float f; int i; } u;

u.i = 3;
g = u.f + 3.14159; /* u.f is meaningless */
```

Is Java strongly-typed?

Statically-Typed Languages

Statically-typed: compiler can determine types. Dynamically-typed: types determined at run time. Is Java statically-typed?

```java
class Foo {
  public void x() { ... }
}

class Bar extends Foo {
  public void x() { ...
  }
}

void baz(Foo f) { f.x(); }
```

Say you write a sort routine:

```c
void sort(int a[], int n)
{
  int i, j;
  for ( i = 0 ; i < n-1 ; i++ )
    for ( j = i + 1 ; j < n ; j++ )
      if (a[j] < a[i]) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
      }
}
```

Polymorphism

To sort doubles, only need to change two types:

```c
void sort(double a[], int n)
{
  int i, j;
  for ( i = 0 ; i < n-1 ; i++ )
    for ( j = i + 1 ; j < n ; j++ )
      if (a[j] < a[i]) {
        double tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
      }
}
```

C++ Templates

```cpp
template <class T>
void sort(T a[], int n)
{
  int i, j;
  for (i = 0; i < n-1; i++)
    for (j = i + 1; j < n; j++)
      if (a[j] < a[i]) {
        T tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
      }
}

int a[10];
sort<int>(a, 10);
```

C++ templates are essentially language-aware macros. Each instance generates a different refinement of the same code.

```cpp
sort<int>(a, 10);
sort<double>(b, 30);
sort<char*>(c, 20);
```

Fast code, but lots of it.

Faking Polymorphism with Objects

```java
class Sortable {
  bool lessthan(Sortable s) = 0;
}

void sort(Sortable a[], int n)
{
  int i, j;
  for (i = 0 ; i < n-1 ; i++)
    for (j = i + 1 ; j < n ; j++)
      if (a[j].lessthan(a[i])) {
        Sortable tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
      }
}
```

This sort works with any array of objects derived from Sortable. Same code is used for every type of object. Types resolved at run-time (dynamic method dispatch). Does not run as quickly as the C++ template version.
Static Semantic Analysis

Lexical analysis: makes sure tokens are valid

```java
if i 3 "This" /* valid Java tokens */
#a1123        /* not a token */
```

Syntactic analysis: makes sure tokens appear in correct order

```java
for ( i = 1 ; i < 5 ; i++ ) 3 + "foo"; /* valid Java syntax */
for break                              /* invalid syntax */
```

Semantic analysis: makes sure program is consistent

```java
int v = 42 + 13; /* valid in Java (if v is new) */
return f + f(3); /* invalid */
```

What To Check

Examples from Java. Verify names are defined and are of the right type.

```java
int i = 5;
int a = z;    // Error: cannot find symbol
int b = i[3]; // Error: array required, but int found
```

Verify the type of each expression is consistent.

```java
int j = i + 53;
int k = 3 + "hello";   // Error: incompatible types
int l = k(42);         // Error: k is not a method
if ("Hello") return 5; // Error: incompatible types
String s = "Hello";
int m = s;             // Error: incompatible types
```

How To Check: Depth-first AST Walk

Checking function: environment → node → type

For `1 - 5`, the AST is a `-` node with children `1` and `5`:

```
check(-)
  check(1) = int
  check(5) = int
Success: int - int = int
```

For `1 + "Hello"`, the AST is a `+` node with children `1` and `"Hello"`:

```
check(+)
  check(1) = int
  check("Hello") = string
FAIL: Can't add int and string
```

Ask yourself: at each kind of node, what must be true about the nodes below it? What is the type of the node?

How To Check: Symbols

Checking function: environment → node → type

For `1 + a`, the AST is a `+` node with children `1` and `a`:

```
check(+)
  check(1) = int
  check(a) = int
Success: int + int = int
```

The key operation: determining the type of a symbol when it is encountered. The environment provides a "symbol table" that holds information about each in-scope symbol.

Scope

Basic Static Scope in C, C++, Java, etc.

A name begins life where it is declared and ends at the end of its block.
From the CLRM, "The scope of an identifier declared at the head of a block begins at the end of its declarator, and persists to the end of the block."

Hiding a Definition

Nested scopes can hide earlier definitions, giving a hole. From the CLRM, "If an identifier is explicitly declared at the head of a block, including the block constituting a function, any declaration of the identifier outside the block is suspended until the end of the block."

```java
public void example() {
  // x, y, z not visible
  int x;                             // x visible
  for ( int y = 1 ; y < 10 ; y++ ) { // x, y visible
    int z;                           // x, y, z visible
  }                                  // x visible
}                                    // x visible
```

Basic Static Scope in O'Caml

A name is bound after the "in" clause of a "let." If the name is re-bound, the binding takes effect after the "in."

```ocaml
let x = 8 in
let x = x + 1 in (* this x is 9; the earlier binding is hidden *)
```

Returns the pair (12, 8):

```ocaml
let x = 8 in
(let x = x + 2 in x + 2), x
```

The "rec" keyword makes a name visible to its definition. This only makes sense for functions.

```ocaml
let rec fib i =
  if i < 1 then 1 else
  fib (i-1) + fib (i-2) in
fib 5
```

```ocaml
(* Nonsensical *)
let rec x = x + 3 in
```

Let...and in O'Caml

Let...and lets you bind multiple names at once.
Definitions are not mutually visible unless marked "rec."

```ocaml
let x = 8
and y = 9 in

let rec fac n =
  if n < 2 then 1 else n * fac1 n
and fac1 n = fac (n - 1) in
fac 5
```

Nesting Function Definitions

```ocaml
let articles words =
  let report w =
    let count = List.length
      (List.filter ((=) w) words) in
    w ^ ": " ^ string_of_int count in
  String.concat ", "
    (List.map report ["a"; "the"]) in
articles ["the"; "plt"; "class"; "is"; "a";
          "pain"; "in"; "the"; "butt"]
```

The same program with the helper functions hoisted to the top level:

```ocaml
let count words w =
  List.length (List.filter ((=) w) words) in
let report words w =
  w ^ ": " ^ string_of_int (count words w) in
let articles words =
  String.concat ", "
    (List.map (report words) ["a"; "the"]) in
articles ["the"; "plt"; "class"; "is"; "a";
          "pain"; "in"; "the"; "butt"]
```

Produces "a: 1, the: 2"

A Static Semantic Analyzer

The Static Semantic Checking Function

A big function: check : ast → sast

Converts a raw AST to a "semantically checked AST" with names and types resolved.

AST:

```ocaml
type expression =
    IntConst of int
  | Id of string
  | Call of string * expression list
  | ...
```

SAST:

```ocaml
type expr_detail =
    IntConst of int
  | Id of variable_decl
  | Call of function_decl * expression list
  | ...

type expression = expr_detail * Types.t
```

The Type of Types

Need an OCaml type to represent the type of something in your language.
An example for a language with integers, structures, arrays, and exceptions:

```ocaml
type t = (* can't call it "type" since that's reserved *)
    Void
  | Int
  | Struct of string * ((string * t) array) (* name, fields *)
  | Array of t * int                        (* type, size *)
  | Exception of string
```

Translation Environments

Whether an expression/statement/function is correct depends on its context. Represent this as an object with named fields since you will invariably have to extend it. An environment type for a C-like language:

```ocaml
type translation_environment = {
  scope : symbol_table;              (* symbol table for vars *)
  return_type : Types.t;             (* Function's return type *)
  in_switch : bool;                  (* if we are in a switch stmt *)
  case_labels : Big_int.big_int list ref; (* known case labels *)
  break_label : label option;        (* when break makes sense *)
  continue_label : label option;     (* when continue makes sense *)
  exception_scope : exception_scope; (* sym tab for exceptions *)
  labels : label list ref;           (* labels on statements *)
  forward_gotos : label list ref;    (* forward goto destinations *)
}
```

A Symbol Table

Basic operation is string → type. Map or hash could do this, but a list is fine.
```ocaml
type symbol_table = {
  parent : symbol_table option;
  variables : variable_decl list
}

let rec find_variable (scope : symbol_table) name =
  try
    List.find (fun (s, _, _, _) -> s = name) scope.variables
  with Not_found ->
    match scope.parent with
      Some(parent) -> find_variable parent name
    | _ -> raise Not_found
```

Checking Expressions: Literals and Identifiers

```ocaml
(* Information about where we are *)
type translation_environment = {
  scope : symbol_table;
}

let rec expr env = function
    (* An integer constant: convert and return Int type *)
    Ast.IntConst(v) -> Sast.IntConst(v), Types.Int

    (* An identifier: verify it is in scope and return its type *)
  | Ast.Id(vname) ->
      let vdecl = try
        find_variable env.scope vname (* locate a variable by name *)
      with Not_found ->
        raise (Error("undeclared identifier " ^ vname))
      in
      let (_, typ) = vdecl in (* get the variable's type *)
      Sast.Id(vdecl), typ

  | ...
```

Checking Expressions: Binary Operators

```ocaml
(* let rec expr env = function *)
| A.BinOp(e1, op, e2) ->
    let e1 = expr env e1 (* Check left and right children *)
    and e2 = expr env e2 in

    let _, t1 = e1 (* Get the type of each child *)
    and _, t2 = e2 in

    if op <> Ast.Equal && op <> Ast.NotEqual then
      (* Most operators require both left and right to be integer *)
      (require_integer e1 "Left operand must be integer";
       require_integer e2 "Right operand must be integer")
    else
      if not (weak_eq_type t1 t2) then
        (* Equality operators just require types to be "close" *)
        error ("Type mismatch in comparison: left is " ^
               Printer.string_of_sast_type t1 ^ " right is " ^
               Printer.string_of_sast_type t2) loc;

    Sast.BinOp(e1, op, e2), Types.Int (* Success: result is int *)
```

Checking Statements: Expressions, If

```ocaml
let rec stmt env = function
    (* Expression statement: just check the expression *)
    Ast.Expression(e) -> Sast.Expression(expr env e)

    (* If statement: verify the predicate is integer *)
  | Ast.If(e, s1, s2) ->
      let e = check_expr env e in (* Check the predicate *)
      require_integer e "Predicate of if must be integer";
      Sast.If(e, stmt env s1, stmt env s2) (* Check then, else *)
```

Checking Statements: Declarations

```ocaml
(* let rec stmt env = function *)
| A.Local(vdecl) ->
    let decl, (init, _) = check_local vdecl (* already declared? *)
    in
    (* side-effect: add variable to the environment *)
    env.scope.S.variables <- decl :: env.scope.S.variables;
    init (* initialization statements, if any *)
```

Checking Statements: Blocks

```ocaml
(* let rec stmt env = function *)
| A.Block(sl) ->
    (* New scopes: parent is the existing scope, start out empty *)
    let scope' = { S.parent = Some(env.scope);
                   S.variables = [] }
    and exceptions' = { excep_parent = Some(env.exception_scope);
                        exceptions = [] } in

    (* New environment: same, but with new symbol tables *)
    let env' = { env with scope = scope';
                 exception_scope = exceptions' } in

    (* Check all the statements in the block *)
    let sl = List.map (fun s -> stmt env' s) sl in
    scope'.S.variables <- List.rev scope'.S.variables; (* side-effect *)

    Sast.Block(scope', sl) (* Success: return block with symbols *)
```
**Virtual Memory**
Instructor: Justin Hsia

**Review of Last Lecture**
• Multiple instruction issue increases max speedup, but with a higher penalty for a stall
  – Superscalar because it can achieve CPI < 1
  – Requires a significant amount of extra hardware
• Employ more aggressive scheduling techniques to increase performance
  – Register renaming
  – Speculation (guessing)
  – Out-of-order execution

**Agenda**
• Virtual Memory
• Page Tables
• Administrivia
• Translation Lookaside Buffer (TLB)
• VM Performance
• VM Wrap-up

**Memory Hierarchy**
Earlier: caches. Next up: virtual memory.
(Diagram: upper levels faster, lower levels larger: Regs (instr. operands) → L1 Cache (blocks) → L2 Cache (blocks) → Memory (pages) → Disk (files) → Tape)

**Virtual Memory**
• The next level in the memory hierarchy
  – Provides the illusion of a very large main memory
  – The working set of “pages” resides in main memory (a subset of all pages residing on disk)
• **Main goal:** Avoid reaching all the way back to disk as much as possible
• **Additional goals:**
  – Let the OS share memory among many programs and protect them from each other
  – Each process thinks it has all the memory to itself

**Virtual to Physical Address Translation**
- Each program operates in its own virtual address space and thinks it’s the only program running
- Each is protected from the others
- The OS can decide where each goes in memory
- Hardware provides the virtual → physical mapping

**Agenda**
• Virtual Memory
• Page Tables
• Administrivia
• Translation Lookaside Buffer (TLB)
• VM Performance
• VM Wrap-up

• Divide memory into equal-sized chunks, or pages (about 4 KiB - 8 KiB)
• Any chunk of virtual memory can be assigned to any chunk (“page”) of physical memory

**Mapping VM to PM**
(Diagram: an unbounded virtual address space, including heap and static regions, mapped onto 64 MB of physical memory.)

**Virtual Memory Mapping Function**
• How large is main memory? Disk?
  – Don’t know! Designed to be interchangeable components
  – Need a system that works regardless of sizes
• Use a lookup table (the page table) to handle the arbitrary mapping
  – Index the lookup table by the number of pages in VM (not all entries will be used/valid)
  – The size of PM will affect the size of the stored translation

**Address Mapping: Page Table**
• **Page Table functionality:**
  – The incoming request is a Virtual Address (VA); we want a Physical Address (PA)
  – Physical offset = virtual offset (page-aligned)
  – So just swap the Virtual Page Number (VPN) for the Physical Page Number (PPN)

VA: | Virtual Page # | Page Offset | → PA: | Physical Page # | Page Offset |

• **Implementation?**
  – Use the VPN as an index into the PT
  – Store the PPN and management bits (Valid, Access Rights)
  – The PT does NOT store the actual data (the data sits in PM)

**Page Table Layout**
Virtual Address: | VPN | Offset |
1) Index into the PT using the VPN (each PT entry holds | V | AR | PPN |)
2) Check the Valid and Access Rights bits
3) Combine the PPN and the offset
4) Use the resulting Physical Address to access memory

**Page Table Entry Format**
• Contains either the PPN or an indication that the page is not in main memory
• **Valid** = valid page table entry
  – 1 → the virtual page is in physical memory
  – 0 → the OS needs to fetch the page from disk
• **Access Rights** are checked on every access to see if it is allowed (provides protection)
  – *Read Only*: can read, but not write, the page
  – *Read/Write*: can read or write data on the page
  – *Executable*: can fetch instructions from the page

**Page Tables (1/2)**
• A page table (PT) contains the mapping of virtual addresses to physical addresses
• Page tables are located in main memory
  – Why?
  – Too large to fit in registers ($2^{20}$ entries for 4 KiB pages)
  – Faster to access than disk, and can be shared by multiple processors
• The OS maintains the PTs
  – Each process has its own page table
    • The “state” of a process is its PC, all registers, and its PT
  – The OS stores the address of the PT of the current process in the Page Table Base Register

**Page Tables (2/2)**
• *Solves the fragmentation problem*: all pages are the same size, so all available slots can be utilized
• The OS must reserve “*swap space*” on disk for each process
  – Running programs requires hard drive space!
• To grow, a process asks the Operating System
  – If there are unused pages in PM, the OS uses them first
  – If not, the OS swaps some old pages (LRU) to disk

**Paging/Virtual Memory: Multiple Processes**
(Diagram: User A’s and User B’s virtual memories (stack, static, code), each starting at address 0 and mapped through its own page table, Page Table A and Page Table B, into 64 MB of shared physical memory.)
8/05/2013 Summer 2013 -- Lecture #24

**Review: Paging Terminology**
• Programs use *virtual addresses (VAs)*
  – The space of all virtual addresses is called *virtual memory (VM)*
  – Divided into pages indexed by *virtual page number (VPN)*
• Main memory is indexed by *physical addresses (PAs)*
  – The space of all physical addresses is called *physical memory (PM)*
  – Divided into pages indexed by *physical page number (PPN)*

**Question:** How many bits wide are the VPN and PPN fields, given:
- 16 KiB pages
- 40-bit virtual addresses
- 64 GiB physical memory

<table>
<thead>
<tr>
<th></th>
<th>VPN</th>
<th>PPN</th>
</tr>
</thead>
<tbody>
<tr>
<td>(A)</td>
<td>26</td>
<td>26</td>
</tr>
<tr>
<td>(B)</td>
<td>24</td>
<td>20</td>
</tr>
<tr>
<td>(C)</td>
<td>22</td>
<td>22</td>
</tr>
<tr>
<td>(D)</td>
<td>26</td>
<td>22</td>
</tr>
</tbody>
</table>

**Agenda**
• Virtual Memory
• Page Tables
• Administrivia
• Translation Lookaside Buffer (TLB)
• VM Performance
• VM Wrap-up

**Administrivia (1/2)**
• Project 3 (individual) due Sunday 8/11
• Final Review – Tue 8/13, 7-10pm in 10 Evans
• Final – Fri 8/16, 9am-12pm, 155 Dwinelle
  – 2nd half material + self-modifying MIPS
  – MIPS Green Sheet provided again
  – Two two-sided handwritten cheat sheets
    • Can re-use your midterm cheat sheet!

**Administrivia (2/2)**
“Only the bolded text in the notes will be on the test”
Everything is bolded
http://www.geekosystem.com/engineering-professor-meme/2/

**Agenda**
- Virtual Memory
- Page Tables
- Administrivia
- Translation Lookaside Buffer (TLB)
- VM Performance
- VM Wrap-up

**Retrieving Data from Memory**
1) Access the page table for address translation
2) Access the correct physical address
Requires two accesses of physical memory!

**Virtual Memory Problem**
• 2 physical memory accesses per data access = SLOW!
• Since there is locality in pages of data, there must be locality in the translations of those pages
• Build a separate cache for the page table
  – For historical reasons, this cache is called a Translation Lookaside Buffer (TLB)
  – Notice that what is stored in the TLB is NOT data, but the VPN → PPN mapping translations

**TLBs vs. Caches**
- TLBs are usually small, typically 16 - 512 entries
- TLB access time is comparable to the cache’s (« main memory)
- TLBs are usually fully/highly associative

**Where Are TLBs Located?**
- Which should we check first: the cache or the TLB?
- Can the cache hold the requested data if the corresponding page is not in physical memory? **No**
- With the TLB first, does the cache receive a VA or a PA?
Notice that it is now the TLB that does the translation, not the page table! (The cache receives a PA.)

**Address Translation Using TLB**
(Diagram: the virtual address is split into TLB tag, TLB index, and page offset; the VPN is looked up in the TLB to produce the PPN, and the physical address (PPN plus page offset) is then split into tag/index/offset for the data cache. Note: the TIO breakdowns of the VA and PA are unrelated.)

**Typical TLB Entry Format**

<table>
<thead>
<tr>
<th>Valid</th>
<th>Dirty</th>
<th>Ref</th>
<th>Access Rights</th>
<th>TLB Tag</th>
<th>PPN</th>
</tr>
</thead>
<tbody>
<tr>
<td>X</td>
<td>X</td>
<td>X</td>
<td>XX</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>

- **Valid** and **Access Rights**: same usage as previously discussed for page tables
- **Dirty**: we basically always use write-back, so this indicates whether or not to write the page to disk when it is replaced
- **Ref**: used to implement LRU
  - Set when the page is accessed, cleared periodically by the OS
  - If Ref = 1, then the page was referenced recently
- **TLB Index**: VPN mod (# TLB sets)
- **TLB Tag**: VPN minus TLB Index (the upper bits)

**Question:** How many bits wide are the TLB Tag, the TLB Index, and a full TLB entry, given:
- 16 KiB pages
- 40-bit virtual addresses
- 64 GiB physical memory
- 2-way set associative TLB with 512 entries

<table>
<thead>
<tr>
<th></th>
<th>TLB Tag</th>
<th>TLB Index</th>
<th>TLB Entry</th>
</tr>
</thead>
<tbody>
<tr>
<td>(A)</td>
<td>12</td>
<td>14</td>
<td>38</td>
</tr>
<tr>
<td>(B)</td>
<td>18</td>
<td>8</td>
<td>45</td>
</tr>
<tr>
<td>(C)</td>
<td>14</td>
<td>12</td>
<td>40</td>
</tr>
<tr>
<td>(D)</td>
<td>17</td>
<td>9</td>
<td>43</td>
</tr>
</tbody>
</table>

**Fetching Data on a Memory Read**
1) Check the TLB (input: VPN; output: PPN)
   - **TLB Hit**: fetch the translation, return the PPN
   - **TLB Miss**: check the page table (in memory)
     • **Page Table Hit**: load the page table entry into the TLB
     • **Page Table Miss (Page Fault)**: fetch the page from disk to memory, update the corresponding page table entry, then load the entry into the TLB
2) Check the cache (input: PPN; output: data)
   - **Cache Hit**: return the data value to the processor
   - **Cache Miss**: fetch the data value from memory, store it in the cache, and return it to the processor

**Page Faults**
• Load the page off the disk into a free page of memory
  – Switch to some other process while we wait
• An interrupt is thrown when the page has been loaded and the process’s page table updated
  – When we switch back to the task, the desired data will be in memory
• If memory is full, replace a page (LRU), writing it back if necessary, and update both page table entries
  – Continuous swapping between disk and memory is called “thrashing”

**Performance Metrics**
• VM performance also uses hit/miss rates and miss penalties
  – *TLB Miss Rate*: the fraction of TLB accesses that result in a TLB miss
  – *Page Table Miss Rate*: the fraction of PT accesses that result in a page fault
• Caching performance definitions remain the same
  – These are somewhat independent, as the TLB always passes a PA to the cache regardless of TLB hit or miss

**Data Fetch Scenarios**
Are the following scenarios for a single data access possible?
- TLB Miss, Page Fault: Yes
- TLB Hit, Page Table Hit: No
- TLB Miss, Cache Hit: Yes
- Page Table Hit, Cache Miss: Yes
- Page Fault, Cache Hit: No

**Question:** A program tries to load a word at X that causes a TLB miss but not a page fault. Are the following statements TRUE or FALSE?
1) The page table does not contain a valid mapping for the virtual page corresponding to the address X
2) The word that the program is trying to load is present in physical memory

(A) F F  (B) F T  (C) T F  (D) T T

**Updating Scenarios**
• Use $V = \text{valid}$, $D = \text{dirty}$, and $R = \text{ref}$ to mean that the field is set to the shown value for any entry in either the PT or the TLB.
• Which of the following scenarios for a single data access are possible?
- Read, $D = 1$: No
- Write, $R = 1$: Yes
- Read, $V = 0$: Yes
- Write, $D = 0$: No

**Get To Know Your Staff**
- Category: Wishlist

**Agenda**
• Virtual Memory
• Page Tables
• Administrivia
• Translation Lookaside Buffer (TLB)
• VM Performance
• VM Wrap-up

**VM Performance**
• Virtual memory is the level of the memory hierarchy that sits *below* main memory
  – The TLB comes *before* the cache, but affects the transfer of data from disk to main memory
  – Previously we assumed main memory was the lowest level; now we just have to account for disk accesses
• The same CPI and AMAT equations apply, but now treat main memory like a mid-level cache

**Typical Performance Stats**

| Caching | Demand paging |
|---|---|
| cache entry | page frame |
| cache block (~32 bytes) | page (~4 KiB) |
| cache miss rate (1% to 20%) | page miss rate (< 0.001%) |
| cache hit (~1 cycle) | page hit (~100 cycles) |
| cache miss (~100 cycles) | page fault (~5M cycles) |

**Impact of Paging on AMAT (1/2)**
• Memory parameters:
  – L1 cache hit = 1 clock cycle, hits 95% of accesses
  – L2 cache hit = 10 clock cycles, hits 60% of L1 misses
  – DRAM = 200 clock cycles ($\approx$ 100 nanoseconds)
  – Disk = 20,000,000 clock cycles ($\approx$ 10 milliseconds)
• Average Memory Access Time (no paging):
  – $1 + 5\% \times 10 + 5\% \times 40\% \times 200 = 5.5$ clock cycles
• Average Memory Access Time (with paging):
  – 5.5 (AMAT with no paging) + ?

**Impact of Paging on AMAT (2/2)**
• Average Memory Access Time (with paging):
  \[5.5 + 5\% \times 40\% \times (1 - HR_{Mem}) \times 20{,}000{,}000\]
• AMAT if \(HR_{Mem} = 99\%\)?
  \[5.5 + 0.02 \times 0.01 \times 20{,}000{,}000 = 4005.5 \approx 728\times \text{ slower}\]
  – 1 in 20,000 memory accesses goes to disk: a 10-second program takes 2 hours!
• AMAT if \(HR_{Mem} = 99.9\%\)?
  \[5.5 + 0.02 \times 0.001 \times 20{,}000{,}000 = 405.5\]
• AMAT if \(HR_{Mem} = 99.9999\%\)?
  \[5.5 + 0.02 \times 0.000001 \times 20{,}000{,}000 = 5.9\]

**Impact of TLBs on Performance**
• Each TLB miss to the page table is comparable to an L1 cache miss
• TLB Reach: the amount of virtual address space that can be simultaneously mapped by the TLB
  – A TLB typically has 128 entries for a page size of 4-8 KiB
  – $128 \times 4\text{ KiB} = 512\text{ KiB} =$ just 0.5 MiB
• What can you do to get better performance?
  – Multi-level TLBs (conceptually the same as multi-level caches)
  – Variable page size (segments)
  – Special, situationally-used “superpages”
  – (Not covered here)

**Agenda**
• Virtual Memory
• Page Tables
• Administrivia
• Translation Lookaside Buffer (TLB)
• VM Performance
• VM Wrap-up

**Virtual Memory Motivation**
• Memory acts as a cache for disk (reduces disk accesses)
  – Disk is so slow that it significantly affects performance
  – Paging maximizes memory usage with large, evenly-sized pages that can go anywhere
• Allows the processor to run multiple processes simultaneously
  – Gives each process the illusion of its own (large) VM
  – Each process uses a standard set of VAs
  – Access rights provide *protection*

**Paging Summary**
• Paging requires address *translation*
  – Can run programs larger than main memory
  – Hides variable machine configurations (RAM/HD)
  – Solves the fragmentation problem
• Address mappings are stored in page tables in memory
  – The additional memory access is mitigated with a TLB
  – Check the TLB, then the page table (if necessary), then the cache

**Hardware/Software Support for Memory Protection**
• Different tasks can share parts of their virtual address spaces
  – But we need to protect against errant accesses
  – Requires OS assistance
• Hardware support for OS protection
  – Privileged supervisor mode (a.k.a. *kernel mode*)
  – Privileged instructions
  – Page tables and other state information accessible only in supervisor mode
  – System call exception (e.g. *syscall* in MIPS)

**Context Switching**
• How does a single processor run many programs at once?
• *Context switch*: changing the internal state of the processor (switching between processes)
  – Save the register values (and the PC) and change the value in the Page Table Base Register
• What happens to the TLB?
  – The current entries are for a different process (similar VAs, though!)
  – Set all entries to invalid on a context switch

**Summary**
• User program view:
  – Contiguous memory, starting from some set VA
  – “Infinitely” large
  – Is the only running program
• Reality:
  – Non-contiguous memory, starting wherever available memory is
  – Finite size
  – Many programs running simultaneously
• Virtual memory provides:
  – The illusion of contiguous memory
  – All programs starting at the same set address
  – The illusion of ~infinite memory ($2^{32}$ or $2^{64}$ bytes)
  – Protection and sharing
• Implementation:
  – Divide memory into chunks (pages)
  – The OS controls the page table that maps virtual into physical addresses
  – Memory acts as a cache for disk
  – The TLB is a cache for the page table
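The worked examples above (the VPN/PPN field widths, the TLB field widths, and the paging AMAT numbers) can be double-checked with a short script. This is an illustrative sketch, not part of the original slides; the helper names are my own.

```python
# Check the lecture's worked examples: address-field widths and AMAT.

def log2i(x):
    # Integer log base 2 (x must be a power of two).
    return x.bit_length() - 1

KiB, GiB = 2**10, 2**30

# Question 1: 16 KiB pages, 40-bit VAs, 64 GiB physical memory.
offset_bits = log2i(16 * KiB)              # 14
vpn_bits = 40 - offset_bits                # 26
ppn_bits = log2i(64 * GiB) - offset_bits   # 36 - 14 = 22
print(vpn_bits, ppn_bits)                  # answer (D): 26, 22

# Question 2: add a 2-way set-associative TLB with 512 entries.
tlb_sets = 512 // 2
tlb_index_bits = log2i(tlb_sets)                 # 8
tlb_tag_bits = vpn_bits - tlb_index_bits         # 18
# Entry = Valid + Dirty + Ref + 2 Access Rights bits + tag + PPN
tlb_entry_bits = 1 + 1 + 1 + 2 + tlb_tag_bits + ppn_bits   # 45
print(tlb_tag_bits, tlb_index_bits, tlb_entry_bits)        # answer (B): 18, 8, 45

# AMAT with paging, using the slides' parameters:
# L1 hit 1 cycle (95%), L2 hit 10 cycles (60% of L1 misses),
# DRAM 200 cycles, disk 20,000,000 cycles.
def amat(hr_mem):
    no_paging = 1 + 0.05 * 10 + 0.05 * 0.40 * 200   # 5.5 cycles
    return no_paging + 0.05 * 0.40 * (1 - hr_mem) * 20_000_000

for hr in (0.99, 0.999, 0.999999):
    print(hr, amat(hr))   # approx. 4005.5, 405.5, 5.9
```

Note how sensitive AMAT is to the memory hit rate: even one disk access in a million still adds a visible fraction of a cycle, which is why thrashing is catastrophic.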
{"Source-Url": "http://inst.eecs.berkeley.edu/~cs61c/su13/lec/24/24LecSu13VM.pdf", "len_cl100k_base": 4320, "olmocr-version": "0.1.53", "pdf-total-pages": 47, "total-fallback-pages": 0, "total-input-tokens": 85117, "total-output-tokens": 5941, "length": "2e12", "weborganizer": {"__label__adult": 0.0004229545593261719, "__label__art_design": 0.0004756450653076172, "__label__crime_law": 0.00035119056701660156, "__label__education_jobs": 0.004940032958984375, "__label__entertainment": 0.00010097026824951172, "__label__fashion_beauty": 0.00021386146545410156, "__label__finance_business": 0.00031304359436035156, "__label__food_dining": 0.00041413307189941406, "__label__games": 0.0011911392211914062, "__label__hardware": 0.01739501953125, "__label__health": 0.0005340576171875, "__label__history": 0.0003707408905029297, "__label__home_hobbies": 0.0003581047058105469, "__label__industrial": 0.0015649795532226562, "__label__literature": 0.00026488304138183594, "__label__politics": 0.00021028518676757812, "__label__religion": 0.00055694580078125, "__label__science_tech": 0.127685546875, "__label__social_life": 0.00013399124145507812, "__label__software": 0.0175628662109375, "__label__software_dev": 0.8232421875, "__label__sports_fitness": 0.0005083084106445312, "__label__transportation": 0.0009531974792480468, "__label__travel": 0.0002267360687255859}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15042, 0.03423]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15042, 0.42243]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15042, 0.75486]], "google_gemma-3-12b-it_contains_pii": [[0, 40, false], [40, 391, null], [391, 500, null], [500, 691, null], [691, 1122, null], [1122, 1396, null], [1396, 1518, null], [1518, 1735, null], [1735, 2107, null], [2107, 2593, null], [2593, 2863, null], [2863, 3290, null], [3290, 3790, null], [3790, 4153, 
null], [4153, 4365, null], [4365, 4742, null], [4742, 4986, null], [4986, 5108, null], [5108, 5443, null], [5443, 5600, null], [5600, 5722, null], [5722, 5875, null], [5875, 6266, null], [6266, 6432, null], [6432, 6715, null], [6715, 6914, null], [6914, 7532, null], [7532, 8083, null], [8083, 8632, null], [8632, 9062, null], [9062, 9439, null], [9439, 9672, null], [9672, 10026, null], [10026, 10358, null], [10358, 10403, null], [10403, 10525, null], [10525, 10895, null], [10895, 11184, null], [11184, 11662, null], [11662, 12182, null], [12182, 12676, null], [12676, 12798, null], [12798, 13214, null], [13214, 13557, null], [13557, 13995, null], [13995, 14389, null], [14389, 15042, null]], "google_gemma-3-12b-it_is_public_document": [[0, 40, true], [40, 391, null], [391, 500, null], [500, 691, null], [691, 1122, null], [1122, 1396, null], [1396, 1518, null], [1518, 1735, null], [1735, 2107, null], [2107, 2593, null], [2593, 2863, null], [2863, 3290, null], [3290, 3790, null], [3790, 4153, null], [4153, 4365, null], [4365, 4742, null], [4742, 4986, null], [4986, 5108, null], [5108, 5443, null], [5443, 5600, null], [5600, 5722, null], [5722, 5875, null], [5875, 6266, null], [6266, 6432, null], [6432, 6715, null], [6715, 6914, null], [6914, 7532, null], [7532, 8083, null], [8083, 8632, null], [8632, 9062, null], [9062, 9439, null], [9439, 9672, null], [9672, 10026, null], [10026, 10358, null], [10358, 10403, null], [10403, 10525, null], [10525, 10895, null], [10895, 11184, null], [11184, 11662, null], [11662, 12182, null], [12182, 12676, null], [12676, 12798, null], [12798, 13214, null], [13214, 13557, null], [13557, 13995, null], [13995, 14389, null], [14389, 15042, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15042, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15042, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15042, null]], "pdf_page_numbers": [[0, 40, 1], [40, 391, 2], [391, 500, 3], [500, 691, 4], [691, 1122, 5], [1122, 1396, 6], [1396, 1518, 7], [1518, 1735, 8], [1735, 2107, 9], [2107, 2593, 10], [2593, 2863, 11], [2863, 3290, 12], [3290, 3790, 13], [3790, 4153, 14], [4153, 4365, 15], [4365, 4742, 16], [4742, 4986, 17], [4986, 5108, 18], [5108, 5443, 19], [5443, 5600, 20], [5600, 5722, 21], [5722, 5875, 22], [5875, 6266, 23], [6266, 6432, 24], [6432, 6715, 25], [6715, 6914, 26], [6914, 7532, 27], [7532, 8083, 28], [8083, 8632, 29], [8632, 9062, 30], [9062, 9439, 31], [9439, 9672, 32], [9672, 10026, 33], [10026, 10358, 34], [10358, 10403, 35], [10403, 10525, 36], [10525, 10895, 37], [10895, 11184, 38], [11184, 11662, 39], [11662, 12182, 40], [12182, 12676, 41], [12676, 12798, 42], [12798, 13214, 43], [13214, 13557, 44], [13557, 13995, 45], [13995, 14389, 46], [14389, 15042, 47]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15042, 0.05808]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
fbd43a62c9bb84873b140328c0ab16aac3089de3
# C++ IS schedule

**Document Number:** P1000R2
**Date:** 2018-07-29
**Reply-to:** Herb Sutter (hsutter@microsoft.com)
**Audience:** WG21

---

## R2: Added procedure for approving schedule exceptions, and FAQs.

---

## IS schedule

The following is the current schedule for the C++ IS, approved by WG21 unanimous consent in Jacksonville (2018-03).

<table>
<thead>
<tr>
<th>Year/Meeting</th>
<th>Event</th>
</tr>
</thead>
<tbody>
<tr>
<td>2017.2 – Toronto</td>
<td>First meeting of C++20</td>
</tr>
<tr>
<td>2017.3 – Albuquerque</td>
<td>Try to front-load “big” language features including ones with broad library impact (incl. try to merge TSes here)</td>
</tr>
<tr>
<td>2018.1 – Jacksonville</td>
<td>EWG: Last meeting for new C++20 language proposals we haven’t seen before<br>EWG → LEWG: Last meeting to approve C++20 features needing library response<br>LEWG: Focus on progressing papers on how to react to new language features</td>
</tr>
<tr>
<td>2018.2 – Rapperswil</td>
<td>C++20 design is feature-complete</td>
</tr>
<tr>
<td>2018.3 – San Diego</td>
<td>EWG: Last meeting for new C++20 language proposals we haven’t seen before<br>EWG → LEWG: Last meeting to approve C++20 features needing library response<br>LEWG: Focus on progressing papers on how to react to new language features</td>
</tr>
<tr>
<td>2019.1 – Kona</td>
<td>C++20 design is feature-complete</td>
</tr>
<tr>
<td>2019.2 – Cologne</td>
<td>CWG+LWG: Complete CD wording<br>EWG+LEWG: Working on C++23 features + CWG/LWG design clarification questions<br>C++20 draft wording is feature complete, start CD ballot</td>
</tr>
<tr>
<td>2019.3 – Belfast</td>
<td>CD ballot comment resolution</td>
</tr>
<tr>
<td>2020.1 – Prague</td>
<td>CD ballot comment resolution<br>C++20 technically finalized, start DIS ballot</td>
</tr>
</tbody>
</table>

## When a proposal is (or may be) “late”

In cases where we receive a proposal that may be late (comes after some deadline on this schedule, such as proposing something that may be considered a new feature request after the “feature-complete” deadline), exceptions to this schedule can be approved by strong consensus at both the design subgroup and WG21 levels.
If we receive a proposal that at least one WG21 national body expert thinks is “too late,” then the following procedure applies.

In EWG/LEWG subgroups, when handling such a proposal:

- The group will first take a procedural poll on whether they have strong consensus that the proposal can be considered for the current IS cycle, where experts may vote “favor” either because they think it is not actually after a deadline (e.g., it is a bug fix, not a new feature request past the feature-complete deadline) or because they think it is worth making a past-deadline exception to the schedule. We look for strong consensus because if the subgroup itself does not have strong consensus, then the proposal is unlikely to achieve strong consensus in plenary for the same procedural exception.
- If that poll succeeds, the group then continues with normal technical discussion about the proposal for this IS cycle, but again requires strong consensus to approve a change. Otherwise, the group can continue with normal technical discussion about it, but for some target ship vehicle that is not this IS cycle.
- In subgroups, “strong consensus” means 3:1 #favor:#against and more than half of total votes in favor (greater than the usual 2:1).

In WG21 plenary, when the subgroups bring a poll to plenary to adopt such a proposal for this cycle:

- The group will first take a procedural poll on whether they have strong consensus, in terms of both individual WG21 expert positions and national body positions, that the proposal can be considered for the current IS cycle, where experts/nations may vote “favor” either because they think it is not actually after a deadline (e.g., it is a bug fix, not a new feature request past the feature-complete deadline) or because they think it is worth making a past-deadline exception to the schedule.
- If that poll succeeds, the group then continues to take the normal technical adoption poll for this IS cycle, but again requires strong consensus to approve a change.
Otherwise, the poll is struck (not taken).

- In WG21 plenary, “strong consensus” means 4:1 #favor:#against and more than half of total votes in favor (greater than the usual 3:1).

## FAQs

**(FAQ in year 20<NN-1>) There are bugs in the draft standard, should we delay C++<NN>?**

Of course, and no. Fixing bugs is the purpose of the final year, and it’s why this schedule set the feature-freeze deadline for C++<NN> in early 20<NN-1> (a year before): to leave a year to get a round of international comments and apply that review feedback along with any other issue resolutions and bug fixes.

**If we had just another meeting or two, we could add <feature>, which is almost ready, so should we delay C++<NN>?**

Of course, and no. Just wait a couple more meetings and C++<NN+3> will be open for business, and <feature> can be the first thing voted into the C++<NN+3> working draft. For example, that’s what we did with concepts; it was not quite ready to be rushed from its TS straight into C++17, so the core feature was voted into draft C++20 at the first meeting of C++20 (Toronto), leaving plenty of time to refine and adopt the remaining controversial part of the TS that needed a little more bake time (the non-“template” syntax), which was adopted the following year (San Diego). Now we have the whole thing.

**This feels overly strict. Why do we ship IS releases at fixed time intervals (three years)?**

Because it’s one of only two basic project-management options to release the C++ IS, and experience has demonstrated that it’s better than the other option.

**What are the two project-management options to release the C++ IS?**

I’m glad you asked. There are two basic release-target choices: pick the features, or pick the release time, and whichever you pick means relinquishing control over determining the other. It is not possible to control both at once.
They can be summarized as follows: <table> <thead> <tr> <th>If we choose to control this</th> <th>We give up control of this</th> <th>Can we work on “big” many-year features?</th> <th>When do we merge features into the IS working draft?</th> <th>What do we do if we find problems with a merged feature?</th> </tr> </thead> <tbody> <tr> <td>“What”: The features we ship</td> <td>“When”: The release time</td> <td>Yes, in proposal papers and the IS working draft</td> <td>Typically earlier, to get more integration testing ⇒ lowers average working draft stability</td> <td>Delay the standard</td> </tr> <tr> <td>“When”: The release time</td> <td>“What”: The features we ship</td> <td>Yes, in proposal papers and TS “feature branches”</td> <td>Typically later, when the feature is more baked ⇒ increases average working draft stability</td> <td>Pull the feature out, can merge it again when it’s ready on the next IS “train” to leave the station</td> </tr> </tbody> </table> Elaborating: (1) “What”: Pick the features, and ship when they’re ready; you don’t get to pick the release time. If you discover that a feature in the draft standard needs more bake time, you delay the world until it’s ready. You work on big long-pole features that require multiple years of development by making a release big enough to cover the necessary development time, then try to stop working on new features entirely while stabilizing the release (a big join point). This was the model for C++98 (originally expected to ship around 1994; Bjarne originally said if it didn’t ship by about then it would be a failure) and C++11 (called 0x because x was expected to be around 7). This model “left the patient open” for indeterminate periods and led to delayed integration testing and release. 
It led to great uncertainty in the marketplace wondering when the committee would ship the next standard, or even if it would ever ship (yes, among the community, the implementers, and even within the committee, some had serious doubts in both 1996 and 2009 whether we would ever ship the respective release). During this time, most compilers were routinely several years behind implementing the standard, because who knew how many more incompatible changes the committee would make while tinkering with the release, or when it would even ship? This led to wide variation and fragmentation in the C++ support of compilers available to the community. Why did we do that? Because we were inexperienced and optimistic: (1) is the road paved with the best of intentions. In 1994/5/6, and again in 2007/8/9, we really believed that if we just slipped another meeting or three we’d be done, and each time we ended up slipping up to four years. We learned the hard way that there’s really no such thing as slipping by one year, or even two. Fortunately, this has changed, with option (2)... (2) “When”: Pick the release time, and ship what features are ready; you don’t get to pick the feature set. If you discover that a feature in the draft standard needs more bake time, you yank it and ship what’s ready. You can still work on big long-pole features that require multiple releases’ worth of development time, by simply doing that work off to the side in “branches,” and merging them to the trunk/master IS when they’re ready, and you are constantly working on features because every feature’s development is nicely decoupled from an actual ship vehicle until it’s ready (no big join point). This has been the model since 2012, and we don’t want to go back. It “closes the patient” regularly and leads to sustaining higher quality by forcing regular integration and not merging work into the IS draft until it has reached a decent level of stability, usually in a feature branch. 
It also creates a predictable ship cycle for the industry to rely on and plan for. During this time, compilers have been shipping conforming implementations sooner and sooner after each standard (which had never happened before), and in 2020 we expect multiple fully conforming implementations in the same year the standard is published (which has never happened before). This is nothing but goodness for the whole market – implementers, users, educators, everyone.

Also, note that since we went to (2), we’ve also been shipping more work (as measured by big/medium/small feature counts) at higher quality (as measured by a sharp reduction in defect reports and comments on review drafts of each standard), while shipping whatever is ready (and if anything isn’t, deferring just that).

**How serious are we about (2)? What if a major feature by a prominent committee member was “almost ready”… we’d be tempted to wait then, wouldn’t we?**

Very serious, and no. We have historical data: in Jacksonville 2016, at the feature cutoff for C++17, Bjarne Stroustrup made a plea in plenary for including concepts in C++17. When it failed to get consensus, Stroustrup was directly asked if he would like to delay C++17 for a year to get concepts in. Stroustrup said No without any hesitation or hedging, and added that C++17 without concepts was more important than a C++18 or possibly C++19 with concepts, even though Stroustrup had worked on concepts for about 15 years. The real choice then was between: (2) shipping C++17 without concepts and then C++20 with concepts (which we did), or (1) renaming C++17 to C++20, which is equivalent to (2) except for skipping C++17 and not shipping what was already ready for C++17.

**Why not every { one, two, four } years?**

We find three years to be a good balance, and two years is the effective minimum in the ISO process.
What about something between (1) and (2), say do basically (2) but with “a little” schedule flexibility to take “a little” extra time when we feel we need it for a feature? No, because that would be (1). The ‘mythical small slip’ was explained by Fred Brooks in The Mythical Man-Month, with the conclusion: “Take no small slips.” For a moment, imagine we did slip C++<NN>. The reality, regardless of any valiant efforts to live in denial of it, is that we would be switching from (2) back to (1). If we decided to delay C++<NN> for more fit-and-finish, we would delay the standard by at least two years. There is no such thing as a one-meeting or three-meeting slip, because during this time other people will continue to (rightly) say “well my feature only needs one more meeting too, since we’re slipping a meeting let’s add that too.” And once we slip at least two years, we’re saying that C++<NN> becomes C++<NN+2> or more likely C++<NN+3>... but we’re already going to ship C++<NN+3>! So we’d still be shipping C++<NN+3> on either plan, and the only difference is that we’re not shipping C++<NN> in the meantime with the large amount of work that’s stable and ready to go, and making the world wait three more years for it. And gratuitously so, because the delay will not benefit those baked features, which is most or all of them. So the suggestion to “slip C++<NN>” amounts to “rename C++<NN> to C++<NN+2> or C++<NN+3>,” and the simple answer is “yes, we’re going to have C++<NN+3> too, but in addition to C++<NN> and not instead of it.” But feature X is broken / needs more bake time than we have bugfix time left in C++20! No problem! We can just pull it, as we did with draft C++11 concepts and with draft C++20 contracts. Under our current plan (2), someone needs to write a paper aimed at EWG or LEWG (as appropriate) that shows the problem and/or the future doors we’re closing, and proposes removing it from the IS working draft.
Those groups will consider it, and if they decide the feature is broken (and plenary agrees), that’s fine, the feature will be delayed to C++next. But under plan (1), we would be delaying, not only that feature, but the entire feature set of C++<NN> to C++<NN+3>. That would be... excessive. Does (2) mean “major/minor” releases? No. We said that at first, before we understood that (2) really simply means you don’t get to pick the feature set, not even at a “major/minor” granularity. Model (2) simply means “ship what’s ready.” That leads to releases that are:
- similarly sized (aka regular medium-sized) for “smaller” features, because those tend to take shorter lead times (say, < 3 years each) and so generally we see similar numbers completed per release; and
- variable sized (aka lumpy) for “bigger” features that take longer lead times (say, > 3 years each): each IS release gets whichever of those mature to the point of becoming ready to merge during that IS’s time window, so sometimes there will be more than others.
So C++14 and C++17 were relatively small, because a lot of the standardization work during that time was taking place in long-pole features that lived in proposal papers (e.g., contracts) and TS “feature branches” (e.g., concepts). C++20 is a big release, so didn’t we cram a lot into a three-year cycle for C++20? No, see “lumpy” above. C++20 is big, not because we did more work in those three years, but because many long-pole items (including at least two that have been worked on in their current form since 2012, off to the side as P-proposals and TS “branches”) happened to mature and get consensus to merge into the IS draft in the same release cycle. It has pretty much always been true that major features take many years.
The main difference between plan (1) for C++98 and C++11 and plan (2) now is: in C++98 and C++11 we held the standard until they were all ready; now we still ship those big ones once they’re ready, but we also ship other things that are ready in the meantime instead of going totally dark. C++20 is the same three-year cycle as C++14 and C++17; it’s not that we did more in these three years than in the previous two three-year cycles, it’s just that more long-pole major features that we invested in during the C++14 and C++17 cycles became ready to merge for C++20, so we’re actually reaping the work we started investing in during other cycles. And if any really are unready, fine, we can just pull them again and let them bake more for C++23. If there is such a feature, we need that to be explained in a paper that proposes pulling it, and why, for it to be actionable. I think the right way to think about it is that C++14+17+20 taken as a whole is our third nine-year cycle (2011-2020), after C++98 (1989-1998) and C++11 (2002-2011), but because we were on plan (2) we also shipped the parts that were ready at the three- and six-year points. Isn’t it better to catch bugs while the product is in development, vs. after it has been released to customers? Of course. But if we’re talking about that as a reason to delay the C++ standard, the question implies two false premises: (a) it assumes the features haven’t been released and used before the standard ships (many already have production usage experience); and (b) it assumes all the features can be used together before the standard ships (they can’t). Elaborating: Re (a): Most major C++20 features have been implemented in essentially their current draft standard form in at least one shipping compiler, and in most cases actually used in production code (i.e., they have already been released to customers who are very happy with them).
For example, coroutines (adopted only five months ago as of this writing) has been used in production in MSVC for two years and in Clang for at least a year with very happy customers at scale (e.g., Azure, Facebook). Re (b): The reality is that we aren’t going to find many feature interaction problems until users are using the features in production, which generally means until after the standard ships, because implementers will generally wait until the standard ships to implement most things. That’s why when we show any uncertainty about when we ship, what generally happens is that implementations wait: oh, they’ll implement a few things, but they will hit Pause on implementing the whole thing until they know we’re ready to set it in stone. Ask <favorite compiler> team what happened when they implemented <major feature before it was in a published standard>. In a number of cases, they had to implement it more than once, and break customers more than once. So it’s reasonable for implementers to wait for the committee to ship. Finally, don’t forget the feature interaction problem. In addition to shipping when we are ready, we need time after we ship to find and fix problems with interactions among features, and to add support for such interactions, which we typically cannot know about before new features see widespread use. No matter how long we delay the standard, there will be interactions we can’t discover until much later. The key is to manage that risk with design flexibility to adjust the features in a compatible way, not to wait until all risk is gone. The standard is never perfect... don’t we ship mistakes? Yes. If we see a feature that’s not ready, yes we should pull it. If we see a feature that could be better, but we know that the change can be done in a backward-compatible way, that’s not a reason to not ship it now; it can be fixed compatibly in C++next.
We do intentionally ship features we plan to further improve, as long as we have good confidence we can do so in a backward-compatible way. But shouldn’t we aim to minimize shipping mistakes? Yes. We do aim for that. However, we don’t aim to eliminate all risk. There is also a risk and (opportunity) cost to not shipping something we think is ready. So far, we’ve been right most of the time. Are we sure that our quality now is better than when we were on plan (1)? Yes. By objective metrics, notably national body comment volume and defect reports, C++14 and C++17 have been our most stable releases ever, each about 3-4× better on those metrics than C++98 or C++11. And the reason is because we ship regularly, and put big items into TS branches first (including full wording on how they integrate with the trunk standard) and merge them later when we know they’re more baked. In fact, since 2012 the core standard has always been maintained in a near-ship-ready state (so that even our working drafts are at least as high quality as the shipped C++98 and C++11 standards). That never happened before 2012, when we would often keep the patient open with long issues lists and organs lying around nearby that we meant to put back soon; now we know we can meet the schedule at high quality because we always stay close to a ship-ready state. If we wanted to, we could have shipped the CD even without the work of the Cologne meeting and still been much higher quality than C++98’s or C++11’s CDs (and, probably, higher quality than their published standards). Given that C++98 and C++11 were successful, recognizing that we’re now at strictly higher quality than that all the time means we’re in a pretty good place. C++98 and C++11 each took about 9 years and were pretty good products... and C++14 and C++17 were minor releases, and C++20 is major?
Again, I think the right comparable is C++14+17+20 taken as a whole: That is our third nine-year cycle, but because we were on plan (2) we also shipped the parts that were ready at the three- and six-year points. Does (2) allow making feature-based targets like P0592 for C++next? Sure! As long as it doesn’t contain words like “must include these features,” because that would be (1). (The R2 revision of paper P0592 is expected to make this correction.) Aiming for a specific set of features, and giving those ones priority over others, is fine – then it’s a prioritization question. We’ll still take only what’s ready, but we can definitely be more intentional about prioritizing what to work on first so it has the best chance of being ready as soon as possible.
Live Kernel Patching
Miroslav Beneš, SUSE Labs, mbenes@suse.cz

Why live patching?
- **Huge cost of downtime:**
  - Hourly cost >$100K for 95% of enterprises – ITIC
  - $250K-$350K for a day in a worldwide manufacturing firm – TechTarget
- **The goal is clear: reduce planned downtime**

Why live patching? Change management
Common tiers of change management:
1. Incident response – “We are down, actively exploited …”
2. Emergency change – “We could go down, are vulnerable …”
3. Scheduled change – “Time is not critical, we keep safe”

Barcelona Supercomputing Centre: MareNostrum supercomputer
- 50k Sandy Bridge cores
- The most beautiful supercomputer in the world
- Terabytes of data
- Reboot?

5 m telescope with adaptive optics on Mount Palomar
- Avoid atmospheric blurring in Real Time Control
- 3888 segments of a deformable mirror with a latency <250 μs
- Reboot?

SAP HANA: in-memory database and analytics engine
- 4-16 TB of RAM
- All operations done in memory
- Disk used for journalling
- Active-Passive HA; failover measured in seconds
- Reboot?
(HP DL980 w/ 12 TB RAM)

1943: Manhattan project – punchcards
- IBM punchcard automatic calculators were used to crunch the numbers
- A month before the Trinity nuclear device test, the question was: “What will the yield be, how much energy will be released?”
- The calculation would normally take three months to complete – recalculating any batches with errors
- Multiple colored punch cards were introduced to fix errors in calculations while the calculator was running
(Trinity test site, 16 ms after initiation)

2003: Windows HotPatching (Microsoft)
- Windows Server 2003 SP1
- Stops kernel execution for activeness check
- Schedules procedures on all but the current CPU and keeps them busy
- Uses short jumps patched into functions for redirection
- The second redirection (jump) leads to the new function
- Removed in Windows 8 and later versions

2008: Ksplice (MIT, Oracle)
- First to patch the *Linux kernel*
- Stops kernel execution for activeness check
- Restarts and tries again later when active
- Uses jumps patched into functions for redirection
- **Commercially deployed**

2014: kGraft (SUSE)
- *Linux kernel* patching
- *Immediate* patching with *lazy migration*
- Uses jumps patched into functions for redirection
- *Commercially deployed*

2014: kpatch (Red Hat)
- *Linux kernel* patching
- Stops kernel execution for activeness check
- Restarts and tries again later when active
- Uses jumps patched into functions for redirection
- *Commercially deployed – technical preview*

kGraft goals
- **Applying limited scope fixes to the Linux kernel**
  - Security, stability and corruption fixes
- **Require *minimal changes* to the source code**
  - No changes outside the kGraft engine itself
- **Have no runtime *performance impact***
  - Full speed of execution
- **No interruption of applications while patching**
  - Full speed of execution
- **Allow *full review* of patch source code**
  - For accountability and security purposes

Call Redirection: How Does It Work
- Use of the ftrace framework
- gcc -pg is used to generate calls to `__fentry__()` at the beginning of every function
- ftrace replaces each of these calls with a NOP during boot, removing the runtime overhead (when CONFIG_DYNAMIC_FTRACE is set)
- When a tracer registers with ftrace, the NOP is runtime-patched back to a CALL
- kGraft uses a tracer too, but then asks ftrace to change the return address to the new function
- And that's it, the call is redirected
- x86_64 from now on, although s390x, powerpc and arm64 are similar

Call redirection – ftrace: SMP-safe code modification

```c
static int cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", saved_command_line);
	return 0;
}
```

As compiled with -pg, before boot-time patching:

```
<cmdline_proc_show>:
  e8 4b 68 39 00          callq  ffffffff8160d8d0 <__fentry__>
  48 8b 15 7c 3f ef 00    mov    0xef3f7c(%rip),%rdx   # <saved_command_line>
  31 c0                   xor    %eax,%eax
  48 c7 c6 a3 d7 a4 81    mov    $0xffffffff81a4d7a3,%rsi
  e8 e6 1d fb ff          callq  ffffffff81228e80 <seq_printf>
  31 c0                   xor    %eax,%eax
  c3                      retq
  0f 1f 00                nopl   (%rax)
```

During boot, ftrace replaces the `__fentry__` call with a NOP:

```
<cmdline_proc_show>:
  0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
  48 8b 15 7c 3f ef 00    mov    0xef3f7c(%rip),%rdx   # <saved_command_line>
```

When a tracer registers, the NOP is patched back to a CALL to the ftrace handler:

```
<cmdline_proc_show>:
  e8 7b 3f e5 1e          callq  0xffffffffa00cb000    # ftrace handler
  48 8b 15 7c 3f ef 00    mov    0xef3f7c(%rip),%rdx   # <saved_command_line>
```

Call Redirection – ftrace: return address modification mechanism

```c
#include <linux/module.h>
#include <linux/kgraft.h>
#include <linux/seq_file.h>

static int kgr_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", " this is livepatched");
	return 0;
}

static struct kgr_patch patch = {
	.name = "patch",
	.owner = THIS_MODULE,
	.patches = {
		KGR_PATCH(cmdline_proc_show, kgr_cmdline_proc_show),
		KGR_PATCH_END
	}
};

static int __init patch_init(void)
{
	return kgr_patch_kernel(&patch);
}

static void __exit patch_cleanup(void)
{
	kgr_patch_remove(&patch);
}

module_init(patch_init);
module_exit(patch_cleanup);

MODULE_LICENSE("GPL");
```

Patch Generation: Manual Approach
- kGraft offers a way to create patches entirely by hand
- The source of the patch is then a single C file
  - Easy to review, easy to maintain in a VCS like git
- Add the fixed functions
- Create a list of functions to be replaced
- Call kGraft: kgr_patch_kernel()
- Compile
- Insert as a .ko module
- Done

kgr_patch_kernel()
- Two ftrace handlers in kGraft
  - Slow and fast (because of the consistency model)
- State machine
- It boils down to calling register_ftrace_function() and ftrace_set_filter_ip()
- Module functions are dealt with in module notifiers

Call Redirection: The Final Hurdle
- Changing a single function is easy
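The “easy” single-function case can be mimicked in ordinary userspace C with a retargetable function pointer. This is only an analogy, not kGraft's actual mechanism (kGraft rewrites the function's entry via ftrace instead of calling through a pointer); `kernel_func`, `func_entry`, `apply_patch` and `revert_patch` are invented names, while `buggy_func`/`fixed_func` echo the slides.

```c
#include <assert.h>

/* Analogy only: instead of binary-patching the function entry, route all
 * calls through a pointer that a patching "engine" can retarget. */

static int buggy_func(int x) { return x + 1; }  /* old implementation     */
static int fixed_func(int x) { return x + 2; }  /* patched implementation */

/* The "patch site": every caller goes through this pointer. */
static int (*func_entry)(int) = buggy_func;

/* A caller somewhere in the "kernel". */
static int kernel_func(int x) { return func_entry(x); }

/* Flipping the switch redirects all future calls. */
static void apply_patch(void)  { func_entry = fixed_func; }
static void revert_patch(void) { func_entry = buggy_func; }
```

Real binary patching additionally has to solve the SMP-safe code-modification problem shown in the listings above; the pointer indirection sidesteps it, which is exactly why this is only an illustration.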
  - Since ftrace patches at runtime, you just flip the switch
- What if a patch contains multiple functions that depend on each other?
  - The number of arguments changes
  - The types of arguments change
  - The return type changes
  - Or the semantics change
- We need a consistency model
  - Lazy migration enforcing function type safety

kGraft Consistency Model: Keeping Threads Intact
- We want to avoid calling a new function from an old one and vice versa
- Execution threads in the kernel are of four types:
  - Interrupts (initiated by hardware, non-nesting)
  - User threads (enter the kernel through SYSCALL)
  - Kernel threads (infinite sleeping loops in the kernel)
  - Idle tasks (active when there is nothing else to do)
- We want to make sure a thread calls either all old functions or all new ones
- And we can migrate them one by one to 'new' as they enter/exit execution
- No stopping for anybody

(Diagram: kernel_func does heavy work and calls buggy_func(); a reality check – “which universe are you coming from?” – routes each thread to either buggy_func or fixed_func. Welcome to the new universe!)

kGraft Consistency Model
- Per-thread flag: TIF_KGR_IN_PROGRESS
- Mark all tasks in the system at the beginning and wait for them to be migrated to the new universe
- Finalize

```c
static notrace void kgr_stub_slow(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kgr_patch_fun *p = ops->private;
	bool go_new;

	if (in_interrupt())
		go_new = *this_cpu_ptr(kgr_irq_use_new);
	else if (test_bit(0, kgr_immutable)) {
		kgr_mark_task_in_progress(current);
		go_new = false;
	} else {
		rmb(); /* test_bit before kgr_mark_task_in_progress */
		go_new = !kgr_task_in_progress(current);
	}

	if (p->state == KGR_PATCH_REVERT_SLOW)
		go_new = !go_new;

	/* Redirect the function unless we continue with the original one. */
	if (go_new)
		kgr_set_regs_ip(regs, (unsigned long)p->new_fun);
	else if (p->loc_old != p->loc_name)
		kgr_set_regs_ip(regs, p->loc_old);
}

static notrace void kgr_stub_fast(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kgr_patch_fun *p = ops->private;

	kgr_set_regs_ip(regs, (unsigned long)p->new_fun);
}

static inline void kgr_set_regs_ip(struct pt_regs *regs, unsigned long ip)
{
	regs->ip = ip;
}
```

kGraft Consistency Model: Complications
- How about eternal sleepers?
  - Like `getty` on console 10
  - They'll never exit the kernel, so they'll never be migrated to 'new'
  - They'll block completion of the patching process forever
- Wake them up!
  - By sending a *fake signal* (the SIGPENDING flag is set, but there is no signal in the queue)
  - The signal exits the syscall and transparently restarts it
- And kthreads?
  - They cannot ever exit the kernel
  - Annotate them in a safe place and wake them up

Ksplice Consistency Model: Making a Clean Cut
- Ksplice uses *activeness safety*
- First, `stop_kernel();`
  - That stops all CPUs completely, including all applications
- Then check all stacks to see whether any thread is stopped within a patched function
- If yes, resume the kernel and try again later
  - And hope it'll be better next time
- If not, flip the switch on all functions and resume the kernel
- The system may typically be stopped for 10-40 ms

Kernel Live Patching in Linux upstream
- Result of a discussion between Red Hat and SUSE at Linux Plumbers Conference 2014 in Dusseldorf
- Basic infrastructure merged in 2015
  - Neither kGraft, nor kpatch
  - Patch format abstraction and function redirection based on ftrace
- x86_64, s390x and powerpc architectures supported
  - arm and arm64 in development
- **Main missing feature? Consistency model.**
  - Reviewed and present in the linux-next tree
  - Should go into 4.12

```c
#include <linux/module.h>
#include <linux/livepatch.h>
#include <linux/seq_file.h>

static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%s\n", "this has been live patched");
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "cmdline_proc_show",
		.new_func = livepatch_cmdline_proc_show,
	}, { }
};

static struct klp_object objs[] = {
	{
		/* name being NULL means vmlinux */
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int livepatch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;

	ret = klp_enable_patch(&patch);
	if (ret) {
		WARN_ON(klp_unregister_patch(&patch));
		return ret;
	}

	return 0;
}

static void livepatch_exit(void)
{
	WARN_ON(klp_unregister_patch(&patch));
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```

Kernel Live Patching in Linux upstream: the consistency model
- Hybrid of the kGraft and kpatch consistency models
  - Per-thread migration, but with scope limited to the set of patched functions
- Based on stack checking
  - To ensure that a task does not sleep in a to-be-patched function
- Heated discussion when proposed
  - Stacks and their dumps are unreliable
- Josh Poimboeuf then proposed objtool
  - It analyzes every .o file and ensures the validity of its stack metadata (currently frame pointer usage)
  - Plans to add CFI (Call Frame Information, DWARF) validation for C files and CFI generation for assembly files
- The second proposal was sidetracked as well
  - Josh rewrote the kernel stack unwinder
  - Plans are to make it DWARF-aware

```c
static void notrace klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
				       struct ftrace_ops *fops, struct pt_regs *regs)
{
	struct klp_ops *ops;
	struct klp_func *func;
	int patch_state;

	ops = container_of(fops, struct klp_ops, fops);

	rcu_read_lock();

	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
				      stack_node);
	if (WARN_ON_ONCE(!func))
		goto unlock;

	smp_rmb();

	if (unlikely(func->transition)) {
		smp_rmb();

		patch_state = current->patch_state;
		WARN_ON_ONCE(patch_state == KLP_UNDEFINED);

		if (patch_state == KLP_UNPATCHED) {
			func = list_entry_rcu(func->stack_node.next,
					      struct klp_func, stack_node);
			if (&func->stack_node == &ops->func_stack)
				goto unlock;
		}
	}

	klp_arch_set_pc(regs, (unsigned long)func->new_func);
unlock:
	rcu_read_unlock();
}
```

Limitations and missing features
- Inability to deal with data structure changes
  - State format transformation is needed for more complex fixes
  - *Lazy state transformation?* New functions able to work with both the old and new data format; after code *lazy migration* is complete, start transforming data structures on access
  - State also contains exclusive access mechanisms (spinlocks, mutexes); converting those without a deadlock is an unsolved problem
- **Model/consistency verification**
  - Is the change/fix still within the consistency model?
  - Currently done by human reasoning – error prone and time consuming
- **Kernel threads**
  - Stack checking in upstream vs. manual annotation in kGraft
  - Stack checking does not always work, and manual annotation is horrible
  - There is an effort to move kthreads to the kworker API with a well-defined checkpoint
- **schedule() and the switch_to() macro**
  - `schedule()` is a tricky function:
    - It returns with a different stack
    - If the old and new `schedule()` are compiled differently by GCC (different registers are allocated, the stack differs due to local variables, ...) → corruption and a crash upon return
    - Possible solution – ensure the return goes to the proper version of `schedule()` by making the value of the RIP register part of the saved and restored context
    - This happened in 4.9

Patch Generation Pitfalls: Inlining
- Inlining is when a compiler decides that it's worth copying a function as a whole instead of calling it
- kGraft uses a special feature of the GCC compiler to emit a list of functions into which a buggy function has been inlined
  - the `-fdump-ipa-clones` option

Patch Generation Pitfalls: Static symbols
- Static or unexported symbols are common, but they may be used from the patched function
- The kernel keeps a list of all symbols: kallsyms

```c
int patched_fn(void)
{
	kgr_orig_static_fn();
	return 0;
}

static int __init patch_init(void)
{
	kgr_orig_static_fn = (static_fn_proto)kallsyms_lookup_name("static_fn");
	if (!kgr_orig_static_fn) {
		pr_err("kgr: function %s not resolved\n", "static_fn");
		return -ENOENT;
	}
	...
}
```

- A proper solution is to use ELF relocations – work in progress

Patch Generation Pitfalls: IPA-SRA
- A compiler optimization, enabled with -O2 and higher
- Inter-procedural scalar replacement of aggregates
- Gives a significant performance boost, but is also a disaster for patching
  - Can modify a CALL at the end of a function into a JMP if it's the last statement of the function
  - Can transform arguments passed by reference into arguments passed by value if the value is never changed
  - Can create variants of a function with fewer arguments if the removed argument doesn't have any impact on what the function does in a specific case
- Again, GCC and DWARF to the rescue (and more work when creating the patch)

Patch Generation Pitfalls: GCC optimizations
- **Partial inlining**
  - Inlines just a part of a function
- **IPA-CP**
  - Interprocedural constant propagation
  - Optimizes functions if the values passed to them are constants
  - Even several clones of a function are possible if the set of values is limited
- **IPA-PURE-CONST**
  - GCC can eliminate calls to pure/const functions, memory accesses can be removed, etc.
  - What if a function is no longer pure/const with a fix applied?
- **IPA-ICF**
  - Identical code folding for functions and read-only variables
  - Replaces a function with an equivalent one

Patch Generation Pitfalls: Dead Code Elimination
- A change of a global variable and GCC code elimination:

```c
int global;

int foo(void)
{
	return 22;
}

int main(void)
{
	int i;

	global = 1;
	i = foo();
	if (global == 2)
		return 11;
	return 33;
}
```

- This happens generally whenever one function somehow affects another one

Patch Generation Pitfalls: IPA-RA
- Uses caller-saved registers for allocation if those registers are not used by any called function; in that case it is not necessary to save and restore them around calls
- Currently disabled thanks to the `-pg` option, but this can change easily
- Reportedly a dubious gain for the kernel
  - Only a small fraction of functions is affected (~0.5 %)
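The inlining pitfall can be made concrete with a toy userspace model (invented example, not compiler output): one caller reaches the buggy helper through its patched entry point, while another caller had the old helper body inlined at compile time, so redirecting the callee alone leaves it unfixed. This is why a kGraft patch, guided by `-fdump-ipa-clones` output, must also replace every function the buggy code was inlined into.

```c
#include <assert.h>

/* The "bug": the helper should add 2, but the shipped code added 1. */
static int fixed_helper(int x) { return x + 2; }   /* corrected logic */

/* The patch site: the callee's entry has been redirected to the fix. */
static int (*helper)(int) = fixed_helper;

/* This caller goes through the helper's (patched) entry point,
 * so it picks up the corrected behaviour. */
static int caller_outline(int x) { return helper(x); }

/* This caller had the old helper body inlined at compile time; the
 * entry-point redirection never sees this copy, so the bug survives.
 * A real patch must therefore replace this function too. */
static int caller_inlined(int x) { return x + 1; /* stale inlined copy */ }
```

Calling both with the same input shows the divergence: the outlined caller returns the fixed result, the inlined one still returns the buggy result.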
olmocr_science_pdfs
2024-12-06
2024-12-06
1546c9f1c0099e5e3da6669eb3d96efc1cde1183
DynNav: Toward Open and Interoperable Dynamic Navigation Services

Piergiuseppe Bettassa, Politecnico di Torino, piergiuseppe.bettassa@polito.it
Cecilia Corbi, Telecom Italia Lab, ceciliamaria.corbi@telecomitalia.it
Giorgio Ghinamo, Telecom Italia Lab, giorgio.ghinamo@telecomitalia.it
Pietro Leggero, Intelligent App, p.leggero@mytaxi.net
Fulvio Risso, Politecnico di Torino, fulvio.risso@polito.it
Riccardo Sisto, Politecnico di Torino, riccardo.sisto@polito.it
Francesco Vadalà, Telecom Italia Lab, francesco.vadala@telecomitalia.it

Publisher: IEEE Computer Society, Los Alamitos, CA. DOI: 10.1109/MITP.2016.20. This version is available in the institutional repository (11583/2594962, open access, since 2020-12-14). ©2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

(Headline) So far navigation devices, including navigation apps for smartphones, have been proprietary and closed. A new scenario is emerging with the Open Mobile Alliance Dynamic Navigation Enabler, which lets developers create novel navigation services characterized by openness and interoperability across different information providers.

1 Introduction

Navigation Devices (NDs), which are common tools for getting driving assistance, are increasingly integrating local information (e.g., maps, user position) with remote data such as real-time traffic information.
However, the ND of one manufacturer is hardly capable of accessing data provided by a different service provider, as current systems are either proprietary or totally closed. Google Navigation and Apple Maps are well-known examples of proprietary solutions. They define a set of APIs, exploitable by other parties, but they do not guarantee interoperability with other solutions; even more importantly, those APIs may have technical or legal limitations. For instance, Google does not allow the use of its data in applications other than Google Navigation [1], [2]. In addition, Google APIs are subject to change at any time. Other solutions, such as TomTom or Garmin, either rely on the old radio-based broadcast communication channel or exploit a closed architecture (services, protocols), hence preventing users from switching to another provider. Moreover, previous solutions, being monolithic, do not allow the use of alternative components (e.g., better real-time traffic information sources) and offer only limited chances to customize them.

In order to overcome many of these limitations, the Open Mobile Alliance (OMA) recently standardized an open protocol for dynamic road navigation services. The OMA Dynamic Navigation Enabler [3], in short DynNav, introduces a bidirectional communication channel and a modular approach, while reusing existing standards for some specific features. The bidirectional channel allows users to ask for or receive only the information they are interested in, by filtering data based on routes, geographic areas and time [4]. The standardized modular approach allows multiple services to be created on top of the defined components; it enables the same service to be provided by multiple operators (avoiding user lock-in) and to be implemented in multiple flavors.
In this way, new actors can participate in the provisioning of navigation services, with the creation of valuable and/or specialized components, such as route computation algorithms optimized for vertical markets (e.g., goods delivery), precise traffic information, and more. Modularity and openness allow users not only to exploit multiple providers in their solution (e.g., one for route computation, another for traffic information), but also to seamlessly change provider.

Among the possible players, mobile telecom operators may benefit greatly from an open standard for navigation services. First, they are constantly looking for opportunities to provide new services to their customers, and dynamic navigation is an appealing option for many users. Second, mobile operators can obtain traffic information by exploiting their own assets, in particular by (anonymously) tracking the position of their mobile users, hence providing real-time traffic information and re-routing capabilities based on real-time data. In this context, and in particular in the case of smart cities, where multiple sources of real-time data are available (such as traffic, parking lot availability, and more), DynNav-enabled solutions can not only provide navigation services but also become aggregators of multiple information sources, helping users find localized information such as restaurants or attractions, possibly matching the user's preferences and interests.

2 The DynNav solution

OMA, the leading industry forum for developing market-driven, interoperable mobile service enablers, completed the standardization process of DynNav in September 2012. This represents an additional step toward the full support of navigation applications by the OMA standardization framework [3].

2.1 Architecture

Figure 1 presents a possible architecture of a navigation service based on DynNav.
The server is the middle block, while the left block represents a typical ND (e.g., a dedicated device or a smartphone). A DynNav client can also be an application residing on a server (right side in the figure), such as a web-based journey planner.

Figure 1: DynNav Enabler Architectural Diagram

DynNav specifies the interaction between a DynNav client and a DynNav server (the straight lines in Figure 1), while the interactions with the other components (the dashed lines) rely on protocols that are outside the scope of the DynNav specification. Additional elements, not depicted in Figure 1, can be introduced to optimize bandwidth consumption, which is critical in mobile communications. For example, proxy servers can cache frequent responses.

2.2 Protocol

A server that implements DynNav provides the following main functions:

1. Analysis of client-defined trip parameters and proposal of a set of routes, based on real-time and forecast traffic data;
2. Provisioning of real-time and forecast traffic information related to a set of routes (or geographical areas), previously proposed by the DynNav server itself or by the ND;
3. Proposal of new routes when the quality indicators associated with the current route become unacceptable or the user deviates from the current route;
4. Provisioning of complementary information such as Points of Interest (POI) related to either a route or an area;
5. Possibility for the client to subscribe to a set of routes, and to be notified of available information about these routes only when the user is driving along one of them.

It is worth noting that a client might not be allowed to access all the functions listed above: the DynNav service provider can allow/deny access to some functions depending on the user contract or other criteria.
2.3 Data structures

The DynNav specification reuses, whenever possible, data structures already defined in previous standards, with the objective of speeding up implementation and facilitating the integration of additional components/services into the framework. In particular, it exploits the Transport Protocol Experts Group (TPEG) standards for road traffic messages and location, traffic events, road performance (e.g., travelling time) and location entity description [7]. Moreover, it exploits IETF RFC 4776 [8] for civic addresses and the W3C "Points of Interest Core" draft [9] for POI information. The drawback of this approach is that a fully optimized protocol cannot be obtained, as data formats and protocol messages may be more verbose than necessary.

As no public standards are currently available for encoding route information, a novel encoding schema has been proposed: a route is represented by a sequence of segments (i.e., road sections without intersections), each one including segment origin, destination, name, measured or forecast performance parameters (travelling time in regular conditions, delays, expected speed) and segment shape (the sequence of points for its graphical representation on the map).

Figure 2 depicts a portion of the DynNav resource tree (the complete structure is shown in [4]) and a sample of request and response messages. The curly brackets identify parametric parts of the URIs. The `trips` resource contains a collection of trip resources, each one identified by its unique `tripId`. A trip resource includes information about the journey (e.g., source and destination) along with nested resources representing alternative routes to the destination. These resources, identified by their unique `routeId`, include routing information and may also include references to sets of events, grouped by categories. Users can obtain the details by sending a request to the server containing the list of events to be retrieved.
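The segment-based route encoding just described can be illustrated with a small data model. This is only a sketch: the field names below are assumptions derived from the prose, not the normative DynNav or TPEG schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Segment:
    """A road section without intersections (field names are illustrative)."""
    origin: str
    destination: str
    name: str
    regular_travel_time_s: float      # travelling time in regular conditions
    delay_s: float                    # measured or forecast delay
    expected_speed_kmh: float
    shape: List[Tuple[float, float]] = field(default_factory=list)  # points for map drawing

@dataclass
class Route:
    """A route is represented by a sequence of segments."""
    route_id: str
    segments: List[Segment] = field(default_factory=list)

    def total_travel_time_s(self) -> float:
        # door-to-door estimate: regular time plus delays, summed over segments
        return sum(s.regular_travel_time_s + s.delay_s for s in self.segments)

route = Route("route1", [
    Segment("A", "B", "segment AB", 120.0, 30.0, 50.0, [(45.06, 7.66), (45.07, 7.67)]),
    Segment("B", "C", "segment BC", 90.0, 0.0, 30.0),
])
print(route.total_travel_time_s())  # 240.0
```

In a real client, instances like these would be populated by parsing the response for a `trips/{tripId}/routes/{routeId}` resource.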
The selection of interesting events can be done either manually or automatically by the client application. This choice can reduce the number of bytes transmitted over the network, because only interesting events are retrieved, even if it introduces a small delay due to the additional request. If the same event is shared by multiple resources, such as different routes or areas, it is transmitted only once. Events can also be stored in proxy servers, if present, reducing the number of requests to the main server.

2.4 Use case example

A typical application of the DynNav service refers to a ND without path computation capabilities, which may originate the message flow depicted in Figure 3 (a detailed description is available in [4]). In this scenario, the user first specifies the parameters of a trip by sending Message #1, which triggers the creation of a new trip resource in the server, including all the details about the trip as provided by the user. Trip creation also forces the server to calculate a set of routes that satisfy the constraints. Then the client retrieves and selects one of the possible routes proposed by the server with Messages #2 and #3. The route includes links to the occurred events (whose descriptions are stored on the server) and a field that specifies the category (e.g., traffic, weather) of each event. A client application may decide not to retrieve all those events from the server (Message #4), based on user preferences or other criteria (e.g., priorities). Message #5 creates a subscription to the notification service in order to receive real-time traffic information updates and proposals of alternative routes. The DynNav server will use user-provided location data to update the user's status and send notifications about traffic and other events specified in the user's subscription. If the estimated travelling time becomes too high, e.g., because of road congestion, a new route can be suggested.
Message #6 represents a notification from the server; it contains references to related events, which can be retrieved as in Message #4.

2.5 Deployment scenarios

The flexibility of the DynNav standard enables the creation of rich navigation services without locking users in to a specific service provider and/or a fixed set of functions. This is possible because the same service (e.g., route computation) can be provided by different entities, and the basic set of DynNav messages can be combined in (almost) arbitrary ways to create complex applications. For example, DynNav supports both smart NDs, which have the capability to calculate routes and rely on central services only for real-time information (e.g., traffic), and lightweight NDs, which delegate everything but the user interaction (e.g., route display) to central services.

The possibility to exploit services provided by multiple entities also enables the creation of applications targeting very specific vertical markets. For example, a goods shipping company can opt for a lightweight ND for its employees, relying on an in-house service for route computation, with the ability to minimize delivery costs, and on a telecom operator for real-time traffic information. Along the same line, a company can set up only a specialized service that suggests the best points of interest based on the user's preferences or other parameters, relying on the fact that other providers can offer the remaining information required for building a whole navigation service.

3 Validation

We implemented the DynNav standard in a prototype that includes both the client and server portions of the specification. We used those components to carry out an extended set of tests aimed at validating the characteristics of the standard in a real-world environment and giving an insight into the performance achievable by this solution.
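The use-case flow above (Messages #1 through #6) can be mimicked with a toy in-memory client/server sketch. The method names, route identifiers and payloads here are invented for illustration and do not reflect the normative OMA message formats; only the sequence of interactions follows the text.

```python
import itertools

class MockDynNavServer:
    """Toy stand-in for a DynNav server; resource names loosely follow the
    trips/{tripId}/routes/{routeId} tree described in the article."""

    def __init__(self):
        self.trips = {}
        self.subscriptions = {}
        self._ids = itertools.count(1)

    def create_trip(self, source, destination):           # Message #1
        trip_id = f"trip{next(self._ids)}"
        # trip creation forces the server to compute candidate routes
        self.trips[trip_id] = {
            "source": source, "destination": destination,
            "routes": {"r1": {"events": ["e1"]}, "r2": {"events": []}},
        }
        return trip_id

    def get_routes(self, trip_id):                        # Message #2
        return list(self.trips[trip_id]["routes"])

    def select_route(self, trip_id, route_id):            # Message #3
        self.trips[trip_id]["selected"] = route_id

    def get_events(self, trip_id, route_id):              # Message #4
        return self.trips[trip_id]["routes"][route_id]["events"]

    def subscribe(self, trip_id):                         # Message #5
        self.subscriptions[trip_id] = True

    def notify(self, trip_id):                            # Message #6 (server push)
        if self.subscriptions.get(trip_id):
            return {"trip": trip_id, "event_refs": ["e2"]}
        return None

server = MockDynNavServer()
trip = server.create_trip("Torino", "Milano")
routes = server.get_routes(trip)
server.select_route(trip, routes[0])
print(server.get_events(trip, routes[0]))   # ['e1']
server.subscribe(trip)
print(server.notify(trip))                  # {'trip': 'trip1', 'event_refs': ['e2']}
```

Notice that, as in the article, event details are fetched separately (Message #4) rather than inlined in the route, and notifications are delivered only after an explicit subscription.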
In particular, we were interested in checking whether DynNav is suitable for delivering navigation services to real mobile devices. Although our prototype was not engineered to compete with commercially available solutions, we compared DynNav with the widely used Google Directions Web Services [1], which offers similar primitives and exploits a similar data structure.

3.1 Prototype

The DynNav server was developed in Java and installed on a GlassFish application server, while the ND hosting our application was a low-end Android 2.2 smartphone clocked at 768MHz with 512MB of memory. Some specific functions of the server were set up by reusing existing external components: route calculation is delegated to an external web service (http://openrouteserver.org), while traffic data are simulated using a real data source to feed the simulator. The DynNav prototype described here has been shown in an OMA demo session; details (presentation and interview) can be found in [10], [11].

3.2 Tests

We focused on a common worst-case usage pattern in which a lightweight DynNav client asks for the entire information related to a trip, delegating route computation to the server. This implies that the client device is forced to request even the data necessary to display the chosen path on the map.

First, we measured the amount of data generated by the protocol, which, if excessive, could have a negative impact on performance in low-bandwidth environments. The overall results, computed by averaging several trips of different complexities, showed that DynNav messages are on average 40% bigger than Google messages. However, this overhead also includes information about real-time driving times and events (e.g., traffic jams, accidents) associated with the route, which is not reported by Google Directions. In order to limit this overhead, we enabled HTTP compression, which is supported by most HTTP implementations and reduces the total bytes transferred.
In our experiments, we observed a compression rate of about 2.8:1 in route messages, with a 10% processing time increase (due to compression/decompression) in the ND. With compression enabled, DynNav messages are on average 35% bigger than (compressed) Google messages.

In the second test we measured the time elapsed between starting a route request and receiving the associated response. This time needs to be as short as possible, as it represents the waiting time for the ND user. Tests showed that the average latency for obtaining the route is approximately 300ms, excluding the time required by the server to compute the route, calculated over 25 realistic trips of different complexity. While this number is three times the one experienced with Google Directions, the value is still reasonable, as human beings will hardly notice the difference in responsiveness between the two systems. The higher latency is due to the larger size and number of messages (i.e., HTTP GET requests) needed to complete the same operation. This is required in order to enable a greater extent of flexibility in the protocol and support different deployment scenarios with a limited set of primitives, such as NDs with/without route computation capabilities and/or navigation maps.

A third test evaluated the processing overhead of the protocol in the ND, with the final objective of assessing the possibility to execute DynNav services on low-end user terminals. Our measurements showed that the average processing cost for a message transporting route information on the selected ND is about 2.5ms for each km of the route, which is acceptable even for long routes. This confirms that the choice of using rich XML messages, which are known to be more computationally demanding than binary-encoded messages, does not represent an issue on modern user terminals, at least in this use case.
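The benefit of HTTP compression on verbose XML payloads can be reproduced with standard gzip. The route message below is synthetic (real DynNav payloads differ), so the exact ratio will not match the reported 2.8:1, but the trend is the same: repetitive XML markup compresses well.

```python
import gzip

# A synthetic, repetitive XML route fragment standing in for a route message;
# real messages differ, but XML's structural verbosity compresses similarly well.
segments = "".join(
    f"<segment><origin>n{i}</origin><destination>n{i + 1}</destination>"
    f"<travelTime>120</travelTime><delay>0</delay></segment>"
    for i in range(200)
)
message = f"<route>{segments}</route>".encode()

compressed = gzip.compress(message)
ratio = len(message) / len(compressed)
print(f"{len(message)} -> {len(compressed)} bytes, ratio {ratio:.1f}:1")
```

In HTTP, the same effect is obtained transparently via the `Accept-Encoding: gzip` and `Content-Encoding: gzip` headers, at the cost of the compression/decompression time the authors quantify at about 10% on the test ND.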
Table 1 shows how the message size changes when dealing with routes of increasing lengths, with HTTP compression enabled. These data confirm that the DynNav performance figures remain acceptable within the typical route complexity range.

<table>
<thead>
<tr>
<th>Route length [kilometers]</th>
<th>Message size, Google Directions [KB]</th>
<th>Message size, DynNav [KB]</th>
</tr>
</thead>
<tbody>
<tr><td>53.73</td><td>4.29</td><td>5.24</td></tr>
<tr><td>244.84</td><td>7.18</td><td>9.34</td></tr>
<tr><td>600.96</td><td>20.63</td><td>31.48</td></tr>
<tr><td>878.95</td><td>34.07</td><td>45.53</td></tr>
<tr><td>1034.00</td><td>29.40</td><td>39.43</td></tr>
</tbody>
</table>

Table 1: Test results

4 Conclusions

Based on the new scenarios enabled by DynNav and on our experiments, we can conclude that DynNav has the potential to change the way navigation services are conceived, implemented and deployed, by making them more open and interoperable. Improvements are also possible, and OMA is working on an enhanced version. Future work should be directed toward optimizing the definition of resources and adding new resources such as detailed parking information, public transportation, indoor navigation and weather conditions. In fact, the warm acceptance of this standard among different players (ND/smartphone manufacturers, telecom operators) is pushing for further evolutions, particularly with respect to value-added services (e.g., points of interest, support for vertical applications such as logistics). Additional studies, in collaboration with other mobile operators, are currently ongoing to define optimized mechanisms for reducing and compressing the amount of transferred data.
The prototype that has been developed to validate the DynNav solution shows excellent results, even with a non-optimized implementation. Although in some cases its performance is inferior to that of proprietary solutions (e.g., Google Directions), albeit hardly noticeably to final users, our DynNav prototype enables interoperability and greater flexibility thanks to its additional features, such as push-based notification services and customizable real-time information.

5 Biographies

Piergiuseppe Bettassa graduated in Computer Engineering from Politecnico di Torino in 2011 and is enrolled in the PhD program. His research interests are formal methods applied to security protocols and security-aware applications, software engineering and model-driven development. Contact him at piergiuseppe.bettassa@polito.it.

Fulvio Risso, PhD, is an assistant professor in the Control and Computer Engineering Department of Politecnico di Torino, Italy. His current research activities focus on network protocols, traffic analysis, efficient packet processing and software-defined networks. Contact him at fulvio.risso@polito.it.

Riccardo Sisto, PhD in Computer Engineering, has been working at Politecnico di Torino, in the Control and Computer Engineering Department, first as a researcher, then as an associate professor and, since 2004, as a full professor of Computer Engineering. His main research interests are in the areas of formal methods and communication protocol engineering. Contact him at riccardo.sisto@polito.it.

Cecilia Corbi graduated in Mathematics in 1985. With more than 25 years of experience in the telecommunications field, she has been working as a Project Manager for cellular network operators in several areas of expertise in innovation and strategy. She is currently Application and Services Standards manager at Telecom Italia, and serves as Board Vice Chair of the Open Mobile Alliance. Contact her at ceciliamaria.corbi@telecomitalia.it.
Giorgio Ghinamo graduated cum laude in Telecommunication Engineering from Politecnico di Torino in 1995. A researcher at Telecom Italia, his current interests are positioning technologies and related applications. He has been involved in the navigation-related standardization framework in OMA. Contact him at giorgio.ghinamo@telecomitalia.it.

Francesco Vadalà graduated cum laude in Electronic Engineering from the Università di Messina in 1996. A researcher at Telecom Italia, he has been working in different innovation/technical strategy and R&D areas. He analyses emerging technologies and coordinates standardization activities in the service layer area. He is currently serving as Technical Plenary Chairman of the Open Mobile Alliance. Contact him at francesco.vadala@telecomitalia.it.

Pietro Leggero graduated in Computer Engineering from Politecnico di Torino in 2012. A software architect and mobile application developer, specialized in Android, Java, RESTful web services, geolocation, PaaS and NFC technologies, he develops and investigates new and emerging mobile technologies. Head of mobile development at SeekAnGoo. Contact him at p.leggero@mytaxi.net.

6 References
{"Source-Url": "https://iris.polito.it/retrieve/handle/11583/2594962/64686/IT_ITPro-2014-03-0027.R2_Bettassa.pdf", "len_cl100k_base": 4491, "olmocr-version": "0.1.48", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 29834, "total-output-tokens": 5759, "length": "2e12", "weborganizer": {"__label__adult": 0.0018520355224609375, "__label__art_design": 0.0007257461547851562, "__label__crime_law": 0.0013532638549804688, "__label__education_jobs": 0.0011148452758789062, "__label__entertainment": 0.0005068778991699219, "__label__fashion_beauty": 0.0005373954772949219, "__label__finance_business": 0.0010061264038085938, "__label__food_dining": 0.00130462646484375, "__label__games": 0.002574920654296875, "__label__hardware": 0.0232086181640625, "__label__health": 0.0013446807861328125, "__label__history": 0.0021991729736328125, "__label__home_hobbies": 0.00019347667694091797, "__label__industrial": 0.0020923614501953125, "__label__literature": 0.0009694099426269532, "__label__politics": 0.0011816024780273438, "__label__religion": 0.0014238357543945312, "__label__science_tech": 0.3603515625, "__label__social_life": 0.00032591819763183594, "__label__software": 0.057769775390625, "__label__software_dev": 0.398681640625, "__label__sports_fitness": 0.0012760162353515625, "__label__transportation": 0.135986328125, "__label__travel": 0.0020694732666015625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25263, 0.03798]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25263, 0.13967]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25263, 0.89777]], "google_gemma-3-12b-it_contains_pii": [[0, 1133, false], [1133, 1669, null], [1669, 5856, null], [5856, 7695, null], [7695, 10405, null], [10405, 11996, null], [11996, 14118, null], [14118, 17847, null], [17847, 22199, null], [22199, 25188, null], [25188, 25263, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 1133, true], [1133, 1669, null], [1669, 5856, null], [5856, 7695, null], [7695, 10405, null], [10405, 11996, null], [11996, 14118, null], [14118, 17847, null], [17847, 22199, null], [22199, 25188, null], [25188, 25263, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25263, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25263, null]], "pdf_page_numbers": [[0, 1133, 1], [1133, 1669, 2], [1669, 5856, 3], [5856, 7695, 4], [7695, 10405, 5], [10405, 11996, 6], [11996, 14118, 7], [14118, 17847, 8], [17847, 22199, 9], [22199, 25188, 10], [25188, 25263, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25263, 0.07921]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
8cb901a7dfc1c621363dff09125edc17aa5af0f4
today’s topics:
• introduction to robotics: autonomous agents and autonomous robotics
- we will be discussing autonomous mobile robots

what is a robot?
- “a programmable, multifunction manipulator designed to move material, parts, tools or specific devices through variable programmed motions for the performance of various tasks.” [Robot Institute of America]
- “an active, artificial agent whose environment is the physical world” [Russell & Norvig, p773]

what is an agent?
- “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.” [Russell & Norvig, p32]

what is autonomy?
- no remote control!!
- an agent makes decisions on its own, guided by feedback from its sensors; but you write the program that tells the agent how to make its decisions

our definition of a *robot*:
- *robot* = autonomous embodied agent
- has a *body* and a *brain*
- exists in the physical world (rather than the virtual or simulated world)
- is a mechanical device
- contains *sensors* to perceive its own state
- contains *sensors* to perceive its surrounding environment
- possesses *effectors* which perform actions
- has a *controller* which takes input from the sensors, makes *intelligent* decisions about actions to take, and effects those actions by sending commands to motors

a bit of robot history:
- the word *robot* came from the Czech word *robota*, which means *slave*
- used first by playwright Karel Capek, “Rossum’s Universal Robots” (1923)
- human-like automated devices date as far back as ancient Greece
- the modern view of a robot stems from science fiction literature
- foremost author: Isaac Asimov, “I, Robot” (1950)
- the *Three Laws of Robotics*
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

(1) effectors.
- comprises all the mechanisms through which a robot can effect changes on itself or its environment
- *actuator* = the actual mechanism that enables the effector to execute an action; converts software commands into physical motion
- types:
  - arm
  - leg
  - wheel
  - gripper
- categories:
  - *manipulator*
  - *mobile*

(1) mobile robots.
- classified by manner of locomotion:
  - wheeled
  - legged
- stability is important
  - static stability
  - dynamic stability

(1) degrees of freedom.
- number of directions in which robot motion can be controlled
- a free body in space has 6 degrees of freedom:
  - three for position (x, y, z)
  - three for orientation (roll, pitch, yaw)
    - *yaw* refers to the direction in which the body is facing, i.e., its orientation within the xy plane
    - *roll* refers to whether the body is upside-down or not, i.e., its orientation within the yz plane
    - *pitch* refers to whether the body is tilted, i.e., its orientation within the xz plane
- if there is an actuator for every degree of freedom, then all degrees of freedom are controllable ⇒ *holonomic*
- most robots are *non-holonomic*

(1) sensors.
• ⇒ perception – *proprioceptive*: know where your joints/sensors are – *odometry*: know where you are • function: to convert a physical property into an electronic signal which can be interpreted by the robot in a useful way <table> <thead> <tr> <th>property being sensed</th> <th>type of sensor</th> </tr> </thead> <tbody> <tr> <td>contact</td> <td>bump, switch</td> </tr> <tr> <td>distance</td> <td>ultrasound, radar, infra red (IR)</td> </tr> <tr> <td>light level</td> <td>photo cell, camera</td> </tr> <tr> <td>sound level</td> <td>microphone</td> </tr> <tr> <td>smell</td> <td>chemical</td> </tr> <tr> <td>temperature</td> <td>thermal</td> </tr> <tr> <td>inclination</td> <td>gyroscope</td> </tr> <tr> <td>rotation</td> <td>encoder</td> </tr> <tr> <td>pressure</td> <td>pressure gauge</td> </tr> <tr> <td>altitude</td> <td>altimeter</td> </tr> </tbody> </table> more on sensors. - operation - passive: read a property of the environment - active: act on the environment and read the result - noise - internal: from inside the robot - external: from the robot’s environment - calibration: can help eliminate/reduce noise (1) environment. - **accessible vs inaccessible** - robot has access to all necessary information required to make an informed decision about what to do next - **deterministic vs nondeterministic** - any action that a robot undertakes has only one possible outcome. - **episodic vs non-episodic** - the world proceeds as a series of repeated episodes. - **static vs dynamic** - the world changes by itself, not only due to actions effected by the robot - **discrete vs continuous** - sensor readings and actions have a discrete set of values. (1) state. - knowledge about oneself and one's environment - *kinematics* = study of correspondence between actuator mechanisms and resulting motion * motion: - rotary - linear - combines sensing and acting - *did i go as far as i think i went?* - but one's environment is full of information - for an agent, what is relevant? 
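The odometry question above (“did i go as far as i think i went?”) can be made concrete with a short sketch in NQC, the RCX language covered later in these notes. This is only an illustrative sketch under assumed hardware: a rotation sensor on input 1 geared 1:1 to a wheel; `TICKS_PER_REV` and `WHEEL_CIRCUM_MM` are our own names for this example, not NQC built-ins.

```c
// odometry sketch (assumptions: rotation sensor on input 1, 1:1 gearing,
// motors on outputs A and C)
#define TICKS_PER_REV    16   // RCX rotation sensor: 16 ticks per revolution
#define WHEEL_CIRCUM_MM 130   // assumed wheel circumference, in mm

task main()
{
    int mm;
    SetSensor( SENSOR_1, SENSOR_ROTATION );
    ClearSensor( SENSOR_1 );          // reset the tick count to zero
    OnFwd( OUT_A + OUT_C );           // drive forward...
    Wait( 300 );                      // ...for 3 seconds (units: 1/100 s)
    Off( OUT_A + OUT_C );
    // estimated distance = revolutions * circumference;
    // NQC variables are 16-bit, so keep the intermediate product small
    mm = SensorValue( SENSOR_1 ) * WHEEL_CIRCUM_MM / TICKS_PER_REV;
}
```

Wheel slip and sensor noise make this estimate drift from the true distance traveled, which is exactly why calibration and external sensing matter.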
(1) control. - autonomy - problem solving - modeling - knowledge - representation - control architectures - deliberative control - reactive control - hybrid control (1) autonomy. • to be truly autonomous, it is not enough for a system simply to establish direct numerical relations between sensor inputs and effector outputs • a system must be able to accomplish *goals* • a system must be able to *solve problems* • ⇒ need to represent problem space – which contains goals – and intermediate states • there is always a trade-off between *generality* and *efficiency* – more specialized ⇒ more efficient – more generalized ⇒ less efficient (1) problem solving: example. - GPS = General Problem Solver [Newell and Simon 1963] - Means-Ends analysis <table> <thead> <tr> <th>operator</th> <th>preconditions</th> <th>results</th> </tr> </thead> <tbody> <tr> <td>PUSH(obj, loc)</td> <td>at(robot, obj) ∧ large(obj) ∧ clear(obj) ∧ armempty()</td> <td>at(obj, loc) ∧ at(robot, loc)</td> </tr> <tr> <td>CARRY(obj, loc)</td> <td>at(robot, obj) ∧ small(obj)</td> <td>at(obj, loc) ∧ at(robot, loc)</td> </tr> <tr> <td>WALK(loc)</td> <td>none</td> <td>at(robot, loc)</td> </tr> <tr> <td>PICKUP(obj)</td> <td>at(robot, obj)</td> <td>holding(obj)</td> </tr> <tr> <td>PUTDOWN(obj)</td> <td>holding(obj)</td> <td>~holding(obj)</td> </tr> <tr> <td>PLACE(obj1, obj2)</td> <td>at(robot, obj2) ∧ holding(obj1)</td> <td>on(obj1, obj2)</td> </tr> </tbody> </table> (1) modeling the robot’s environment. 
- modeling - the way in which *domain knowledge* is embedded into a control system - information about the environment stored internally: *internal representation* - e.g., maze: robot stores a *map* of the maze “in its head” - knowledge - information in a context - organized so it can be readily applied - understanding, awareness or familiarity acquired through learning or experience - physical structures which have correlations with aspects of the environment and thus have a predictive power for the system (1) memory. - divided into 2 categories according to duration - short term memory (STM) - transitory - used as a buffer to store only recent sensory data - data used by only one behaviour - examples: * avoid-past: avoid recently visited places to encourage exploration of novel areas * wall-memory: store past sensor readings to increase correctness of wall detection - long term memory (LTM) - persistent - metric maps: use absolute measurements and coordinate systems - qualitative maps: use landmarks and their relationships - examples: * Markov models: graph representation which can be augmented with probabilities for each action associated with each sensed state (1) knowledge representation. - must have a relationship to the environment (temporal, spatial) - must enable predictive power (look-ahead), but if inaccurate, it can deceive the system - *explicit*: symbolic, discrete, manipulable - *implicit*: embedded within the system - *symbolic*: connecting the meaning (semantics) of an arbitrary symbol to the real world - difficult because: - sensors provide signals, not symbols - symbols are often defined with other symbols (circular, recursive) - requires interaction with the world, which is noisy - other factors - speed of sensors - response time of effectors (1) components of knowledge representation. 
- **state** - totally vs partially vs un-observable - discrete vs continuous - static vs dynamic - **spatial**: navigable surroundings and their structure; metric or topological maps - **objects**: categories and/or instances of detectable things in the world - **actions**: outcomes of specific actions on the self and the environment - **self/ego**: stored proprioception (sensing internal state), self-limitations, capabilities - **perceptive**: how to sense - **behaviour**: how to act - **intentional**: goals, intended actions, plans - **symbolic**: abstract encoding of state/information (1) types of representations. - maps - *euclidean map* * represents each point in space according to its metric distance to all other points in the space - *topological map* * represents locations and their connections, i.e., how/if they can be reached from one another; but does not contain exact metrics - *cognitive map* * represents behaviours; can store both previous experience and use for action * used by animals that forage and home (animal navigation) * may be simple collections of vectors - graphs - nodes and links - Markov models - associates probabilities with states and actions (1) control architecture. - a control architecture provides a set of principles for organizing a control system - provides structure - provides constraints - refers to software control level, not hardware! - implemented in a programming language - don’t confuse “programming language” with “robot architecture” - architecture guides how programs are structured (1) classes of robot control architectures. - **deliberative** - look-ahead; think, plan, then act - **reactive** - don’t think, don’t look ahead, just react! - **hybrid** - think but still act quickly - **behaviour-based** - distribute thinking over acting (1) deliberative control. 
- classical control architecture (first to be tried)
- first used in AI to reason about actions in non-physical domains (like chess)
- natural to use this in robotics at first
- example: Shakey (1960’s, SRI)
  - state-of-the-art machine vision used to process visual information
  - used a classical planner (STRIPS)
- planner-based architecture:
  1. sensing (S)
  2. planning (P)
  3. acting (A)
- requirements:
  - lots of time to think
  - lots of memory
  - (but the environment changes while the controller thinks)

(1) reactive control.
- operates on a short time scale
- does not look ahead
- based on a tight loop connecting the robot’s sensors with its effectors
- purely reactive controllers do not use any internal representation; they merely react to the current sensory information
- collection of rules that map situations to actions
- simplest form: divide the perceptual world into a set of mutually exclusive situations; recognize which situation we are in and react to it
- (but this is hard to do!)
- example: subsumption architecture (Brooks, 1986)
  - hierarchical, layered model

(1) hybrid control.
- use the best of both worlds (deliberative and reactive)
- combine open-loop and closed-loop execution
- combine different time scales and representations
- typically consists of three layers:
  1. reactive layer
  2. planner (deliberative layer)
  3. integration layer to combine them
- (but this is hard to do!)

(2) LEGO Mindstorms.
- built around the RCX brick, which contains a Hitachi H8/300 microprocessor
- with an IR transceiver
- and 3 input ports, for:
  - light sensor
  - touch sensor
- and 3 output ports, for:
  - motors
  - light bulbs

(2) programming the LEGO Mindstorms.
- you write programs on your computer and *download* them to the RCX using an IR transmitter (“communication tower”)
- Mindstorms comes with RoboLab — graphical programming environment
- but people have built other interfaces, e.g.:
  - Not-Quite C (NQC)
  - BrickOS
  - leJOS

(2) Not-Quite C.
- programming language based on C which runs on the RCX
- first you need to download *firmware* onto the RCX so that it will understand the NQC code which you write
- then you can write programs
- NQC is mostly like C, with some exceptions...
- for download and full documentation:
- a smattering of NQC follows
- basic command-line operation:

```bash
bash# nqc -d <rcx-program-file>
bash# nqc -firmware <firmware-file>
bash# nqc -help
```

- note that the NQC subset presented is for RCX 2.0

(2) NQC: program structure.
- comprised of global variables and code blocks
- variables are all 16-bit integers
- code blocks:
  - tasks
  - inline functions
  - subroutines
- features include:
  - event handling
  - resource allocation mechanism
  - IR communication

(2) NQC: tasks.
- multi-tasking program structure

```c
task <task-name>
{
    // task code goes in here
}
```

- up to 10 tasks
- invoked using `start <task-name>`
- stopped using `stop <task-name>`

(2) NQC: inline functions.
- functions can take arguments but are always void (no return value)

```c
void <function-name> ( <arguments> )
{
    // function code goes in here
}
```

- return statement, just like C
- arguments:

<table>
<thead>
<tr> <th>type</th> <th>meaning</th> <th>description</th> </tr>
</thead>
<tbody>
<tr> <td>int</td> <td>pass by value</td> <td>value can change inside function, but changes won’t be seen by caller</td> </tr>
<tr> <td>int &amp;</td> <td>pass by reference</td> <td>value can change inside function, and changes will be seen by caller; only variables may be passed</td> </tr>
<tr> <td>const int</td> <td>pass by value</td> <td>value cannot be changed inside function; only constants may be passed</td> </tr>
<tr> <td>const int &amp;</td> <td>pass by reference</td> <td>value cannot be changed inside function; value is read each time it is used</td> </tr>
</tbody>
</table>

(2) NQC: subroutines.
- subroutines cannot take any arguments

```c
sub <subroutine-name>
{
    // subroutine code goes in here
}
```

- allow a single copy of code to be shared by multiple callers
- so more efficient than inline functions
- cannot be nested

(2) NQC: variables.
- all are 16-bit signed integers
- scope is either global or local (just like C)
- use as many local variables as possible (for efficiency)
- arrays:
  - declaration just like C
  - cannot pass whole arrays to functions (but can pass individual array elements)
  - cannot use shortcut operators on array elements (++, --, +=, -=, etc)
  - cannot do pointer arithmetic
- hexadecimal notation, e.g.: 0x12f
- special values: true (non-zero) and false (zero)

(2) NQC: operators.
- operators listed in order of precedence

<table>
<thead>
<tr> <th>operator</th> <th>action</th> </tr>
</thead>
<tbody>
<tr> <td><code>abs()</code></td> <td>absolute value</td> </tr>
<tr> <td><code>sign()</code></td> <td>sign of operand</td> </tr>
<tr> <td><code>++</code>, <code>--</code></td> <td>increment, decrement</td> </tr>
<tr> <td><code>-</code>, <code>~</code>, <code>!</code></td> <td>unary minus, bitwise negation, logical negation</td> </tr>
<tr> <td><code>*</code>, <code>/</code>, <code>%</code>, <code>+</code>, <code>-</code></td> <td>multiply, divide, modulo, addition, subtraction</td> </tr>
<tr> <td><code>&lt;&lt;</code>, <code>&gt;&gt;</code></td> <td>left and right shift</td> </tr>
<tr> <td><code>&gt;</code>, <code>&lt;</code>, <code>&gt;=</code>, <code>&lt;=</code>, <code>==</code>, <code>!=</code></td> <td>relational and equivalence operators</td> </tr>
<tr> <td><code>&amp;</code>, <code>^</code>, <code>|</code></td> <td>bitwise AND, XOR, OR</td> </tr>
<tr> <td><code>&amp;&amp;</code>, <code>||</code></td> <td>logical AND, OR</td> </tr>
<tr> <td><code>=</code></td> <td>assignment operator</td> </tr>
<tr> <td><code>+=</code>, <code>-=</code>, <code>*=</code>, <code>/=</code>, <code>%=</code>, <code>&amp;=</code>, <code>|=</code>, <code>^=</code></td> <td>compound assignment operators</td> </tr>
<tr> <td><code>||=</code></td> <td>set variable to absolute value of expression</td> </tr>
<tr> <td><code>+-=</code></td> <td>set variable to sign (-1,+1,0) of expression</td> </tr>
</tbody>
</table>

(2) NQC: preprocessor.
- following directives included:
  - `#include "<filename>"` — file name must be listed in double quotes (not angle brackets)
  - macro definition (`define`, `ifdef`, `ifndef`, `undef`)¹
  - conditional compilation (`if`, `elif`, `else`, `endif`)
- program initialization:
  - special initialization function (`_init`) called automatically (sets all 3 outputs to full power forward, but not turned on)
  - suppress it using: `#pragma noinit`
  - redirect it using: `#pragma init <function-name>`
- reserving global storage locations (there are 32): `#pragma reserve <value>`

¹ macro redefinition not allowed

(2) NQC: branching statements.
- if / else — just like C

```c
if ( <condition> ) <consequence>
if ( <condition> ) <consequence> else <alternative>
```

- switch — just like C

```c
switch ( <expression> )
{
    case <constant-expression1> : <body>
    ...
    case <constant-expressionN> : <body>
    default : <body>
}
```

(2) NQC: looping statements.
- while, do..while, for — just like C

```c
while ( <condition> ) <body>
do <body> while ( <condition> )
for ( <statement0> ; <condition> ; <statement1> ) <body>
```

- also use of break and continue statements, just like C
- repeat loop (not like C):

```c
repeat ( <expression> ) <body>
```

- <expression> is evaluated once, indicating the number of times to perform the body statements
- until loop (not like C):

```c
until ( <condition> );
```

- effectively a while loop with an empty body; program waits until condition is true before proceeding

(2) NQC: resource acquisition.
- `acquire ( <resources> ) <body>`
- `acquire ( <resources> ) <body> catch <handler>`
- resource access control given to task that makes the call
- execution jumps to `catch` handler if access is denied
- note that access can be lost in mid-execution if a task with a higher priority requests the resource; to set a task’s priority, use `SetPriority( <p> )` where `<p>` is between 0..255; note that lower numbers are higher priority
- resource returned to the system when `<body>` is done
- example:

```c
acquire( ACQUIRE_OUT_A )
{
    Wait( 1000 );
}
catch
{
    PlaySound( SOUND_UP );
}
```

(2) NQC: event handling.

```c
monitor ( <events> ) <body>
catch ( <catch-events> ) <handler>
...
catch <handler>
```

- you can configure 16 events, numbered 0..15, and use the EVENT_MASK() macro to identify them

```c
monitor( EVENT_MASK(2) | EVENT_MASK(3) | EVENT_MASK(4) )
{
    Wait( 1000 );
}
catch ( EVENT_MASK(4) )
{
    PlaySound( SOUND_DOWN );  // event 4 happened
}
catch
{
    PlaySound( SOUND_UP );    // event 2 or 3 happened
}
```

(2) NQC: sensors.
- identifiers: SENSOR_1, SENSOR_2, SENSOR_3
- SetSensorType( <sensor>,<type> )
  - sets sensor type
  - <type> is one of: SENSOR_TYPE_NONE, SENSOR_TYPE_TOUCH, SENSOR_TYPE_TEMPERATURE, SENSOR_TYPE_LIGHT or SENSOR_TYPE_ROTATION
- SetSensorMode( <sensor>,<mode> )
  - sets sensor mode
  - <mode> is one of: SENSOR_MODE_RAW, SENSOR_MODE_BOOL, SENSOR_MODE_PERCENT, SENSOR_MODE_CELSIUS or SENSOR_MODE_ROTATION
- SensorValue( <sensor> )
  - reads sensor value

(2) NQC: outputs.
- identifiers: OUT_A, OUT_B, OUT_C
- SetOutput( <outputs>,<mode> )
  - sets output mode
  - <mode> is one of: OUT_OFF, OUT_ON or OUT_FLOAT
- SetDirection( <outputs>,<direction> )
  - sets output direction
  - <direction> is one of: OUT_FWD, OUT_REV or OUT_TOGGLE
- SetPower( <outputs>,<power> )
  - sets output power (speed)
  - <power> is one of: OUT_LOW, OUT_HALF, OUT_FULL or 0..7 (lowest..highest)
- multiple <output> identifiers can be added together
- also: On( <outputs> ), Off( <outputs> ), Fwd( <outputs> ), Rev( <outputs> ), OnFwd( <outputs> ), OnRev( <outputs> ), OnFor( <outputs>,<time> ) (where <time> is in 100ths of a second)

(2) NQC: sound.
- PlaySound( <sound> )
  - plays a sound
  - <sound> is one of: SOUND_CLICK, SOUND_DOUBLE_BEEP, SOUND_DOWN, SOUND_UP, SOUND_LOW_BEEP or SOUND_FAST_UP
- PlayTone( <frequency>, <time> )
  - plays “music”
  - <frequency> is in Hz
  - <time> is in 100ths of a second
  - for example: PlayTone( 440, 100 )

(2) NQC: LCD display.
- SelectDisplay( <mode> )
  - displays sensor values
  - <mode> is one of: DISPLAY_WATCH, DISPLAY_SENSOR_1, DISPLAY_SENSOR_2, DISPLAY_SENSOR_3, DISPLAY_OUT_A, DISPLAY_OUT_B or DISPLAY_OUT_C
- SetUserDisplay( <value>,<precision> )
  - displays user values
  - <value> is the value to display
  - <precision> is the number of places to the right of the decimal point (?!)

cis32-spring2006-sklar-lec6

(2) NQC: IR communication.
- simple communication can send single (one-byte) messages with values between 0..255
- `x = Message()`
  - reads and returns the most recently received message
- `ClearMessage()`
  - clears the message buffer
- `SendMessage( <message> )`
  - sends a message
  - `<message>` is a value between 0..255

(2) NQC: serial IR communication.
- serial communication allows up to 16-byte messages - for example: ```c SetSerialComm( SERIAL_COMM_DEFAULT ); SetSerialPacket( SERIAL_PACKET_DEFAULT ); SetSerialData( 0, 10 ); SetSerialData( 1, 25 ); SendSerial( 0, 2 ); ``` - `SetSerialData( <byte-number>,<value> )` - puts data in one byte of the 16-byte transmit buffer - `<byte-number>` is between 0..15 - `SendSerial( <start-byte>,<number-of-bytes> )` - sends all or part of the transmit buffer - `<start-byte>` is between 0..15 - `<number-of-bytes>` is between 1..16 (2) NQC: timers. - allows setting/getting of timers with 100th of a second resolution (fast mode) or 10th of a second resolution (default) - 4 timers, numbered 0..3 - `ClearTimer( <n> )` - clears specified timer - `<n>` is between 0..3 - `x = Timer( <n> )` - returns the value of specified timer (for default resolution) - `x = FastTimer( <n> )` - returns the value of specified timer (for 100th of a second resolution) - `SetTimer( <n>,<value> )` - sets specified timer - `<value>` can be any constant or expression (2) NQC: counters. - 3 counters, numbered 0..2 - ClearCounter( <n> ) - clears specified counter - IncCounter( <n> ) - increments specified counter - DecCounter( <n> ) - decrements specified counter - x = Counter( <n> ) - gets the value of specified counter \(^2\)note that these overlap with global storage locations so these should be reserved if they are going to be used; see #pragma reserve description (2) NQC: event handling. 
- allows up to 16 events
- `SetEvent( <event>,<source>,<type> )`
  - configures an event
  - `<event>` is between 0..15
  - `<source>` is the source of the event (e.g., SENSOR_1)
  - `<type>` is one of³: `EVENT_TYPE_PRESSED`, `EVENT_TYPE_RELEASED`, `EVENT_TYPE_PULSE` (indicates a toggle), `EVENT_TYPE_EDGE`, `EVENT_TYPE_LOW` (use `SetLowerLimit()` to set threshold), `EVENT_TYPE_HIGH` (use `SetUpperLimit()` to set threshold), `EVENT_TYPE_NORMAL`, `EVENT_TYPE_MESSAGE`
- `ClearEvent( <event> )`
  - clears configuration

³ a subset is shown

(2) NQC: data logging.
- CreateDatalog( <size> )
  - creates a data log for recording sensor readings, variable values and the system watch
  - <size> is the number of points to record; 0 clears the data log
- AddToDatalog( <value> )
  - adds a value to the data log
- UploadDatalog( <start>,<count> )
  - uploads the contents of the data log
- to upload and print the content of the data log to the computer from the command-line:

```bash
bash# nqc -datalog
bash# nqc -datalog_full
```

(2) NQC: miscellaneous functions.
- Wait( <time> )
  - to sleep
  - <time> is a value in 100ths of a second
- SetRandomSeed( <n> ) and x = Random( <n> )
  - sets random number seed and generates/returns random number between 0..<n>
- SelectProgram( <n> )
  - sets the current program
  - <n> is between 0..4
- x = Program()
  - gets currently selected program
- x = BatteryLevel()
  - monitors the battery and returns the battery level in millivolts
- SetWatch( <hours>,<minutes> )
  - sets system clock
  - <hours> is between 0..23
  - <minutes> is between 0..59
- x = Watch()
  - gets value of system clock in minutes
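Putting several of the constructs above together (a task, sensor setup, output commands, sound, Wait), here is a minimal “bump and back up” controller in the spirit of the reactive control architecture discussed earlier. The wiring is an assumption for illustration only: a touch sensor on input 1 and motors on outputs A and C.

```c
// reactive bump-and-turn sketch (assumed wiring: touch sensor on input 1,
// drive motors on outputs A and C)
task main()
{
    SetSensor( SENSOR_1, SENSOR_TOUCH );
    OnFwd( OUT_A + OUT_C );               // drive forward
    while ( true )
    {
        if ( SensorValue( SENSOR_1 ) == 1 )   // bumped into something?
        {
            PlaySound( SOUND_DOWN );
            OnRev( OUT_A + OUT_C );           // back up...
            Wait( 100 );                      // ...for 1 second
            OnFwd( OUT_A );                   // pivot: one side forward,
            Wait( 50 );                       // other side still reversed
            OnFwd( OUT_A + OUT_C );           // resume driving
        }
    }
}
```

This is the simplest form of reactive control from the earlier slides: a tight loop mapping the current sensor reading directly to motor commands, with no internal representation.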
D2S2: Drag ’n’ Drop Mobile App Screen Search

Soumik Mohian (soumik.mohian@mavs.uta.edu), Tony Tang (ttt9489@mavs.uta.edu), Tuan Trinh (tqt5663@mavs.uta.edu), Don Dang (dpd5574@mavs.uta.edu), Christoph Csallner (ccallner@uta.edu); University of Texas at Arlington, Arlington, Texas, USA

ABSTRACT
The lack of diverse UI element representations in publicly available datasets hinders the scalability of sketch-based interactive mobile search. This paper introduces D2S2, a novel approach that addresses this limitation via drag-and-drop mobile screen search, accommodating visual and text-based queries. D2S2 searches 58k Rico screens for relevant UI examples based on UI element attributes, including type, position, shape, and text. In an evaluation with 10 novice software developers, D2S2 successfully retrieves target screens within the top-20 search results in 15/19 attempts within a minute. The tool offers interactive and iterative search, updating its search results each time the user modifies the search query. Interested users can freely access D2S2 (http://pixeltoapp.com/D2S2), build on D2S2 or replicate results via D2S2’s open-source implementation (https://github.com/toni-tang/D2S2), or watch D2S2’s video demonstration (https://youtube.com/fofiw8lAn0).

CCS CONCEPTS
• Human-centered computing → Interface design prototyping; • Software and its engineering → Software prototyping, Search-based software engineering.

KEYWORDS
User interface design, prototyping, information retrieval, design examples, interactive screenshot search

ACM Reference Format:

### 1 INTRODUCTION
Iterative app screen search, while an exciting area of recent work [16, 17, 18], still faces many challenges.
First, Google image search is fast, searches many images from the open web, and supports text-based search queries. But searching for an app screen via text queries remains clumsy, especially when looking for screens that contain certain UI elements in specific locations. Such searches result in long text queries and produce few relevant results. Recent work has searched the widely-used Rico dataset via a combination of text search [18] and sketched element doodles [16, 17]. These approaches support a few UI element types via deep learning. Expanding their scope would require additional specialized training data, which must be collected and curated. When designing mobile applications, studying real-world examples aids in gathering requirements, analyzing current trends, and cultivating motivation to develop a compelling mobile app [9, 10]. Given the broad and rapidly expanding market for mobile apps, an efficient mobile app screen search tool becomes valuable. Designers commonly use drag-and-drop tools (e.g., Figma [2]) to create wireframes. Similarly, software developers utilize drag-and-drop-based visual kits (e.g., the Android Layout Editor [1] or Protopr [3]) for UI development. The popularity of these techniques is growing due to their user-friendly nature, intuitive interfaces, and since they do not require specialized technical expertise [13]. D2S2 offers an interactive solution via drag and drop for mobile screen search. D2S2 is for novice users who want help creating a complete UI design during the early software development stages. Users can search for mobile screens by dragging and dropping UI elements on the canvas. The tool’s search interface includes basic features such as undo and redo. Users can also add plain text and put text in a text-button. As a user adds, removes, resizes, and moves UI elements, D2S2 searches through 58k Rico [7] screens to fetch UI examples based on UI element type, position, element shape, and texts as shown in Figure 1. 
D2S2 fetches the top-20 screens and displays them in its website’s top-pick screen search results section. We recruited 10 software developers without prior UI/UX design training to assess D2S2’s effectiveness. The participants searched for a given target Rico screen with D2S2 until the screen appeared in D2S2’s top-20 search results. In our experiment, D2S2 successfully obtained 15/19 target screens within a minute and 19/19 within four minutes. D2S2 further retrieved more relevant mobile screens than the other closely related competitor, Google image search. In summary, this paper makes the following major contributions. • D2S2 is the first interactive drag-and-drop app screen search tool. After each query change it updates its search results. D2S2 searches 58k Android screens and is freely available (http://pixeltoapp.com/D2S2). In a preliminary user study, D2S2 performed similarly as the deep-learning based TpD (but without requiring training) and better than Google image search. D2S2’s implementation (https://github.com/toni-tang/D2S2) is available under a permissive open-source license. ### 2 BACKGROUND D2S2 searches 58k mobile Android app screenshots from the Rico dataset by Deka et al. Each screenshot has a corresponding DOM-tree container hierarchy, where each UI element is described by its Android class name, x/y coordinates, textual information, and on-screen visibility. Liu et al. expanded on this dataset by collecting 73k screen elements, categorizing them into 25 types of UI components, and further dividing text buttons into 197 and icon into 135 sub-classes. D2S2 incorporates several common Android UI elements identified by Liu et al. Previous studies have explored using sketches and wireframe images to search for relevant mobile screens. However, wireframe-based approaches such as Swire rely on complete wireframe images to identify screens with similar visual characteristics, often not considering UI element type and text within the screen. 
Dependence on an entire wireframe image does not support the iterative nature of the design process. Besides Google Image Search, our closest competitors are PSDoodle and TpD, which offer an interactive and iterative approach to searching mobile screens. PSDoodle employs a deep neural network to identify sketched UI elements and then computes a ranking score for Rico’s screens based on various factors, including UI element type and text within the screen. TpD extends PSDoodle by adding a text-based search that matches a text query with visible text on the mobile screen and UI element descriptions. Notably, TpD allows queries to contain text, UI element sketches, or both.

### 3 OVERVIEW AND DESIGN

To create a search system that is easy to use, we followed a user-centric approach. Using the Figma graphical design tool, we first created a UI prototype (Figure 2), showed the prototype to 11 computer science undergraduate students, and collected their feedback. Incorporating their feedback, we then iteratively enhanced D2S2’s user experience, mostly by refining D2S2’s UI. All user feedback is in D2S2’s repository.

#### 3.1 User Interface & Query Language

In D2S2, a search query consists of a set of UI elements arranged on a canvas that models the screenshot of a mobile app. Starting with an empty canvas, the user interactively refines this canvas, adding and adjusting UI elements as they should appear on the desired app screens. Each time the user modifies this search query, D2S2 retrieves matching app screens that have the query’s UI elements at about the location the user placed them on the canvas. Part of the search matches any text the user added to the search query against screens’ text contents and the descriptions of their UI elements.

Figure 3 shows D2S2’s current UI. Besides moving the app bar to the bottom, the biggest change is allowing users to search D2S2’s library of 52 built-in UI elements by a UI element’s name and various synonyms.
Figure 4 lists these 52 UI elements in the order D2S2’s UI presents them: TpD’s UI elements first, then the remaining UI elements identified by Liu et al., ordered by how commonly they appear in Rico screens [15].

Figure 3: D2S2’s webpage: Searchable UI elements (left), canvas with 3 user-placed UI elements (right), app bar (bottom). Cropped screenshot from http://pixeltoapp.com/D2S2; not shown: D2S2’s top-20 search result screen gallery.

Liu et al. classified the UI elements of Rico’s screens into 25 categories. 11/25 categories are various container types, which D2S2 represents via a single general container. D2S2 directly supports 11/14 of the remaining categories, plus 44/135 icon types. Combining 3/56 of these UI elements with another, visually similar UI element (e.g., slider vs. slider icon) yields D2S2’s 52 UI element types plus text.

Figure 4: A user builds an app screen search query on D2S2’s canvas by dragging and adjusting text or these 52 UI elements.

The user searches (or scrolls) the UI element list, selects a UI element, and drags and drops it on the canvas. The user can then interact with the UI element, e.g., to move or resize it. The app bar at the bottom of the canvas allows undoing and redoing the last element modifications and clearing the screen. The user can add text either via a text-button from the UI element collection or via the app bar’s “TEXT” feature. The latter adds a text element to the canvas that the user can manipulate like any other UI element; clicking on such a text element enables modifying its text content. As in the earlier PSDoodle and TpD, UI elements may be nested, e.g., to support UI elements grouped in a container element.

D2S2 encodes the canvas’s current state as a set of 6-tuples of the form $(x, y, w, h, c, t)$, one tuple per UI element on the canvas.
The tuple lists an element’s top-left corner location in pixel-space, the UI element’s width, height, category, and text content (for text and text buttons). The D2S2 webpage is written in React, as it provides client-side rendering [14] and efficiently manages various events such as drag-start, drag-end, and the undo/redo functionality.

#### 3.2 D2S2’s Back-end

Figure 5 illustrates D2S2’s overall architecture, which consists of D2S2’s webpage front-end and its AWS-hosted back-end. Each time the user modifies the query, D2S2 sends the updated query’s tuple encoding via an HTTP POST request to AWS EC2. For the current query, the D2S2 back-end ranks its 58k Rico screens and sends the IDs of the top-20 ranked screens back to the front-end. The front-end then retrieves a lower-resolution version of the 20 corresponding screen images from D2S2’s AWS S3 bucket and displays them in the top-pick gallery. When the user clicks on a result screen, D2S2 fetches and displays a higher-resolution version of that screen.

Figure 5: D2S2’s architecture: A user drags or adjusts Figure 4 UI elements on the Figure 3 D2S2 front-end webpage (left), which communicates with its AWS-hosted back-end (right).

To rank its 58k screens, D2S2 uses TpD’s infrastructure, which in turn builds on PSDoodle’s. For non-text UI elements, D2S2 uses PSDoodle’s screen scoring scheme (which TpD similarly reused). Specifically, D2S2 divides a mobile app screen into 24 equally sized tiles (6 along the width and 4 along the height) and maintains TpD’s tile configuration. The main change is in more than doubling TpD’s 23 UI element classes to D2S2’s 52. In the back-end, this is straightforward: adding one screen ID index for each of the additional UI element classes allows fast screen lookup.
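The tuple encoding and tile-based screen lookup described above can be sketched as follows. This is a minimal illustration under stated assumptions, not D2S2’s actual implementation: the `Element`/`tiles_covered` names, the category strings, and the 1440×2560 canvas size are ours; only the 6-tuple shape and the 6×4 tile grid come from the paper.

```python
from typing import NamedTuple

# One tuple per UI element on the canvas, as described above:
# top-left corner (x, y), width, height, category, and optional text.
class Element(NamedTuple):
    x: int
    y: int
    w: int
    h: int
    c: str       # UI element category, e.g. "search_icon" (hypothetical name)
    t: str = ""  # text content (for text and text-button elements)

# Assumed canvas size; the 6x4 grid follows the paper
# (6 tiles along the width, 4 along the height = 24 tiles).
CANVAS_W, CANVAS_H = 1440, 2560
COLS, ROWS = 6, 4

def tiles_covered(e: Element) -> set[int]:
    """Return the indices (0..23) of all tiles the element overlaps."""
    x0 = e.x * COLS // CANVAS_W
    x1 = min((e.x + e.w - 1) * COLS // CANVAS_W, COLS - 1)
    y0 = e.y * ROWS // CANVAS_H
    y1 = min((e.y + e.h - 1) * ROWS // CANVAS_H, ROWS - 1)
    return {r * COLS + c for r in range(y0, y1 + 1) for c in range(x0, x1 + 1)}

# A per-category index mapping (category, tile) -> screen IDs enables the
# fast screen lookup mentioned above; here sketched as a plain dictionary.
index: dict[tuple[str, int], set[int]] = {}

def index_screen(screen_id: int, elements: list[Element]) -> None:
    for e in elements:
        for tile in tiles_covered(e):
            index.setdefault((e.c, tile), set()).add(screen_id)

def candidate_screens(query: list[Element]) -> set[int]:
    """Screens with at least one queried element type in a matching tile."""
    hits: set[int] = set()
    for e in query:
        for tile in tiles_covered(e):
            hits |= index.get((e.c, tile), set())
    return hits
```

A ranking step (as in PSDoodle’s scoring scheme) would then score these candidates by how many query elements match and how closely their tiles align; the dictionary index keeps that candidate lookup constant-time per (category, tile) pair.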
For text elements, D2S2 reuses TpD’s pipeline [18], which preprocesses the Rico screens’ text contents and UI element descriptions (removing stop words, identifying names, lemmatizing, adding synonyms via contextual analysis, and tagging text content with its on-screen location). As TpD only supports four different screen areas for text contents (top-left, top-right, bottom-left, and bottom-right), D2S2 first maps the location of a text element to one of these four TpD screen areas. Like TpD, D2S2 then uses ElasticSearch with a Levenshtein edit distance of one, to heuristically also match slightly mis-typed user-provided text to screen contents.

### 4 D2S2 USAGE

To compare D2S2 with its closest competitors, TpD and Google Image Search, we enlisted 10 computer science students who did not have formal UI/UX design training. While the participants differ from TpD’s, we recruited them using the same criteria as the TpD study. To ensure diversity, we selected five individuals with and five without previous mobile app development experience. All participants were early-stage undergraduates aged 20–25. As a token of appreciation, each participant received USD 10 compensation. Specifically, we are interested in the following research questions.

RQ1 How does D2S2 compare with TpD, in terms of total time of the interactive search, final queries’ UI element counts, and final queries’ top-k screen retrieval accuracy?

RQ2 How does D2S2 compare with Google Image Search on a free user query, in producing relevant top-20 search results?

For each participant, we held one video conference of about 30 minutes that started with us explaining D2S2’s objectives. We then demonstrated the search process for an icon: dragging the icon to the canvas, resizing and adjusting the icon’s position on the canvas, the functionality of the undo/redo/clear-screen buttons, and how to add text using the text and text-button features. Each participant accessed D2S2 over the internet via a web browser on their personal machine.
We used D2S2’s standard setup as a website hosted on an Amazon AWS EC2 general-purpose instance (t2.large), featuring two virtual CPUs and 8GB RAM. D2S2’s repository contains all experimental results.

#### 4.1 Similar Screen Search Performance as TpD

For this second part of a participant meeting, we used the 26 randomly selected Rico target screens used in TpD’s evaluation. For each participant, we randomly selected from this pool one target screen per search session. We instructed the participant to create a query that would retrieve the target screen and to refine the query until the target screen appeared in D2S2’s top-20 results.

Table 1: Participants’ search sessions for target screens via D2S2 (left) and free search via D2S2 and Google Images (right); t = search session’s total time; n = final query’s UI elements (including texts); r = target screen’s rank for final query; G/D = top-20 Google/D2S2 results the participant judged relevant.

<table>
<thead>
<tr><th></th><th colspan="3">Target 1</th><th colspan="3">Target 2</th><th colspan="2">Free 1</th><th colspan="2">Free 2</th></tr>
<tr><th></th><th>t[s]</th><th>n</th><th>r</th><th>t[s]</th><th>n</th><th>r</th><th>G</th><th>D</th><th>G</th><th>D</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>40</td><td>4</td><td>2</td><td>27</td><td>3</td><td>1</td><td>2</td><td>18</td><td>18</td><td>0</td></tr>
<tr><td>2</td><td>120</td><td>9</td><td>1</td><td>52</td><td>3</td><td>2</td><td>1</td><td>3</td><td>3</td><td>4</td></tr>
<tr><td>3</td><td>37</td><td>3</td><td>4</td><td>48</td><td>4</td><td>3</td><td>5</td><td>20</td><td>0</td><td>20</td></tr>
<tr><td>4</td><td>50</td><td>3</td><td>10</td><td>23</td><td>2</td><td>4</td><td>8</td><td>16</td><td>10</td><td>20</td></tr>
<tr><td>5</td><td>70</td><td>4</td><td>1</td><td>240</td><td>2</td><td>1</td><td>0</td><td>7</td><td>4</td><td>13</td></tr>
<tr><td>6</td><td>59</td><td>3</td><td>12</td><td>196</td><td>15</td><td>14</td><td>3</td><td>16</td><td>4</td><td>20</td></tr>
<tr><td>7</td><td>63</td><td>3</td><td>18</td><td>51</td><td>3</td><td>2</td><td>7</td><td>11</td><td>1</td><td>3</td></tr>
<tr><td>8</td><td>42</td><td>4</td><td>16</td><td>51</td><td>3</td><td>7</td><td>7</td><td>10</td><td>0</td><td>7</td></tr>
<tr><td>9</td><td>39</td><td>3</td><td>9</td><td>42</td><td>7</td><td>17</td><td>1</td><td>10</td><td>1</td><td>19</td></tr>
<tr><td>10</td><td>50</td><td>4</td><td>8</td><td>-</td><td>-</td><td>-</td><td>2</td><td>5</td><td>-</td><td>-</td></tr>
</tbody>
</table>

Nine participants used D2S2 twice, and one used it once, yielding 19 D2S2 search sessions.
For each such search session, we recorded the total time, the number of UI elements and texts in the participant’s final query, and the target screen’s rank in D2S2’s results for that final query. D2S2’s top-k retrieval accuracy is the share of search sessions in which D2S2 ranks the target screen among its top-k answers to the participant’s final query. We use top-k retrieval accuracy as the metric is widely used to evaluate related work [6, 8, 11] and correlates with user satisfaction [12]. Table 1 summarizes the results.

Comparing with TpD’s results is a little tricky, as TpD’s participants were instructed to search until the target screen appeared in TpD’s top-10 search results or the search exceeded 3 minutes, so TpD participants were encouraged to spend a bit of additional time to refine a query. With this caveat, the overall results for D2S2 and TpD are similar. D2S2’s top-20 screen retrieval accuracy is 100% (19/19) vs. TpD’s 97% (29/30). D2S2’s total search session times ranged from a 23s minimum to a 240s maximum, with a 68s average and 50s median. This compares to a 5s minimum, 156s maximum, 45s average, and 35s median for TpD. Contributing to TpD’s shorter search sessions are TpD’s experimental setup (which allowed participants to practice using TpD for some 10 minutes before collecting results) and D2S2 having more than twice the number of UI elements to choose from for a search query. We observed participants spending significant time browsing the UI elements available in D2S2 and selecting the correct UI element.

#### 4.2 More Targeted Than Google Image Search

In this final meeting part, we instructed each participant to formulate a Google-style search query and perform a corresponding search using both D2S2 and Google image search (an example of a participant’s query is “mobile screen menu icon top left and search icon top right”). We then asked the participant to rate each result in both tools’ top-20 results as relevant or non-relevant to the query.
Participants judged 20% (77/380) of Google image search’s result screens as relevant, versus 58% (222/380) of D2S2’s. D2S2’s 58% relevance here is largely in line with TpD’s 52% reported for searches for a given target screen [18]. Given D2S2’s and TpD’s slightly different experimental setups, it is hard to draw conclusions about their relative performance. For the search scenario over 58k Rico screens, however, both tools clearly perform better than Google Image Search.

### 5 CONCLUSIONS

Current sketch-based iterative mobile screen search has limitations in supporting many UI elements. Drag-and-drop provides a flexible alternative. D2S2 provides an interactive drag-and-drop search that updates its results after each query change. The tool is freely available and has undergone user testing, demonstrating its effectiveness. D2S2 is a promising solution for novice users who require assistance creating a comprehensive UI design in the initial development phases.

ACKNOWLEDGEMENTS

Christoph Csallner has a potential research conflict of interest due to a financial interest with Microsoft and The Trade Desk. A management plan has been created to preserve objectivity in research in accordance with UTA policy. This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1911017.

REFERENCES

Received 2023-05-11; accepted 2023-07-20
Industrial Use of Domain-Specific Modeling: Panel Summary

Juha-Pekka Tolvanen, MetaCase
Niels Brouwers, Altran
Robert Hendriksen, SoLayTec and Sioux
Gökhan Kahraman, ASELSAN A.S.
Jeroen Kouwer, Thales

Abstract

Domain-specific languages and modeling provide a viable solution for continuing to raise the level of abstraction beyond coding, making development faster and easier. This paper summarizes the panel on Industrial Use of Domain-Specific Modeling held at the workshop on Domain-Specific Modeling at SplashCon (Amsterdam, Netherlands, 30th October 2016). Panelists included Niels Brouwers from Altran, Robert Hendriksen from SoLayTec and Sioux, Gökhan Kahraman from ASELSAN A.S. and Jeroen Kouwer from Thales. The panel was moderated by Juha-Pekka Tolvanen from MetaCase.

General Terms: Design, Languages, Verification.

Keywords: industry experience; panel; domain-specific modeling; generators; modeling languages

1. Introduction

Raising the level of abstraction with languages, yet enabling the generation of code and other needed artifacts, has been a successful recipe for productivity and quality improvements for decades. In this panel, experienced industry experts were asked to share their experiences, both good and bad, on applying Domain-Specific Modeling (DSM) in various application areas. The panel discussion offered insight into the nature of DSM language design, implementation, and application, as well as the possibilities of diverse organizational introduction and use. The audience was invited to ask questions and join with their own opinions and experiences. The panelists included the following participants:

• Niels Brouwers, Software Architect, Altran
• Robert Hendriksen, Software Architect at SoLayTec and Sioux
• Gökhan Kahraman, Team Leader, ASELSAN A.S.
• Jeroen Kouwer, Software Engineering Consultant, Thales

The bios of each panelist are provided at the end of the document.

2. Position Statements

Juha-Pekka Tolvanen from MetaCase acted as moderator, introducing the panelists, who then gave their position statements as follows.

Niels Brouwers works at Altran in a competence center of 25 persons creating domain-specific languages (DSLs) and software factories, while further developing Altran’s MDE competence. Language bridges, an MDE continuous integration environment, and portfolios are examples of assets that increase the productivity of developing and industrializing the software factories and integrating them into the client’s engineering process. Altran plays a significant role in the MDE ecosystem, mainly concentrated in the Brainport area in the southern part of the Netherlands. It actively participates in research studies and the industrialization of MDE techniques in industry, and reports on industrial challenges and experiences to research institutes. Although generic modeling tools play a successful and useful role in the MDE ecosystem, Altran believes most gains in productivity, quality, and reduction of accidental complexity are only achievable with domain-specific software factories. These software factories provide a perfect fit to both the practitioner and the process in which they are used. The abstraction of the created DSLs reaches up toward the problem domain to smooth communication with stakeholders and to reduce the engineering gap from problem to solution space. His experiences are from several projects in which modeling languages, highly advanced generators, and related tools are developed and applied in a complex industrial context. In this setting, 8 DSLs, consisting of 50-100 smaller DSLs, have been developed and are used by 20-50 engineers on a daily basis. Together, it is estimated that multi-million lines of code have been generated and integrated into products built for the high-tech industry. Typical DSLs show a productivity gain of at least a factor of 5 (Figure 1).
Figure 1: 8 DSL Software Factories industrialized in the client’s engineering process

Niels sees that the demand for software continues to grow rapidly, but the number of people developing this software does not seem to grow at the same pace. Because it is challenging to develop all of the software for complex machines with a reasonable time to market, there is a real problem in software development. Niels concluded that he has not seen a technique comparable to model-based approaches that provides similar productivity improvements. Therefore, the real question is not “if” MDE/DSM/DSL approaches should be applied, but “how” these techniques can be introduced into an industrial ecosystem.

Robert Hendriksen has applied DSLs at SoLayTec for developing machines for atomic layer deposition (ALD) on solar cells. The use of DSLs started as skunk works, with a tool called oaw, and in a short period of time they started to obtain immediate benefits. Later, oaw was abandoned, but the principles continued to be applied. Robert explained the use of DSLs and generators in this domain in more detail (see figure below). First, a gas expert creates models of the system on the hardware side (e.g., heating, valves and other instruments connected with pipes). These are also used as the source for generators – initially for the HAL (Hardware Abstraction Layer), but then also for various other purposes:

- Simulation is very helpful because often the hardware is not available; by generating simulators, development can be started earlier.
- State modeling can be used for machine description and control code behavior. Also, many stakeholders who are not software engineers can understand state machines and the importance of the states to their domain.
- Models can serve as a user interface for both a real system and a simulator.
- Models also include information from the hardware target, such as a PLC platform from a different company. The hardware interfaces can be generated, too.
- Tests can also be created from models for newly acquired hardware (e.g., new panels from suppliers).

Gökhan Kahraman summarized his group’s experiences in terms of the benefits and challenges as follows:

- Increase in productivity (5-7 times)
- Created models can be maintained easily and quickly
- Decrease in the number of errors via automatic code generation
- Multiple artifacts generated from interface definitions, like communication middleware, test drivers, and ICDs
- Tooling maturity is often a challenge (Eclipse GMF: trouble in using it, especially when it comes to evolving the language; debugging!)
- Lack of support for large-scale projects (theoretically and in tooling); in particular, when using multiple DSLs that need to be integrated

Jeroen Kouwer is a software architect at Thales serving as an engineering consultant. Thales has applied modeling and generators for developing sensors for over 10 years, e.g., a sensor suite on a ship monitoring the environment using radar. After 10 years, they have evaluated whether the effort has been worthwhile, and the answer is clearly that their investment has produced great benefit. Jeroen introduced the TCO (Total Cost of Ownership) approach to modeling (see figure below). In particular, he emphasized that a DSL helps to define the terminology of the domain. He gave a simple example to illustrate the benefits: nobody is expected to speak about a ‘boat’ when the domain concept is a ‘ship’. A prerequisite for successful modeling is a (rich) domain-specific platform. Having this domain data in a model is very useful because it can be used for various generation purposes. Thales is using Capella, a system modeling language, and an internal framework. The modeling is done with UML and stereotypes on top of this platform. Jeroen told the workshop attendees that they do not have measurements based on calculated gains and investments, but they know that the second sensor they built took only a few weeks, whereas the first effort took a few months.
This gives a clear indication that productivity improvements are in place with their TCO approach.

3. Audience questions and discussion

Because all the panelists were clearly in favor of modeling and the use of generators, the audience asked if someone could play the role of “devil’s advocate.” This led the audience to ask how the panelists deal with the challenges they have faced. The following topics were addressed by the panel.

3.1 How to deal with legacy code, and how to introduce DSLs when moving to modeling?

Robert said that if there is an existing system, its evolution can be handled by module revolution: take the system’s changes apart and redo them in a model-driven way, an approach he called “system evolution by module revolution”. Gökhan agreed with Robert and mentioned that their DSLs are also based on existing code, but they re-generate code based on the DSL. He has observed that a legacy system is not an obstacle to using DSLs; he believes that DSLs bring a great opportunity to legacy system evolution. Jeroen stated that if a legacy system can be included as a reusable asset, then maintenance is easier. A challenge observed at Thales is that the interfaces to legacy components and the code generator need to be consistent (i.e., code is generated according to the interfaces). He sees that, before the language can be implemented, recurring patterns first need to be identified and used as candidates for language constructs. Since the legacy system had clear interfaces and components implementing those interfaces, it became natural to use these concepts in their system modeling language as well.

There was also a question about general guidelines on identifying architectural concepts and whether there is known literature on the topic. A suggested book was Domain-Specific Application Frameworks, edited by Mohamed E. Fayad and Ralph E. Johnson.
3.2 If technology changes at a fast pace, how can we manage changes to the modeling solution?

Jeroen said that he and his colleagues have found management support important for handling such evolution in the past. They are now introducing a new framework that requires a new language and expect management support for that too. This time, the new language will be used by current engineers and will also rely on existing toolsets. Niels mentioned that they handle the evolution of languages and generators for their customers. When the models do not match a new metamodel, they create migrators as needed; if needed, Altran also ships the migrators with the developed modeling solutions to their customers. Co-evolution of the metamodel with other artefacts, such as the concrete syntax, code generators, and validation rules, is still a problem that currently negatively impacts the productivity of the toolsmiths. Gökhan shared his experience that minor changes are manageable, but larger changes are still challenging; in particular in their case, where the domain changes several times a year (~4/year). Tools could help ease the challenge and make the process easier. When asked whether model changes are done automatically or manually, he stated that changes are done manually.

3.3 Why are certain industries, like those represented on the panel, ready to use modeling, but others not?

Robert voiced his opinion that one reason for his group is that the language they are using fits the domain they are working with well. He also sees their language as easy for domain experts to adopt. Perhaps other fields do not yet have such suitable languages. It was also discussed among the panelists that perhaps in some areas it is just hard to identify a good language, or that there is no support from the company side to try new things. The panelists and the participants (30) of the workshop were all software developers.
It was observed that if technology experts do not know the problem domain, it may be challenging to identify appropriate language constructs.

3.4 How to introduce DSM/DSL/MDD?

The discussion ventured into a reflection on how to best introduce DSLs and modeling into the culture of a development organization. A comment from the audience was that there is a simple solution: budget. “You make the budget for the project so small that no other technology than model-based development with domain-specific languages can solve the issues.” This somewhat strong proposal seemed to receive general acceptance among the workshop attendees. Niels added that the introduction of DSLs will be successful only if both architects/engineers and management are convinced of the benefits and return on investment. From a technical perspective, it is helpful to be able to demonstrate how DSL/MDE techniques work using concrete examples (e.g., application of general software techniques instead of magic, how it increases quality, supports the architecture, and improves communication with stakeholders). Management can be convinced by productivity gains, platform independence (risk), and the adoption of the techniques at peer companies. Secondly, depending on the company, the strategy used to introduce DSL/MDE techniques might be chosen differently. In one company, it may be beneficial to start bottom-up, i.e., implement a horizontal DSL within a single project and gradually increase the level of abstraction and/or expand to multiple projects. Alternatively, one can work top-down, where the MDE solution is defined at the correct level of abstraction (up to the problem area) and gradually implemented to support all aspects. Niels has seen both strategies work successfully.

3.5 Role of “reverse” engineering / creating or updating models

In addition to code generation, the audience asked about other reasons for using models.
A question was asked: “Is there work being done on updating the models based on external sources, like asking computers to do part of the work, rather than humans?” Robert emphasized the use of models to visualize the system to show errors during design or execution. Niels also said that, in theory, they could use models to examine the external behavior of legacy components, if needed. Applying models in debugging mode, as presented in the workshop, was also mentioned by the audience as a benefit, as well as incorporating test results or simulation results into the models or into the artefacts generated from the models. It was also emphasized that models can be used just to visualize things for customers, so that validation (are we solving the right problem?) is done together with the customer.

3.6 How to sustain the momentum and move to the next domain within the company?

Although the panelists testified to several cases of success (5x productivity or improved quality), it is sometimes hard to “sell” the idea of DSLs again to a new project in the company. A key question to the panelists was how they created several DSLs across different projects, and how they then managed to move to the next DSL creation project. Jeroen mentioned that they already have a software architecture language in place and are now leveraging it in projects in France and in the Netherlands. Once management sees the money in the benefits gained, re-applying modeling becomes easier. In Thales’s case, they could show improvements with data: development time was reduced from 3 weeks to 3 days. Those kinds of numbers make a success, and 50% of all their code is now generated. Robert said that they have had similar experiences – albeit not having used DSLs for as long yet. Gökhan said that when presenting the modeling and code generation idea, he collected data from completed projects and presented this data to management.
This then helped to make the change for the next project.

3.7 What are the current challenges?

When asked about the challenges, Niels summarized the main issues as follows:

- How to quantify the gains of adopting MDE in the engineering process? Especially when working with a new customer, it is hard to estimate the gains in terms of productivity and quality.
- How to remove resistance by software engineers that prevents adoption of MDE? While some people fear being replaced by “code generators” (he does not see this as a valid concern), this issue could be addressed better.
- How to reduce the complexity of developing the software factories?

Gökhan mentioned the following challenges met at ASELSAN:

- How compatible are new DSLs when integrated into the software development lifecycle of large-scale and distributed systems?
- How can we provide language interoperability whereby DSLs and GPLs can co-exist and work together, such as when multiple DSLs and GPLs capture different system aspects in a large system?
- How can we obtain high-quality languages, which may be a key toward obtaining high-quality software?
- How do we address language evolution concerns when the DSL specification changes? DSLs evolve as the concepts in a domain evolve. This is a relevant challenge in EMF/GMF within Eclipse.
- Poor tooling (lack of user-friendly tooling, insufficient debugging tools) remains a deep concern.

Jeroen raised the essential question of whether we are modeling the right kinds of things. Thales does not create only system architectures, but sensors and sensor systems as well as their integration, yet they model system and software architectures. After many years of modeling, he wonders whether they are modeling the right thing. Should they model sensors and sensor systems rather than architectures?

4. Closing Remarks

Jeroen restated the key question he raised earlier: Are we modeling the right thing? Is the language operating at the right level of abstraction?
Gökhan emphasized the challenge of creating good-quality languages and keeping up their quality during the maintenance phase. Robert pointed to the future and mentioned integrating models with AI, and the use of AI guidance for building generators. He does not believe that AI provides all of the capabilities needed for implementing generators, but AI can offer new capabilities over past approaches. Niels wants to see more use of modeling with DSLs because he does not envision any other technology that is able to provide similar results.

5. About the Panelists

Niels Brouwers is a software architect at Altran, specialized in the field of model-driven engineering. His passion for model-driven engineering originated in 2007 and was further pursued by joining Altran in 2011, a global leader in innovation and high-tech engineering consulting that strongly believes in model-driven engineering. For more than 5 years, he has led multiple teams in the development of DSLs and advanced code generators that are applied in a large industrial software organization.

Robert Hendriksen joined Sioux, based in Eindhoven, the Netherlands, in 2006. He has been involved in various projects for Sioux, but at the moment he works full-time as software architect for SoLayTec’s products. Any time left after his core responsibilities is devoted to the construction and application of DSLs.

Gökhan Kahraman received the M.Sc. and Ph.D. degrees in Electrical and Electronics Engineering from Hacettepe University and Middle East Technical University (METU), Ankara, Turkey, respectively. He is currently working as a senior expert software engineer at ASELSAN A.S. in Turkey. He has over 10 years of experience in embedded software development using model-driven development and DSM approaches, taking on developer, architect and team leader roles in large-scale and complex system projects. He is the team leader of the DSL development team in the ASELSAN-REHS group.
His team has designed and implemented many DSLs that are used in several projects. These DSLs continue to evolve and are maintained. His Ph.D. work focused on the assessment of DSLs, and his current research interests include the quality of DSLs and cyber-physical systems.

Jeroen Kouwer started his career in 1998 at Thales, then worked for various companies before rejoining Thales at the end of 2006. Upon returning to Thales, he started work on a software service framework and the modeling methodology on top of this framework. Since then he has applied his modeling and software skills in various projects. He has experience with C, Java, modeling, meta-modeling and DSL development. He has a strong focus on enhancing and simplifying the development and testing of software.

Juha-Pekka is the CEO of MetaCase, a company providing the MetaEdit+ tool for DSM. Juha-Pekka has worked with model-driven development and tools, notably metamodeling and domain-specific languages and models, since 1991. He has acted as a worldwide consultant for modeling language development, authored a book on Domain-Specific Modeling, and written over 70 articles for various software development magazines and conferences. Juha-Pekka holds a Ph.D. in computer science from the University of Jyväskylä, Finland.
GRNSPG Internship
Introduction to Software Engineering and Digital Systems Reliability
Pisa, 15 June 2009

Testing, Verification and Validation of I&C Systems
Andrea Domenici
DIIEIT, Università di Pisa
Andrea.Domenici@iet.unipi.it

The lifecycle of a product is the set of activities affecting it from conception to retirement. The lifecycle of software products is similar to the general lifecycle, but there are important differences.

Software lifecycle

*Coding* is actually the final phase of design. Software maintenance is actually redesign/recoding to correct or improve deployed software components. In this seminar we ignore issues about software decommissioning.

A software development process maps activities (i.e., things to be done) into phases (i.e., periods wherein activities are carried out). Each phase has well-defined milestones (important events at planned dates) and produces some deliverables (documents or code). The waterfall family of processes maps each activity to a distinct phase. Other families of processes, such as the *iterative* processes, have more complex mappings from activities to phases. The waterfall processes, however, are usually preferred (or mandatory) for software with safety requirements.

The IAEA TRS 282 waterfall model

The Gullfoss waterfall model (brunnur.stjr.is/embassy/strasb.nsf/pages/index.html)

Requirements

- A requirement is a statement about a relevant aspect of a system that must be developed: a service it must deliver, a property or constraint it must satisfy, and so on.
- Some requirements may concern the development process itself, rather than the system; for example, it may be a requirement that the system is developed following certain standard procedures.

Functional and non-functional requirements

- *Functional* requirements describe the services offered by the system in terms of relationships between inputs and outputs.
- A system that satisfies its functional requirements is said to be *correct*.
- *Non-functional* requirements describe properties of or constraints on the system or process.
- Some non-functional requirements are *reliability, robustness, safety, and performance*.

Constraints

- *Constraints* are non-functional requirements that impose limitations on design choices.
- *Time constraints* are particularly important in control systems.
- *Synchronization* constraints impose some *ordering* among events, without specifying bounds on time intervals between events.
  - E.g.: “Valve B must not be opened before valve A”.
- *Real-time* constraints impose *bounds* on time intervals between events.
  - E.g.: “Valve B must be opened between 10 and 20 seconds after valve A”.

Note that a real-time system is not necessarily a fast system. *Predictability* is more important than speed.

Analysis and specification

- Requirements *analysis* is the process of understanding what is required of a system.
- Requirements analysis involves studying the application domain, so that software developers may understand the relationship of the software with its environment.
- A requirements *specification* is the result of the requirements analysis process, i.e., a document that gives a precise and complete description of the requirements.
- The analysis process results in the construction of a *model*, i.e., an abstraction of the relevant aspects of the system and its environment.
- Using a specific analysis method helps make this model and its underlying assumptions more *explicit* and *verifiable*.

Traceability

Requirements should be traceable to design; design should be traceable to code; requirements, design and code should be traceable to tests. Traceability should be maintained when changes are made. There should be traceability in the reverse direction, to ensure that no unintended functions have been created.
IAEA Safety Guide *Software for Computer Based Systems Important to Safety in Nuclear Power Plants*, NS-G-1.1

Testability

Each requirement and each design feature should be expressed in such a manner that a test can be done to determine whether that feature has been implemented correctly. Both functional and non-functional requirements should be testable. Test results should be traceable back to the associated requirements.

IAEA Safety Guide *Software for Computer Based Systems Important to Safety in Nuclear Power Plants*, NS-G-1.1

The analysis phase produces an analysis model. The analysis model is defined by the Software Requirements Specification document. Other deliverables include the Test Plan and User Manual(s).

DESIGN AND REALIZATION

Architectures and modules

- In the design phase developers find technical solutions to build a system that satisfies the requirements.
- The design phase produces a software architecture.
- An architecture is a model of the internal structure of the software.
- A system architecture consists of the set of the system’s components and their reciprocal connections and relationships.
- The architecture is hierarchical: the major components (subsystems) are composed of smaller components (modules) and so on, recursively.
- The smallest-scale modules are called unit modules.

An architectural model

A *hypothetical* architecture, using the UML notation for subsystems. **WARNING:** this is *not* a real system!

Coding

- In the *unit coding and testing* phase the modules specified in the design phase are implemented in some programming language.
- Programming involves translating the structuring concepts used in the design, such as interfaces and relationships of various kinds, into the concepts made available by the chosen programming language.
- Programming also involves filling in several details left unspecified in the design phase.
Development tools

- **Basic** tools support programming and SW building: *text editors, compilers, debuggers, linkers*, tools for automatic compilation and configuration...
- **Integrated development environments** co-ordinate basic tools through a user-friendly interface, and may offer more advanced capabilities.
- **Computer-assisted software engineering (CASE)** tools enable designers to create analysis and design models in some formal or semiformal language (we’ll see them shortly).
- Tools may also support various forms of verification, and documentation.

Automatic code generation

- CASE tools can generate code for a usually partial implementation of the system.
- *If* the model is correct *and* the code generation mechanism is correct, then we are reasonably sure that the code is correct.

Software integration is the process of assembling the whole software system from its components. This process is carried out incrementally:
- a first module is linked to a test harness that simulates the rest of the system,
- this simulated system (harness plus the integrated module) is tested,
- then another module is integrated and tested, and so on.

The process of testing each module as it is added to the system under integration is called integration testing. Hardware and software integration is the installation of the software on the computer(s) where it must run (which may be different from the development computer). The integrated computer system tests are then performed.

A common definition:
- **Verification** checks an implementation against its specification.
  - Are we **making the product right**?
- **Validation** checks a product (final or intermediate) against its intended requirements.
  - Are we **making the right product**?

With these definitions, deliverables at any stage of the process may be validated.
Verification and validation (2)

Another definition, as in IAEA TRS 384:
- **Verification**: The process of determining whether or not the product of each phase of the digital computer system process fulfils all the requirements imposed by the previous phase.
- **Validation**: The testing and evaluation of the integrated computer system (hardware and software) to ensure compliance with the functional, performance and interface requirements.

With these definitions, only the final product is validated. Verification applies to the intermediate deliverables.

The V-model (as in IAEA TRS 384).

Wovon man nicht sprechen kann, darüber muß man schweigen. (“Whereof one cannot speak, thereof one must be silent.”)

Modeling and programming languages

- *Modeling* languages enable developers to define analysis and design models.
- *Programming* languages are the material out of which software is built.
- Programming languages can be seen as modeling languages with a very fine-grained level of detail.

Modeling languages

- **Informal**: natural language, or loosely defined graphical notations.
- **Formal**: textual or graphical languages with well-defined syntax and semantics.
- **Semiformal**: graphical languages with well-defined syntax but loosely defined semantics.

A formal language identifies some basic attributes that are simple and general enough to describe a large class of systems in an abstract way.
- E.g., the behavior of many systems can be described in terms of sets of states and sequences of actions.

The possible values of these attributes form the domain of the language (just like numbers form the domain of algebra). The language defines operations that act on the elements of the domain, such as forming sets and sequences, and combining them in various ways.
- E.g., we may define operations for parallel and sequential composition to describe the interaction of two processes.
We can then describe systems with formulas whose meaning can be understood in terms of mathematical concepts, such as sets and functions.

Formal languages (2)

- **Finite State Automata** (FSA). A large class of languages and a fundamental modeling paradigm, based on *states*, *transitions*, *inputs*, and *outputs* (and many extensions). Also called *(Finite) State Machines*.
- **Petri Nets**. More abstract, based on *places*, *transitions*, and *markings*. They can model the interaction of components of a complex system (whereas FSAs model a system as a whole).
- **Predicate logics**. Based on predicate logic and set theory, very general applicability.
- **Temporal logics**. Used to specify properties related to synchronization.
- **Process algebras**. A large class of languages that describe concurrent processes by means of operators on elementary actions. Often used in conjunction with temporal logics.
- ...

Example: LOTOS (1)

Subsystem \( S_1 \) is composed of processes \( P \) and \( Q \). Subsystem \( S_2 \) is composed of processes \( R \) and \( S \). \( P \) can execute actions of type \( a \) and \( b \), \( Q \) can execute actions of type \( c \) and \( d \). \( R \) can execute actions of type \( a \), \( S \) can execute actions of type \( c \). \( P \) and \( Q \) execute their actions independently of each other: an unconstrained parallel composition. Similarly for \( R \) and \( S \). Subsystems \( S_1 \) and \( S_2 \) must execute “simultaneously” actions of type \( a \) or \( c \): a synchronized parallel composition. The next slide shows this specification as a diagram and as a LOTOS expression.

Example: LOTOS (2)

\[(P[a, b] \parallel Q[c, d]) \;|[a, c]|\; (R[a] \parallel S[c])\]

Within the theory underlying the LOTOS language, the previous expression can be transformed into an equivalent one, where the processes are rearranged. The resulting system has the same behavior as the initial one, but a simpler structure.
The next slide shows the transformed system.

Example: LOTOS (4)

\[(P[a, b] \;|[a]|\; R[a]) \parallel (Q[c, d] \;|[c]|\; S[c])\]

A few modeling languages

- **Z**. Based on predicate logic and Zermelo–Fraenkel set theory.
- **Vienna Development Method (VDM)**. Well-known predicate logic formalism.
- **Calculus of Communicating Systems (CCS)**. A process algebra.
- **Communicating Sequential Processes (CSP)**. Another process algebra.
- **Language of Temporal Ordering Specification (LOTOS)**. Yet another process algebra.
- **SDL** (Specification and Description Language). Based on an extension of FSAs, used for telecommunication systems, process control and real-time systems.
- **UML** (Unified Modeling Language). Probably the most popular language to date. Very large, general-purpose, composed of several sub-languages. Semiformal, with formal parts.

Programming languages

- **Imperative**: The programmer describes the algorithm, i.e., the sequence of steps, that the computer must execute to produce the solution.
  - *Procedure-oriented*: The elementary module is the procedure (or function, subroutine), a small subprogram that can be combined with other subprograms. Some data are private to each procedure, other data are shared among procedures.
  - *Object-oriented*: The elementary module is the class, a template that defines a set of related subprograms (operations, or methods) and the data they act upon. The instances of a class are called objects, and the data contained in an object are private to the object. A program is composed of objects that interact by calling each other’s methods.
- **Declarative**: The programmer describes the conditions that the solution must satisfy. The language interpreter finds the solution with a built-in general-purpose problem-solving algorithm.

A few programming languages

- **Microcode.** Extremely low-level, defines the behavior of single HW components, such as CPUs and network adapters.
- **Assembler.** Very low-level, specific to each processor type. Used to program the machine-dependent parts of the operating system.
- **C.** Wide-spectrum (from low- to high-level), procedure-oriented, used both for system and application programming.
- **Fortran.** High-level, procedure-oriented, a mainstay of scientific and engineering computation.
- **Ada.** High-level, object-oriented, conceived to support rigorous development processes, sometimes mandatory for SW with safety constraints.
- **C++.** Wide-spectrum, object-oriented, currently one of the most used languages in industrial applications.
- **Java.** High-level, object-oriented, used in a very wide range of applications, particularly web applications and graphical interfaces.
- **TTCN-3** (Testing and Test Control Notation). Used mainly in the communications area to program the test harness.

Augusta Ada King, Countess of Lovelace, born Byron.

Goals of SW design

Designers strive to maximize several properties of the design model:
- **Correctness**: The model satisfies the specification.
- **Comprehensibility**: Easy to understand (obvious, but hard to achieve).
- **Verifiability**: Easy to verify.
- **Modifiability**: Easy to change (“Design for change!”).
- **Reusability**: Some or all the components can be reused in other applications.
  - Relevant to safety: old, proven components may be more reliable.
- ...
- **Modularity**: The key to all the above properties. A system is modular if each of its components has a clearly understood and defined boundary (interface), is responsible for a well-defined task and depends on other components in a simple way.

Logical and physical architecture

- The *logical architecture* models the system as a set of abstractions (modules) that have structure and behavior, with their reciprocal relationships.
- The *physical architecture* describes the system as a set of software *artifacts* and hardware components.
The software artifacts are the files (such as executables and libraries) containing the actually executed code, or the source code, or auxiliary data.

Layers of a SW system

- **Execution environment**: general-purpose operating system (e.g., Linux), real-time system, micro-kernel...
- **Libraries and frameworks**: e.g., for numerical computation, input/output, graphical user interfaces...
  - A library is a collection of independent modules that are assembled without modification to build a complex system.
  - A framework is a collection of inter-related and configurable modules that are specialized and assembled to build a complex system. Frameworks provide ready-made solution schemas for common problems.
- **Application-specific components**: They implement the application-specific logic.

Execution environments and libraries are often provided by third-party suppliers, i.e., they are *commercial off-the-shelf software* (COTS) or, more generally, *pre-existing software* (PSW).

Modular design

- **Interface**: Specification of the services (including operations, data, and types) offered by a module.
- **Implementation**: The internal elements of a module, whose structure and behavior satisfy the interface specification.
- **Hiding**: Making the internal elements of a module invisible to elements outside that module.

Information hiding is the basis of modular design, as it avoids unnecessary inter-module dependencies. Object-oriented languages allow each internal element of a module to be declared *public* (externally visible) or *private* (hidden).

The following is based on IAEA TRS 384:
- **Walk-through**: A document (such as code, design, etc.) is presented to a group including developers and people possibly not involved in development. The document is evaluated and criticized.
- **Inspection**: A document or set of documents is read by a group of people who check the document(s) for expected defects of some class (e.g., typical programming mistakes, common design flaws...). Checklists are commonly used.
- **Formalized descriptions**: Writing a specification (or design) document using a formal language. This is a way to validate an informal requirements statement, and a pre-requisite for formal verification.

Static analysis in V&V (2)

- **Program proving**: Assertions (statements about relationships among variables) are associated with the beginning (pre-conditions) and the end (post-conditions) of a program segment (such as a procedure), and if the program is correct, the post-conditions can be proved by logical arguments to be a consequence of the pre-conditions.
- **Symbolic execution**: The input variables are assigned symbolic values (say, $x$) instead of numeric ones. A symbolic interpreter applies the program statements to the input variables and computes the output values as symbolic (i.e., algebraic or logical) expressions that can be checked for compliance with the program specification.
- **Automatic analysis**: Automatic tools can check a program for indicators of possible anomalies, such as unreachable statements or usage of non-initialized variables.

(a) This technique is not mentioned in IAEA TRS 384.

Example: Program proving

    for (i = 0; i < M; i++) {                      // 0 ≤ i < M
        for (j = 0; j < M-i; j++) {                // 0 ≤ j < M − i
            if (v[j] > v[j+1]) {
                t = v[j]; v[j] = v[j+1]; v[j+1] = t;
            }                                      // v_j ≤ v_{j+1}                            (i)
        }                                          // ∀k(M − i − 1 ≤ k < M ⇒ v_k ≤ v_{k+1})    (ii)
    }                                              // ∀k(0 ≤ k < M ⇒ v_k ≤ v_{k+1})            (iii)

This program sorts an $M$-element vector. Formulas (i), (ii), and (iii) are conditions that must hold at the respective points in the program. Formula (iii) is the definition of an ordered vector.
(a) I.e., a sequence of values, in computer science jargon.

A formal model enables developers to find out about important properties of the system:
- **Safety** properties: undesired states will *not* be reached, undesired actions will *not* be executed.
  - E.g., *Will the temperature rise above a given threshold? Will the reactor trip on a false alarm?*
- **Liveness** properties: desired states *will* be reached, desired actions *will* be executed.
  - E.g., *Will the reactor reach full power? Will it trip on a real alarm?*

Dynamic analysis

- **Prototype execution**: A prototype is a partial or simplified version of a software system, whose internal structure is often unrelated to that of the final product. Prototypes can be used to assess the feasibility of design choices, to let users validate the specifiers’ interpretation of the requirements, and to experiment with different design choices.
- **Simulation**: The environment (e.g., the plant) where the software will operate may be simulated by a simulator program.
- **Testing**: The software is exercised by a selected set of inputs (*test data*) to discover possible malfunctions.

Program testing can be used to show the presence of bugs, but never to show their absence.
E. Dijkstra, quoted in Dahl et al., Structured Programming. (www.adeptis.ru/vinci/m_part7.html)

Software faults and failures

- A *failure* is an incorrect behaviour of the software.
- A *fault* (commonly known as a *bug*) is a defect in the software that *may* cause an observable failure.
- Software failures are *systematic*, not random.
- Nevertheless, software failures appear to occur randomly, since a given failure may occur only when a particular “unlucky” input *activates* the fault.
- Further, a given fault may cause different failures, a given failure may be produced by different faults, a fault may hide another one...

Oracles

An *oracle* is someone or something that tells us whether a test result is correct or not.
How does the oracle know?
- Well-known data (e.g., standard numerical tables).
- Hand computation.
- Comparison with results of previous or alternate versions.
- *Executable specifications*: A prototype is written in a formal executable language (e.g., LOTOS, Prolog...), and used as a reference.

Testing in the development process (1)

Testing in the development process (2)

- **Unit testing**: Each unit module is tested in the coding phase.
- **Integration testing**: In the SW integration phase, the correct interfacing of each unit module with the rest of the system is tested.
- **Regression testing**: After each change in the software, tests are made to ensure that at least the previous functionalities are preserved. Regression testing is thus applied during coding, integration, and maintenance.
- **System testing**: The final system is tested.

SW integration testing (1)

A module within the architecture. A module under test within the test harness. Another module is integrated.

Data selection criteria

Testing techniques may be grouped by their criteria for data selection:
- **Structural** (white-box): the choice of test data is driven by our knowledge of the internal structure of the software, i.e., we try to exercise (probe) the various parts of the software.
- **Functional** (black-box): the choice of test data is driven by our knowledge of the requirements.
- **Statistical testing**: test data are selected according to a statistical distribution of the input data.

Any single pair \((x_1, y_1)\) with \(x_1 \neq 0\) covers all statements, but two pairs \(\{(x_1, y_1), (0, y_1)\}\), with \(x_1 \neq 0\), cover all branches, and find the fault in stmt 5. Different coverage criteria spot different types of faults.

Data flow criteria

A form of structural criteria, where data are selected in order to cover all paths containing significant operations on variables:
- Variable definition.
- Variable usage in a computation.
- Variable usage in the evaluation of a logical condition.

Functional testing

- **Equivalence classes and boundary values**: The set of possible inputs (including invalid ones) is divided into subsets, such that the values within each class produce equivalent outputs (under some criterion). Test data are then chosen “inside” each class and at the class boundaries.
- **Decision tables**: First, we identify logical conditions that different outputs must satisfy (e.g., a variable may be positive or negative, a signal may be ON or OFF...), then we find combinations of conditions on inputs that make output conditions true or false. These relationships are summarized in a decision table (or graph). Data are selected to cover all columns of the table.
- **Formal models**: If there is a formal model, test data may be selected by criteria based on the model. For example, if the system is modeled as a state machine, possible criteria are coverage of all states, all transitions, all paths...

The statistical distribution from which test data are obtained can be derived from experimental data and/or assumptions on:
- The application’s input space.
- The fault distribution in the object to be tested.
- The frequency with which the different parts of the input space will be exercised.

(Adapted from IAEA TRS 384.)

SAFETY-RELATED STANDARDS

- *Licensing of safety critical software for nuclear reactors – Common positions of seven European nuclear regulators and authorised technical support organizations*, Revision 2007, © AVN (Belgium), BfS (Germany), CNS (Spain), ISTec (Germany), NII (United Kingdom), SKI (Sweden), STUK (Finland).
[11392, 11483, 34], [11483, 11768, 35], [11768, 11841, 36], [11841, 12574, 37], [12574, 13525, 38], [13525, 14545, 39], [14545, 14597, 40], [14597, 14597, 41], [14597, 15323, 42], [15323, 15772, 43], [15772, 16620, 44], [16620, 17204, 45], [17204, 17204, 46], [17204, 17882, 47], [17882, 18816, 48], [18816, 19430, 49], [19430, 19905, 50], [19905, 20529, 51], [20529, 20718, 52], [20718, 21253, 53], [21253, 21646, 54], [21646, 21931, 55], [21931, 22452, 56], [22452, 22514, 57], [22514, 22559, 58], [22559, 22589, 59], [22589, 23093, 60], [23093, 23343, 61], [23343, 23612, 62], [23612, 24556, 63], [24556, 24882, 64], [24882, 25712, 65]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25712, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
91a30fc4eeca6458c55d400ec5ea063dcda4fc4d
**Preface**

Only 28% of application system projects are completely successful (source: The Standish Group “1998 Chaos Report”) – meaning that the project is completed on time and on budget, with all features and functions as originally specified. While updated versions of that report show some improvement, there is still significant room for improvement. This white paper describes, at a high level, some of the many areas to consider regarding the design and development of application systems in order to improve your chances for success.

**Overview**

Application systems are generally best when they are designed and developed using a systematic approach. **Seldom does a system evolve into something great over time without standardized processes in place.** It is critical to understand the goals, requirements, and expectations of the system before development starts. Everyone needs to understand “what success looks like” before the first specification is created or the first line of source code is written. Then it is just as important to plan for success, track the progress, proactively identify and eliminate obstacles, and then plan to test and roll out the new system to production.

These steps hold true even when using XP (Extreme Programming, something that has fallen out of favor) or “Agile” programming techniques. Many people believe that a discipline like Agile programming gives them license to do everything “on the fly” (“that is what refactoring is for, isn’t it?”). That is not true. Goals and scope are important when identifying releases. The primary benefit that we have seen from this methodology, when properly implemented, is that its team practices (such as pair programming, collaboration, and the use of a “Tracker” to track progress on a daily or weekly schedule) benefit the project. The problem is that it can be difficult to be disciplined enough to do this properly. Below are issues that are germane to any development project.
This is not an all-inclusive list by any means, but it does address several critical areas for the majority of projects. By addressing these issues in the initial stages of a project, the likelihood of delivering a system that works as needed, on time and within budget, is greatly increased. If you are unfamiliar with any of these subject areas, then it is best to seek assistance from a consulting company with proven expertise and success.

**Project Management**

Is there a common vision of success? Is the scope of the project aligned with business goals? Is there a project plan? Are there defined tasks with assigned roles and responsibilities? What are the acceptance criteria (i.e., how do we know when we are really done)? Do we know what the available resources are, when they are available, for how long, and for what portion of their time? Are specific tasks mapped out with specific deliverables? It is important to identify problems as early in the process as possible, and that is very difficult if there are no “hard targets.” **The value of project management cannot be overstated.** If needed, is there high-level management support for this effort? If we require additional resources or talent, will funding be available? Has a risk analysis been performed, and is there a risk management plan? Is there a communication plan? There are many aspects to project management that, if performed properly, will add significant value to systems design and development efforts. For more information on project management, please see our various white papers on the topic.

**Service Level Agreements**

It is important to determine what level of service the user community is not only expecting, but actually needs in order to be fully satisfied with the system or application, now and in the future. Service Level Agreements (SLAs) address system availability (e.g., Monday through Saturday, 5:00 a.m. - 10:00 p.m.
CST), acceptable scheduled down time for maintenance, maximum acceptable unscheduled down time (usually due to hardware or software problems), and system response time (average and maximum, e.g., 90% of all online transactions will complete within 2 seconds, and 10% of all online transactions can take up to 4 seconds to complete). This knowledge will be especially useful when configuring the system and when designing applications. It is much easier to design a high-performance application than it is to modify a system to improve performance.

**Business Rules**

This is critical to overall success in a large, integrated environment. Business rules map the fundamental requirements and relationships in the business. For example, a business rule might be “a cashier can only access a cash register while they are working.” The list of business rules can be quite extensive, intricate, and often not well defined. It is very important to document these rules, their interactions, and dependencies. Exceptions to the rules should also be noted. Exception processing can often be very difficult, and therefore exceptions should be minimized as much as possible. The list can then be reviewed to determine where the rules will be implemented (in the application or the server). It is important to understand both the process and procedures involved with exception handling, and then make sure that the new application system is flexible enough to accommodate them.

**Documentation**

Never underestimate the importance of well-designed documentation in the development of an application or system – technical internal IT design and maintenance documentation as well as external end-user documentation. Documentation is the common thread that makes a system tangible, and it provides a consistent perspective of what is being done versus what was requested. Application system projects are much more likely to fail (i.e., not meet business expectations) from end-user adoption issues than from a poor implementation.
A successful end-user rollout, and new-user adoption throughout the life of an application, can only happen with properly designed and thought-out documentation. Documentation design considerations include overall content, optimum delivery mediums, initial introduction and training use, routine use, and helping users deal with problems and exceptions. Remember that end-users are not only the application’s users but also the system administrators and operations staff that need to schedule, operate, monitor, back up, and in any other way support the application system and data. This information should be widely available to people within your company and reviewed on a regular basis.

Accurate, comprehensive internal documentation will not only prove to be of great value during the development process, but also during ongoing support and maintenance. Documentation should include, at a minimum, the system requirements, functional specifications, business rules, a data element dictionary (that defines common data elements and provides domain information), Entity-Relationship (ER) diagrams, and Data Flow Diagrams (DFD). Object-oriented environments should describe the various business objects and their methods, as well as the object/relational mapping when using a relational database. The value of this documentation increases with the complexity of an environment. Documentation is as important and valuable as the software it describes!

If your company is considering outsourcing the development, maintenance, or support of application systems, the value of documentation increases even further. It is easy to make assumptions if you “know everything” about a system. In an outsourcing scenario this context is often absent. Therefore, documentation at all levels needs to be complete, provide specific examples, and be as unambiguous as possible.

**Database Design**

This is critical to the overall success of any large project.
Design problems can limit the overall flexibility of the system, and may cause excessive table joins and/or coding work-arounds that impact performance. We recommend using a three-schema approach to database design: a business schema (which ensures that all business requirements are defined); a logical schema (the normalized business schema); and a physical schema (storage structure selection for performance, controlled denormalization, and security). A design goal should always be consistent access time. This may require summary tables (such as “fact” tables in a data warehouse) to help facilitate that. The applications should contain functionality to monitor and track access times. Nothing frustrates end-users more than inconsistent access time.

Many clients prefer to use a RAD or XP approach with new system development. While this may be great for application system design, it is not a good idea for database design. It is generally worth the extra effort to carefully and properly design the database. Even if the database is already designed, it is good practice to have that design independently reviewed by a design expert.

**Application Design and Development**

Performance and traceability are two critical aspects of system/application design that are often overlooked. It is much easier to design performance into a system than to try to retrofit it later. Critical information such as the service level agreements and the anticipated user load should be used to create a high-performance, highly concurrent application. This includes good transaction and query design, efficient data access and manipulation, and proper error detection/retry/restart logic. This can also improve the overall responsiveness and performance consistency that is so important in a system. Please see our white paper on transaction management and restart logic for more information. It is also important to be able to gather performance metrics and debug/trace information.
Such metrics and trace output can be used to validate performance on an ongoing basis, as well as to isolate problem areas of code when an issue arises. This is especially helpful when it can be enabled through non-intrusive means such as an environment variable or command line option. It can also help by providing the necessary information for performance profiling and trend analysis. Having a good foundation for an application will have an immense payback during the life of that system, and is therefore a very good investment in time, effort, and money. As with database design, consistent performance is a worthy and achievable goal.

**User Interface**

What type of user interface is being used? More and more frequently it is a web browser. Is this a graphical (GUI) or character-based (CHUI) interface? In a web-based environment this could mean the difference between using HTML (which can be operated in a pseudo-GUI manner) and Java (which is a true GUI tool, but could be made to handle a CHUI application if necessary). Once the type of interface is selected, it is important to select the best tool or environment for developing that interface (rather than picking a tool and then deciding on the type of interface).

When developing browser-based systems it is important to contrast the functionality and technology used with the anticipated browsers and platforms. Different browsers can render a screen in many different ways, and therefore testing must be done not only with several browsers and versions, but also on several platforms. Designing an interface that only works with a single web browser (or worse yet, a single version of that web browser) is not a good practice. Older browsers often cause problems, but it may not be possible to require that an end user use a specific browser or version of that browser.
Furthermore, certain technologies (Java, JavaScript, Active Server Pages, cascading style sheets) have the potential to restrict the overall functionality of the application when they don’t execute properly in a given browser. NOTE: While this is not as much of a problem anymore, it can still present numerous problems when applications are intended to perform in a non-controlled environment (such as the Internet).

**Security**

Will the system interface with existing systems? What about external systems? What are the requirements to safeguard the data? Authentication and identification are critical for most web applications. These need to be implemented in a way that is secure, scales well, does not place too much of a load on the system, and is easy to use (smart cards are ideal, but not widely used). Use of relative paths within protected directories is a good and simple approach to security once the session has been authenticated and the end user identified. It is very important to have an understanding of what the potential threats are, and what the impact of a security breach or exploit would be. What is the cost of such a breach? What will the loss of sensitive customer information, trade secrets, pricing and availability, and other competitive information be to your business – directly and indirectly? Security, like performance, should be designed into a system.

**Open Systems & Open Source**

In general, open systems and open source are preferable to proprietary systems (no matter who makes the proprietary system or how open they claim it to be). There are usually multiple avenues for support, increased options for tools and plug-ins that enhance the features or maintainability of the system, and readily available reference materials to assist those working on the systems. Keep in mind that existing “legacy” systems may need to interface with the new system, so look for exposed, powerful APIs (application program interfaces).
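Serving files by relative path, as suggested in the security discussion above, is only safe if a request cannot escape the protected directory. A minimal sketch of that guard in Python (the directory location is an invented example):

```python
from pathlib import Path

PROTECTED_ROOT = Path("/srv/app/protected")  # illustrative location

def resolve_protected(relative_path):
    """Resolve a user-supplied relative path, refusing anything that
    would escape the protected directory (e.g., via '..')."""
    root = PROTECTED_ROOT.resolve()
    candidate = (root / relative_path).resolve()
    # resolve() collapses '..' components, so a traversal attempt ends
    # up outside the protected root and is rejected here.
    if root not in candidate.parents:
        raise PermissionError(f"path escapes protected area: {relative_path}")
    return candidate

print(resolve_protected("reports/q1.html"))
```

The check runs after authentication has established the session; it only ensures that an already-identified user cannot reach files outside the area they were granted.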
**XML** is becoming a more commonly used and accepted interface, and should therefore be considered. It is now fairly common to find **Unicode** support in software and hardware products. In a global environment with language differences (traditional single-byte versus double-byte character sets) and numerous localization issues (such as currency), it is important to ensure that your system will support a diverse global user base.

**Popular Products**

Keep in mind that choosing popular products can be beneficial when looking for third-party products and tools, reference and training materials, as well as staff and consultants. The “best choice” is not always the **best product**. While popularity should not be your primary consideration, it is something that should be taken into consideration, especially for proprietary products.

**Training**

It will be important to provide training to the development and administrative staff in order to fully utilize their capabilities. Training is a sometimes underestimated cost of implementing new systems, and can negatively impact a development or production rollout schedule if not properly addressed. The Project Manager should be intimately familiar with project management concepts and their application. System Designers should understand the technology that they are working with, and be familiar with various modeling methods. Database Administrators (DBAs) and Developers should have a good working knowledge of SQL and an understanding of ANSI transaction semantics. The DBA will usually benefit from a performance tuning course, and can pass that knowledge along to Developers. Having the right skills at all levels is important. Good training will help quickly bridge those gaps. Spending time and money on good, relevant training is generally much more cost effective than allowing people to learn from their own mistakes.
Typically it is good to either mix training with hands-on experience or have the two occur within a close time frame. A benefit of working with knowledgeable consultants early in the design and development process is the knowledge transfer that occurs, jump-starting the process.

**Development Best Practices**

Best practices are a way to increase quality by utilizing approaches that have been proven over time. There are basic things, such as:

- Making sure the team understands the goals, requirements, and constraints.
- Making sure that the team understands the environment. For example, security issues would likely be different for an internal order processing system versus an external order processing system accessible from the Internet.
- Making sure that the team understands the impact of lost data and/or inconsistent data. While this is never good, it can be far worse in a regulated environment (e.g., FDA, SOX, HIPAA).
- Making sure that the team understands the cost and impact of downtime. Knowing that “downtime currently costs our organization $4 million/hour” might motivate the team members to be more careful, or to make the system a little more robust in order to avoid that scenario.

Is the source code being managed by a tool like SCCS or RCS? Is only one person modifying a source module at a time? Are “make” files properly designed and implemented so that they know when a dependent object has changed? Are versions for each successful system compile tracked? Could you re-create any previous version of the product at any time? Is all of this information (these assets) being checked in and backed up on a frequent basis? Are there separate and distinct staging areas for key purposes (such as a project team area, an integration area, a QA area, and a production staging area)? Are you sure? Does the source code test for anticipated errors and have generic traps for unknown / unanticipated conditions?
Does it log that information or create an event notification so that others are aware that it happened? Are consistent naming conventions and data types used throughout the system? Are there enough comments in the code itself?

There are many areas involved with the design and support of application system environments – for example, production control / release management, database administration, systems administration, and data administration. Each area should be analyzed to ensure that best practices are implemented and consistently utilized. The saying that “a chain is only as strong as its weakest link” is a great metaphor for the complexity of most information technology (IT) application systems.

**Quality Management**

The Quality Management Plan really sets the tone for everything done within the project. It identifies the relevant quality standards, defines the processes and best practices, identifies tools and procedures to be used, and much more. The emphasis is usually on prevention over inspection, with a goal of identifying problems as early in the development cycle as possible. Six Sigma and Total Quality Management (TQM) techniques are often the foundation for a quality management plan.

Testing is one aspect of quality management, and is key to the success of any software development effort. Tests are designed to validate various aspects of the system, for example: functionality (i.e., the output of the process), the process itself, performance, and the integration of various components. Ideally, validation testing should include both low-level (unit/module testing, integration testing) and high-level (usability testing, functionality testing, system testing, acceptance testing) tests. In addition to validating functionality, it is important to validate performance and concurrency goals or requirements.
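The “generic trap and log” questions raised under best practices amount to a simple pattern: catch what you expect explicitly, trap everything else generically, and always leave a record. A Python sketch of that pattern (the logger name and the notification hook are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")  # illustrative logger name

def notify_operations(message):
    """Stand-in for a real event notification (email, pager, queue)."""
    log.warning("EVENT: %s", message)

def parse_quantity(raw):
    """Parse a quantity field, trapping both the anticipated error
    (malformed input) and anything unanticipated."""
    try:
        qty = int(raw)
        if qty < 0:
            raise ValueError("quantity cannot be negative")
        return qty
    except ValueError as exc:          # anticipated: bad input
        log.error("bad quantity %r: %s", raw, exc)
        return None
    except Exception:                  # generic trap: unanticipated
        log.exception("unexpected failure parsing %r", raw)
        notify_operations(f"parse_quantity failed on {raw!r}")
        raise

print(parse_quantity("12"), parse_quantity("twelve"))
```

Anticipated errors are handled and logged locally; unanticipated ones are logged with a full traceback, an event is raised so others know it happened, and the error is re-raised rather than silently swallowed.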
It is much better to identify a problem in a pre-production environment than when there are hundreds of production users relying on the new system. Testing should be performed against representative sets of data and with a realistic load. In order for a test to be useful it is important that the desired results are known, and that the test is repeatable (both under success and failure conditions). Test cases and test results should always be saved for future reference, and whenever possible performance metrics should be gathered. Additionally, specific source code module information should be tracked along with the tests to ensure that the environment can be reconstructed at a later date if necessary.

There are five primary types of testing that Comprehensive Solutions recommends: white-box testing, black-box testing, regression testing, static code analysis, and stress testing. The following is a brief description of each of these testing methods.

**White-box testing** tests and validates the process as a whole. This testing requires that appropriate test cases be designed based on the object’s internal coding structure. Appropriate input and expected output must be devised, and the actual output compared with the expected output. This is also good for exception testing, range testing, and domain testing. Exception testing and handling is an important and often overlooked aspect of testing.

**Black-box testing** is requirements-based testing. It verifies that, based on its specification, an object generates the correct output(s) from a given set of input parameter values. This can be a complex activity and requires that multiple combinations of input parameter values be generated and used to ensure that the object produces consistent and correct results as input values are varied. The output of black-box testing is typically saved for use during regression testing.
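Saving black-box outputs for later comparison is easy to mechanize. A minimal golden-file sketch in Python – the file name and the function under test are invented for illustration, not from the paper:

```python
import json
from pathlib import Path

def classify(order_total):
    """Toy function under test: bucket an order by its total."""
    return "large" if order_total >= 100 else "small"

GOLDEN = Path("golden_classify.json")  # illustrative baseline file
CASES = [15, 99, 100, 250]

def record_golden():
    """Run the black-box cases once and save the outputs as the baseline."""
    GOLDEN.write_text(json.dumps({str(c): classify(c) for c in CASES}))

def regression_check():
    """Re-run the same cases and compare against the saved baseline;
    any mismatch means a change introduced a regression."""
    baseline = json.loads(GOLDEN.read_text())
    return {c: classify(c) == baseline[str(c)] for c in CASES}

record_golden()
print(regression_check())
```

Because the exact same cases are replayed after every change, a mismatch against the baseline is direct evidence of a regression – the property the next section relies on.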
**Important Note:** As part of both white-box and black-box testing, the exercise of a given object’s source code should be tracked. This ensures that all logic and execution paths are tested. It also helps to find unused code that could cause future errors and/or confusion during maintenance. This coverage analysis should be performed down to the source-code line level. Typically this is accomplished by having someone familiar with the source code develop comprehensive test scenarios, and then having someone familiar with the business process alter those tests. Documentation of the tests performed should indicate the revision of the source code tested.

**Regression testing** uses the results of black-box testing to compare new results with saved results. If an object has been tested thoroughly with no errors found, and a change is later made to the object, it must be verified that the change did not introduce any new defects. Testing to make sure the object has not taken a step backwards is called “regression testing.” Running different tests after a change will not allow verification that no new defects were introduced to the code. Therefore, the same tests must be run after each change. New tests may be added during the development cycle, but the existing tests are kept as part of the testing process. It is highly recommended that regression testing be performed on a daily basis in an XP development environment, after the daily source code build (which follows the daily source code check-in). Timing information (metrics) should be gathered during the regression tests and analyzed as well. This will help identify negative trends and performance issues early in the process.

**Static code analysis** determines whether the object being tested conforms to a set of coding rules (standards). Some of the items tested for are unused code, un-initialized variables, design errors, run-time optimization issues, and errors in naming and coding practices.
This type of testing generally goes hand-in-hand with white-box testing.

**Stress testing** helps to determine how the object will behave under varying service loads. This is commonly performed when a functional subset of the overall software system has been produced, and again when the entire system has been produced. Stress testing generates varying input loads that are fed to the software under test. This helps to find and eliminate coding errors and system bottlenecks before the software is placed into production. This is one area where an automated tool is especially useful.

**System Configuration**

Does the selected hardware platform provide more performance and resources than is anticipated? Is there room for growth, or is the machine at its peak capacity? Is there redundancy? What about clustering and failover support? Hardware configuration and capacity planning is an art in itself. The best advice is to over-anticipate the system requirements, over-configure the system, and make sure that there is plenty of room for growth for when demand on the system exceeds the planned capacity. Having the best application system in the world will still seem inadequate when run on an underpowered computer!

**Summary**

New system development can be complicated, especially when using new technology. Costs are minimized when the entire process is comprehensive and controlled. The value proposition of using good consultants is that the process can be streamlined and improved, minimizing and controlling risk while maximizing the probability of success.

**About the Author**

Chip Nickolett, MBA, PMP, is the President of Comprehensive Solutions. He has been a consultant (technical and management consulting) since 1994, and started Comprehensive Solutions in 1999. He started his career as a Developer more than 20 years ago, and has managed many development efforts since then, including large projects with distributed teams and outsourcing efforts.

Let Us Help You Succeed!
Call today to discuss ways that Comprehensive Solutions can help your organization save money and achieve better results with your IT projects. We provide the confidence that you want and deliver the results that you need.

Comprehensive Solutions
4040 N. Calhoun Road, Suite 105
Brookfield, WI 53005 U.S.A.
Phone: (262) 544-9954
Fax: (262) 544-1236

Copyright © 1999-2008 Comprehensive Consulting Solutions, Inc. All Rights Reserved. No part of this document may be copied without the express written permission of Comprehensive Consulting Solutions, Inc., 4040 N. Calhoun Rd., Suite 105, Brookfield, WI 53005. This document is provided for informational purposes only, and the information herein is subject to change without notice. Please report any errors herein to Comprehensive Consulting Solutions. Comprehensive Consulting Solutions, Inc. does not provide any warranties covering, and specifically disclaims any liability in connection with, this document. All product names referenced herein are trademarks of their respective companies. Use of these names should not be regarded as affecting the validity of any trademark or service mark.
{"Source-Url": "http://comp-soln.com/app_issues.pdf", "len_cl100k_base": 4894, "olmocr-version": "0.1.48", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 22183, "total-output-tokens": 5384, "length": "2e12", "weborganizer": {"__label__adult": 0.0003490447998046875, "__label__art_design": 0.0003609657287597656, "__label__crime_law": 0.00036716461181640625, "__label__education_jobs": 0.0017337799072265625, "__label__entertainment": 4.4345855712890625e-05, "__label__fashion_beauty": 0.00012946128845214844, "__label__finance_business": 0.0034503936767578125, "__label__food_dining": 0.0003006458282470703, "__label__games": 0.00040221214294433594, "__label__hardware": 0.000820159912109375, "__label__health": 0.0003254413604736328, "__label__history": 0.00014650821685791016, "__label__home_hobbies": 0.00010162591934204102, "__label__industrial": 0.0004734992980957031, "__label__literature": 0.0001817941665649414, "__label__politics": 0.0001666545867919922, "__label__religion": 0.0002944469451904297, "__label__science_tech": 0.002941131591796875, "__label__social_life": 7.390975952148438e-05, "__label__software": 0.00989532470703125, "__label__software_dev": 0.9765625, "__label__sports_fitness": 0.00020015239715576172, "__label__transportation": 0.0003390312194824219, "__label__travel": 0.00017571449279785156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26326, 0.02176]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26326, 0.216]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26326, 0.94232]], "google_gemma-3-12b-it_contains_pii": [[0, 540, false], [540, 3539, null], [3539, 6551, null], [6551, 9663, null], [9663, 12550, null], [12550, 15285, null], [15285, 18069, null], [18069, 21129, null], [21129, 24341, null], [24341, 25791, null], [25791, 26326, null]], "google_gemma-3-12b-it_is_public_document": [[0, 540, 
true], [540, 3539, null], [3539, 6551, null], [6551, 9663, null], [9663, 12550, null], [12550, 15285, null], [15285, 18069, null], [18069, 21129, null], [21129, 24341, null], [24341, 25791, null], [25791, 26326, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26326, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26326, null]], "pdf_page_numbers": [[0, 540, 1], [540, 3539, 2], [3539, 6551, 3], [6551, 9663, 4], [9663, 12550, 5], [12550, 15285, 6], [15285, 18069, 7], [18069, 21129, 8], [21129, 24341, 9], [24341, 25791, 10], [25791, 26326, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26326, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
ec8b6cd8886965a36615937a4ad9b5970f7020c2
User-Customizable Web Components for Building One-Page Sites Pasquale Lisena, Jetmir Xhembulla, Giovanni Malnati Department of Control and Computer Engineering Politecnico di Torino Turin, Italy e-mail: {pasquale.lisena, jetmir.xhembulla, giovanni.malnati}@polito.it Pasquale Morra Research and Development Division Seat PagineGialle S.p.a. Turin, Italy e-mail: morra.pasquale@seat.it Abstract—Most online website builders work by combining and customizing reusable HTML modules. This approach can raise the risk of conflicts among modules. The World Wide Web Consortium (W3C) is writing the specification of Web Components. This standard provides a browser-native solution for realizing encapsulated Document Object Model (DOM) elements, in which the Cascading Style Sheets (CSS) and JavaScript scope is locally bound and the interaction with the document is strictly designed by the component author. Libraries have been built upon this standard, Google's Polymer being an example, which provide a declarative and easy way to create Components. In this paper, we provide a solution to this limit of the module approach in website builders by using Web Components as modules that are customizable by the end user. Our approach uses standard web technologies that modern browsers support natively. We describe how a customizable Web Component is designed and how its options are bound to the generator UI. Furthermore, we show an application of this approach in a landing page generator. We demonstrate that the generator can re-import the generated HyperText Markup Language (HTML) and edit it, without any intermediary data structure (i.e., eXtensible Markup Language, XML, or JavaScript Object Notation, JSON). Finally, we outline further development of this approach. Keywords—Website generation, Web Components, HTML5, modularity, end-user generation, SME. I.
INTRODUCTION The spread of smartphones over the last five years has deeply changed the market, forcing businesses of every kind and size to adapt their online presence in order to avoid losing an important market share. Overwhelmed by content, constantly connected consumers now spend only the slightest part of their time on a Web site, mostly from mobile devices; they are bothered by navigation trees and expect to find the information they need on the first page, with no more "click and wait". For these and further reasons, one-page sites are nowadays very popular, often combined with responsive design patterns. This worldwide transformation is quick and radical and produces two consequences: on one hand, businesses, which mostly lack the skills required to manage and distribute content on their own, have had to delegate these tasks to service companies; on the other hand, media agencies, in order to reduce costs and fulfill the market demand for highly customized products, have abandoned their previous artisanal production methods in favor of a module-based approach. Following its mission of digitalizing its wide customer portfolio of enterprises, the Italian media agency Seat PagineGialle has identified the semi-automation of the page creation process as a key point in various contexts: low budgets, e.g., small and medium-sized enterprise (SME) websites [1], and time-limited contents, e.g., a promotional online campaign. The new World Wide Web Consortium (W3C) standard of Web Components represents a standard solution for the modularization and reuse of HyperText Markup Language (HTML) in a web page. Isolation of scope, reusability, and freedom from server-side logic are some of the advantages of this standard. This solution is built directly into the browser, either natively or thanks to so-called "polyfills". This paper presents a Web page generator that uses Web Components as modules.
In our Web application, these Web Components are not assembled in code by the developer; instead, a friendly interface makes their combination, manipulation and customization available to the end user. As a corporate requirement, integration with other existing and different Seat PagineGialle products must be possible. The rest of the paper is structured as follows: Section II briefly overviews current trends in website generators, with their strengths and limits, and describes the standard of Web Components, which we use as modules in Section III. Section IV shows an application in a real production environment. Finally, conclusions are drawn in Section V. II. STATE OF THE ART A. Trends and limits in website building A notable number of online website builders are available on the market, designed for end users with no knowledge of HTML, Cascading Style Sheets (CSS) or JavaScript. The most common and powerful feature of these tools is their modular design [2]: standalone parts of the page, called modules, can be combined, often thanks to a drag-and-drop interface, and customized in style, color, text content and position, so that their reuse produces a different look and content each time. In the resulting HTML document, modules are actually slices of its Document Object Model (DOM) that represent headers, footers, image galleries, text boxes and various kinds of widgets. Each module must follow the generator's internal set of rules and conventions (its framework) in order to avoid conflicts with its siblings: duplicated IDs, influence of CSS rules from other modules (or, vice versa, overflow of its own CSS outwards), and unscoped scripts [3]. Each framework has its own syntax, which produces a lack of interoperability between different technologies [4]. Usually, the final output of such a builder is a thick tree of nested tags (mostly DIVs, the HTML Document Division element) that is hard to convert back into an editable format.
These template-based solutions [1] [5] need an eXtensible Markup Language (XML) or JavaScript Object Notation (JSON) structure with a list of modules and options, stored on the builder's server for future editing; this is an expedient, because the proper language for describing a Web page is HTML. Other applications could use modules generated on the server, such as portlets; however, these need specific server environments able to deploy and serve them, and they are hard to integrate into custom applications, as happens in the Seat PagineGialle case. B. The Web Components standard In the context of the HTML5 revolution, the W3C is defining a standard for Web Components. This standard allows the creation of new types of DOM elements and their use in a document as if they were DIVs, INPUTs or other standard HTML tags. Creating a component means writing its HTML template, defining its CSS rules and managing its properties, methods and lifecycle with JavaScript. For a Web developer, using a Web Component is as simple as inserting a tag such as <my-component-name> in the HTML and dealing with it like any native HTML tag. The family of Web Components W3C standards includes four new specifications: - Custom Elements [6], which enable the definition and use of custom DOM elements in a document; - HTML Imports [7], for including and reusing HTML documents as dependencies; - Templates [8], inert DOM elements for describing HTML structures; - Shadow DOM [9], a method for encapsulating DOM trees and bounding their interaction with the whole document. Thanks to these technologies, we can define the structure of a component in a non-rendered <template> tag and register it as a Custom Element, so that the browser becomes aware of the match between the component's tag name, e.g., <descriptive-content>, and its definition. When an instance of a registered component is created in the page, the browser creates a parallel DOM tree, called the Shadow DOM, associated with the component element.
This Shadow DOM contains the structure we defined in the <template>. Although not visible as a child node of the element, this structure is rendered and the user can interact with it. Everything that lives in the Shadow DOM has its own isolated scope and can react to events and attribute modifications on its parent component. This isolation solves all problems with duplicated IDs and ingestion of external CSS; besides, it provides a bounded scope for scripts: the Shadow DOM is solidly separated from the main document [10] [11], and the only possible interactions are those explicitly allowed by the component designer. Finally, the specification introduces a standard way of importing components, with the <link rel="import"> tag. At a glance, this specification covers all the needed requirements: reusability and isolation of JS and CSS; moreover, it is a browser standard, so it is fully compatible with any other technology that runs on the server or on the client. These and other advantages of the Web Components ecosystem have been investigated thoroughly in [4]. C. Polymer In the last years, many libraries have been developed with the dual aim of extending support to older browsers by using polyfills [12] and of further simplifying the implementation of Web Components. Recently Google released version 1.0 of the Polymer library [13], which offers a declarative way of creating components. In Polymer, a component definition is an HTML page that contains imported resources (dependencies, style, template) and a call to the Polymer function for the configuration of properties, methods and lifecycle callbacks. We chose Polymer for the clarity of its component code and for its ease of use. III. WEB COMPONENTS AS MODULES We can split the core of our approach into two complementary branches: the design of a component and its manipulation. A. Design of a component Polymer provides a declarative syntax for creating Web Components.
We describe the general structure of a Polymer component and provide details only for the parts that we added or that are functional to the tasks that follow. We refer to the example in Figure 1. The whole component is wrapped in an inert <dom-module> tag. Its id attribute is the tag name of our component. All the nodes contained in <dom-module> will be encapsulated in the Shadow DOM of the tag. We can group these nodes into three distinct sections. The first one is the style section, which defines the look of the component. Thanks to DOM isolation, there are no constraints on the specificity of rules, because they will be applied only in the context of the component. Polymer also allows including other style-specific components, like a CSS reset or a common base style for all components. Moreover, the library includes support for the CSS Variables specification [14], currently in Working Draft state. For our purposes, the important points are that the same CSS variable, and therefore the same value, can be applied to different selectors, and that this value can be set through the Polymer Application Programming Interface (API). The second part is the <template> tag, whose content will form the internal DOM structure of our component. Double curly brackets denote the insertion points for property values, so that “{{text}}” will be replaced with the value of the “text” property.
```
<dom-module id="descriptive-content">
  <style include="component-base"></style>
  <style>
    p { color: var(--descriptive-text-color); }
  </style>
  <template>
    <p>{{text}}</p>
  </template>
  <script>
    Polymer({
      is: 'descriptive-content',
      behaviors: [ComponentBehavior],
      properties: {
        text: {
          type: String,
          logicType: 'textarea',
          value: 'Lorem ipsum....',
          label: 'Text',
          reflectToAttribute: true,
          customizable: true
        },
        textColor: {
          type: String,
          logicType: 'color',
          value: '#ffffff',
          cssVariable: '--descriptive-text-color',
          label: 'Text color',
          reflectToAttribute: true,
          observer: 'computeStyle'
        }
        // other properties
      }
      // methods and lifecycle callbacks
    });
  </script>
</dom-module>
```
Figure 1. The definition of the <descriptive-content> component.
```
<descriptive-content text="Lorem ipsum..." text-color="#ffffff">
  Lorem ipsum...
</descriptive-content>
```
Figure 2. The usage of the <descriptive-content> component. Finally, we handle the registration of the component through the Polymer function. The properties object contains the values that will be bound in the template. According to the library, a property has the following sub-properties: - **type**, the JavaScript type of the property, - **value**, its default value, - **reflectToAttribute**, if true, causes the attribute to be set on the host node when the property value changes, - **observer**, a method to call on property changes. In addition, we added the following sub-properties: - **customizable**, when set to true, marks the property as a customizable option; - **logicType**, which refers to a human concept rather than a coding one; each logic type has a specific User Interface (UI) input element; we support the logic types “text”, “color”, “background”, “textarea” and “image”; - **cssVariable**, which indicates that this property is connected to a CSS variable in the <style> tag; - **label**, a human-readable label for display purposes.
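As an illustration of how these added sub-properties could drive the generator's UI, the sketch below maps a property descriptor's logicType to the input element used to edit it. This helper is hypothetical, written only for this explanation; the logicType values and the custom <background-input>/<image-input> element names come from the paper, but the function itself is not the generator's actual code.

```javascript
// Hypothetical sketch: choose the tag name of the UI input element used
// to edit a customizable property, based on its logicType sub-property.
const INPUT_TAGS = {
  text: 'input',                  // <input type="text">
  color: 'input',                 // <input type="color">
  textarea: 'textarea',           // <textarea>
  background: 'background-input', // custom Web Component
  image: 'image-input'            // custom Web Component
};

function chooseInputTag(descriptor) {
  if (!descriptor.customizable) {
    return null; // property is not exposed as an option
  }
  return INPUT_TAGS[descriptor.logicType] || 'input';
}
```

For the text property of Figure 1 ({ logicType: 'textarea', customizable: true }), the helper selects a <textarea>.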
A modification of the attributes on the component's host tag (Figure 2) will be reflected in a modification of the properties and consequently of the component's content or style as rendered by the browser (Figure 3). We also specify a behavior, which is Polymer's way of making certain properties and methods inheritable. The ComponentBehavior is a custom behavior that manages some customization-related tasks. It shows an options button on the element that, when clicked, triggers a "settingRequested" event. Additionally, it defines the "computeStyle" method, set as the observer of properties with a cssVariable, in order to propagate changes to the CSS variables. B. Component manipulation The generator should read the component's properties and provide the user with a proper User Interface (UI) for modifying them. Manipulation starts when our page generator intercepts the "settingRequested" event from one of the components. The component's customizable properties are retrieved from the source DOM element via JavaScript. For each of them, we read the logicType property and, according to it, choose a proper input element. For some types, the choice is a native HTML input element: an <input type="text"> for "text", a <textarea> for "textarea". For types like "background" or "image", no suitable input tag exists. Once again, we use the Web Components specification to create custom <background-input> and <image-input> elements. For example, we implemented a complex input for backgrounds, letting the user choose between a color, an uploaded image, an image from our gallery and a transparent background. This complex component, like all the option components we defined, exposes a value property that contains the current selection, exactly like native input elements. We initialize the input component with the current value of the property and display its label. The value property of the input element changes upon user interaction.
Consequently, we overwrite the corresponding property on the source element: this change propagates to the component's Shadow DOM and to its attributes in the HTML, thanks to the reflectToAttribute feature. In this way, the HTML node always carries the current state of the component in its attributes. Form components have a special behavior. They are designed as containers for an array of <input-field> components, each of which exposes as customizable properties its label, placeholder, type (i.e., text, date, mail) and whether it is required. The user can modify these options and add, move or remove input fields. IV. THE PAGE GENERATOR We currently use the described approach in xPlanner, a beta Web app for promotional campaign management at Seat PagineGialle. A succinct demo of the landing page generator is available at goo.gl/LW3WGE. A. Application overview We propose a classic drag-and-drop Graphical User Interface (GUI), visible in Figure 4. In the right column, we show a gallery of modules, which are Web Components. When the user accesses the tool for the first time, the left side appears blank: the user can drop his/her favorite components onto it in the place they prefer. Once in the drop area, a component shows a settings button in its top-left corner. When the button is pressed, the "settingRequested" event is dispatched and the right bar shows the available options for the active component, in the form of the input elements described in Section III.B (Figure 5). Every edit of those options is reflected in the source components. The user can then continue to add components and modify them until satisfied with the result. By clicking the "Save" button, the components are inserted into a full HTML skeleton and the final HTML is exported and stored on the server. Figure 4. The GUI of the page generator. Figure 5. The right column visualizes the customizable options. We report a common result in Figure 6.
As shown, the appearance of the components in this HTML has not changed: each tag still appears without its inner template (once again hidden in the Shadow DOM), and its attributes reveal the values of the properties as the user set them. B. Application flow Figure 7 shows the complete flow of the application. The generator is in charge of importing the components that the user can add to the page. By reading the exposed options from the component description itself, the generator makes their manipulation possible, together with arbitrary ordering. At the end of the process, it generates the HTML file. When this file reaches the browser, the needed components are imported and the page is rendered. The HTML itself is ready for further editing: when the user wants to edit his/her page again, the generator simply retrieves it from the server, extracts the content from the body and inserts it into the drop area of the GUI. The process then continues in the same way. The application has been developed entirely with Web Components and the Polymer technology. In this case, the intrinsic isolation of each Web Component caused some implementation issues. This happens specifically when two components that are not direct siblings in the DOM need to communicate with each other, e.g., an option input and the component that input refers to: the isolation forced us to propagate each piece of information up to the nearest common parent of both, which means declaring a data binding in the definition of multiple components. It would be useful to explore improvements to this approach and the possibility of using alternative frameworks for Web application development on top of the components. C. Evaluation of the usability We introduced the complete xPlanner beta application, site generator included, to a small panel of webmasters and sales agents of SeatPG. The former have a quite good background in Web design and development, while the latter have commercial skills.
The feedback differed between the two profiles. People with computer experience, and in particular with this type of tool, gave a good evaluation, because it fulfilled the expectations based on their own past experience. On the other hand, the generator was considered complicated by people with low technical skills. In particular, they were lost among the large number of options available for each component. We are considering a different UI approach and the creation of a light version of the application. V. CONCLUSION AND FUTURE WORKS We built a Web page generator that works by combining and customizing Web Components. The features of the standard grant native isolation of CSS and JavaScript to each component, avoiding namespace conflicts. The generated HTML is a standalone structure that can be used both for viewing the page and for re-importing it into the generator for further editing. The final HTML therefore describes the page perfectly, in a form suitable for both the browser and the generator application; no other structure needs to be stored. For the first time, the Web Components standard has been used in this way: the combination and manipulation of each component is no longer the developer's task, carried out in code; instead, the final user performs it through a specific User Interface, in a context of end-user programming. Page editing and assembly are managed client-side. Components are modular: they can be defined and edited simply by relying on the existing standard, and they are independent of any other client- or server-side technology. The HTML generates and describes itself, following its own rules instead of backend logic. This allows components to behave correctly in complex scenarios. In order to further improve the approach, we intend to give users with a minimum of Web development skills the possibility of adding their own components.
The idea is to design a collaborative platform for SeatPG webmasters, who are constantly in touch with the sales force, in order to make them autonomous in creating suitable modules specific to a business category. Each webmaster should be able to design a component, defining its HTML, CSS and customizable properties, and share it with the internal community in a collaborative way. Other improvements could involve support for components with external dependencies (by using a dependency manager, e.g., npm or bower). To limit server requests, we will add a process that concatenates the used components inside the exported HTML; in the Polymer naming convention, this process is called Vulcanization [15]. We are also working on the definition of a color theme for the whole page, using CSS variables. REFERENCES
{"Source-Url": "http://www.thinkmind.org/download.php?articleid=achi_2016_19_40_20117", "len_cl100k_base": 4412, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22971, "total-output-tokens": 5751, "length": "2e12", "weborganizer": {"__label__adult": 0.00024127960205078125, "__label__art_design": 0.00049591064453125, "__label__crime_law": 0.0001678466796875, "__label__education_jobs": 0.0003039836883544922, "__label__entertainment": 4.863739013671875e-05, "__label__fashion_beauty": 0.00010502338409423828, "__label__finance_business": 0.00015878677368164062, "__label__food_dining": 0.00020420551300048828, "__label__games": 0.0002180337905883789, "__label__hardware": 0.0005717277526855469, "__label__health": 0.00019991397857666016, "__label__history": 0.00013124942779541016, "__label__home_hobbies": 5.549192428588867e-05, "__label__industrial": 0.00022292137145996096, "__label__literature": 0.00010484457015991212, "__label__politics": 0.00012576580047607422, "__label__religion": 0.00025653839111328125, "__label__science_tech": 0.004810333251953125, "__label__social_life": 4.9591064453125e-05, "__label__software": 0.00742340087890625, "__label__software_dev": 0.9833984375, "__label__sports_fitness": 0.0001474618911743164, "__label__transportation": 0.0002440214157104492, "__label__travel": 0.0001537799835205078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24691, 0.01015]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24691, 0.32032]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24691, 0.86396]], "google_gemma-3-12b-it_contains_pii": [[0, 5128, false], [5128, 10713, null], [10713, 14813, null], [14813, 17069, null], [17069, 20994, null], [20994, 24691, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5128, true], [5128, 10713, null], [10713, 14813, null], [14813, 17069, 
null], [17069, 20994, null], [20994, 24691, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24691, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24691, null]], "pdf_page_numbers": [[0, 5128, 1], [5128, 10713, 2], [10713, 14813, 3], [14813, 17069, 4], [17069, 20994, 5], [20994, 24691, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24691, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
47bdc54c655bb8b4e95b499f04cb277645b7dfe4
A Python-based Post-processing Toolset For Seismic Analyses Steve Brasier*,†, Fred Pollard† Abstract—This paper discusses the design and implementation of a Python-based toolset to aid in assessing the response of the UK’s Advanced Gas Reactor nuclear power stations to earthquakes. The seismic analyses themselves are carried out with a commercial Finite Element solver, but understanding the raw model output this produces requires customised post-processing and visualisation tools. Extending the existing tools had become increasingly difficult and a decision was made to develop a new, Python-based toolset. This comprises a post-processing framework (aftershock) which includes an embedded Python interpreter, and a plotting package (afterplot) based on numpy and matplotlib. The new toolset had to be significantly more flexible and easier to maintain than the existing code-base, while allowing the majority of development to be carried out by engineers with little training in software development. The resulting architecture will be described with a focus on exploring how the design drivers were met and the successes and challenges arising from the choices made. Index Terms—python, numpy, matplotlib, seismic analysis, plotting 1 INTRODUCTION Nuclear power in the UK is provided by a fleet of Advanced Gas-cooled Reactors (AGRs) which became operational in the 1970s. These are a second-generation reactor design and have a core consisting of layers of interlocking graphite bricks which act to slow neutrons from the fuel to sustain the fission reaction. Although the UK does not regularly experience significant earthquakes, it is still necessary to demonstrate that the reactors could be safely shut down if a severe earthquake were to occur. The response of the graphite core to an earthquake is extremely complex and a series of computer models has been developed to simulate the behaviour.
These models are regularly upgraded and extended as the cores change over their lives to ensure that the relevant behaviours are included. The models are analysed using the commercial Finite Element Analysis code LS-DYNA. This provides predicted positions and velocities for the thousands of graphite bricks in the core during the simulated earthquake. By itself this raw model output is not particularly informative, and a complex set of post-processing calculations is required to help engineers to assess aspects such as: • Can the control rods still enter the core? • Is the integrity of the fuel maintained? This post-processing converts the raw position and velocity data produced by the model into parameters describing the seismic performance of the core, assesses these parameters against acceptable limits, and presents the results in tabular or graphical form. This paper describes a recent complete re-write of this post-processing toolset. It seeks to explore some of the software and architectural decisions made and examine the impact of these decisions on the engineering users. 2 BACKGROUND The LS-DYNA solver produces about 120GB of binary-format data for each simulation, split across multiple files. The existing post-processing tool was based on Microsoft Excel, using code written in Visual Basic for Applications (VBA) to decode the binary data and carry out the required calculations and Excel’s graphing capabilities to plot the results. The original design of the VBA code was not particularly modular and its complexity had grown significantly as additional post-processing calculations were included and to accommodate developments in the models themselves. In short, there was significant "technical debt" [Cun92] in the code which made it difficult to determine whether new functionality would adversely impact the existing calculations. 
The start of a new analysis campaign forced a reappraisal of the existing approach as these issues meant there was low confidence that the new post-processing features required could be developed in the time or budget available. The following requirements were identified as strongly desirable in any new post-processing tool: • A far more modular and easily extensible architecture. • More flexible plotting capabilities. • A high-level, modern language to describe the actual post-processing calculations; these would be implemented by seismic engineers. • Better performance; the Excel/VBA post-processor could take 4-6 hours to complete, which was inconvenient. • Possibility of moving to a Linux platform later, although starting initial development on Windows; this would allow post-processing to be carried out on a future Linux analysis server to streamline the work-flow and allow access to more powerful hardware. A re-write from scratch would clearly be a major undertaking and was considered with some trepidation; refactoring the existing code would have been a more palatable first step. However further investigation convinced us that this would not progress a significant distance towards the above goals as the Excel/VBA platform was simply too limiting. 3 Overall Architecture An initial feasibility study led to an architecture with three distinct parts: 1. A central C++ core, aftershock, which handles the binary I/O and contains an embedded Python 2.7 interpreter. 2. A set of Python "calculation scripts" which define the actual post-processing calculations to be carried out. 3. A purpose-made Python plotting package afterplot which is based on matplotlib [Hun07]. As the entire binary dataset is too large to fit in memory at once the aftershock core operates frame-by-frame, stepping time-wise through the data. At each frame it decodes the raw binary data and calls defined functions from the calculation scripts which have been loaded.
These scripts access the data for the frame through a simple API provided by aftershock which returns lists of floats. The actual post-processing calculations defined by the scripts generally make heavy use of the ndarrays provided by numpy [Wal11] to carry out efficient element-wise operations. As well as decoding the binary data and maintaining the necessary state for the scripts from frame to frame, the aftershock core also optimises the order in which the results files are processed to minimise the number of passes required. The split between aftershock and the set of calculation scripts results in an architecture which:

a. Has sufficient performance to handle large amounts of binary data.
b. Has a core which can be reused across all models and analyses.
c. Provides the required high-level language for "users", i.e. the seismic engineers defining the calculations.
d. Hides the complex binary file-format entirely from the users.
e. Enforces modularity, separating the post-processing into individual scripts which cannot impact each other.

With Python selected as the calculation scripting language, a number of plotting packages immediately became options. However, matplotlib [Hun07] stood out for its wide use, "publication quality figures" [Hun07] and the sheer variety and flexibility of plotting capabilities it provides. Development of the post-processing toolset could have ended at this point, leaving the script engineers to use matplotlib directly. However, matplotlib's versatility comes at a price in complexity, and its API is not particularly intuitive; requiring seismic engineers to learn the details of this did not seem to represent good value for the client. It was therefore decided to wrap matplotlib in a package, afterplot, to provide a custom set of very focussed plot formats.

4 Plotting Architecture

afterplot provides plotting functionality via a set of plotter classes, with the user (i.e.
the engineer writing a calculation script) creating an instance of the appropriate class to generate a plot. All plot classes inherit from a BasePlot class. This base class is essentially a wrapper for a matplotlib Figure object, which represents a single plotting window, plus the Axes objects which represent the plots or sub-plots it contains. At present afterplot provides only four types of plotter, although these are expected to be sufficient for most current requirements:

1. LayerPlot (Figure 1): represents values on a horizontal slice through the model using a contour-type plot with discrete markers.
2. ChannelPlot (Figure 2): represents the 3D geometry of a vertical column in the model by projection onto the X-Z and Y-Z planes.
3. TimePlot (Figure 3): a conventional X-Y plot, representing time-histories as individual series with time on the X-axis.
4. WaterfallPlot (Figure 4): provides an overview of the distribution of the plotted parameter at each time-step during a simulation.

Inherently, all post-processed results are associated with a three-dimensional position within the model and a time within the simulation. Some parameters or outputs may collapse one or more of these dimensions; for example, when plotting a plan view of peak values through time, maximums are taken over the vertical and time axes, creating a set of results with two dimensions. All plotter classes therefore accept numpy arrays with up to four dimensions (or axes in numpy terminology). The meanings and order of these dimensions are standardised as three spatial dimensions followed by time, i.e. \((x, y, z, t)\), so that different "views" of the same data can easily be generated by passing an array to different plotters.

5 Quality Advantages

A key advantage of providing a custom plotting package is that best-practice can be enforced on the generated plots, such as the provision of titles or use of grid-lines.
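As a small illustration of the standardised \((x, y, z, t)\) convention described above, different "views" of one results array are simply reductions over different axes (the array shape here is invented for the example):

```python
import numpy as np

# Hypothetical post-processed results: a 10 x 10 grid of bricks, 5 layers,
# 200 time frames, in the standardised (x, y, z, t) axis order.
results = np.random.default_rng(1).random((10, 10, 5, 200))

# Plan view of peak values through time: collapse the vertical (z, axis 2)
# and time (t, axis 3) axes, leaving a 2-D (x, y) array for a LayerPlot-style view.
plan_peaks = results.max(axis=(2, 3))
assert plan_peaks.shape == (10, 10)

# Time-history for one brick: fix the three spatial axes, keep t,
# giving a 1-D series suitable for a TimePlot-style view.
history = results[3, 4, 0, :]
assert history.shape == (200,)
```

The same `results` array can thus be passed to different plotters without any per-plotter data preparation.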
Another example is that afterplot provides a custom diverging colourmap as the default, based on the comprehensive discussion and methods presented in [Mor09]. This should be significantly easier to interpret than the default colourmap provided by matplotlib in most cases. The plotter classes also allow alteration of presentation, e.g. axis limits, while preventing modification of data. Alteration of presentation is provided for by instance methods or GUI controls defined by the plotter classes. Modification of data is prevented simply by the lack of any interface to do this once the relevant array has been passed to the plot instance. This immutability is not intended as a security feature, but simplifies quality assurance by limiting where errors can be introduced when altering presentation. A further quality assurance feature is the capture of traceability data. When a new plot is generated, the BasePlot class traverses the stack frames using the standard library's inspect module to gather information about the paths and versions of the calculation scripts and other Python modules used. This data is attached to the plots to assist in reproducing published plots or debugging issues. Because introspection is used to capture this data, the feature requires no action by the script author.

6 INTERACTIVE GUI

Providing a simple GUI was considered desirable to bridge the gap for users from the previous Excel-based toolset. The matplotlib documentation describes two methods of providing a GUI:

1. Using the cross-backend widgets provided in matplotlib.widgets, which are fairly limited.
2. Embedding the matplotlib FigureCanvas object directly into the window provided by a specific GUI toolset such as Tk.

An alternative approach is used by afterplot which is simpler than the second approach but allows the use of the richer widgets provided by specific GUI toolsets.
This approach uses the pyplot.figure() function to handle all of the initial set-up of the GUI, with additional widgets then inserted using the GUI toolset's manager. This is demonstrated below by adding a Tk button to a Figure object using the TkAgg backend:

```python
import Tkinter as Tk
import matplotlib
matplotlib.use('TkAgg')
from matplotlib import pyplot

class Plotter(object):
    def __init__(self):
        self.figure = pyplot.figure()
        window = self.figure.canvas.manager.window
        btn_next = Tk.Button(master=window, text='next',
                             command=self._next)
        btn_next.pack(side=Tk.LEFT)
        self.figure.show()
```

![](image1.png)
**Fig. 1:** Example LayerPlot output

![](image2.png)
**Fig. 2:** Example ChannelPlot with GUI

**Fig. 3:** Example TimePlot output

**Fig. 4:** Example WaterfallPlot output

7 Store and Restore

Functionality to save plots to disk as images is provided by matplotlib via Figure.savefig(), which can generate a variety of formats. When development of afterplot began, a matplotlib Figure object could not be pickled, and therefore there was no native way to regenerate it for interactive use except re-running the script which created it. Despite the improved performance provided by aftershock, this is clearly time-consuming when only minor presentation changes are required, such as altering the limits on an axis. A means to enable an entire plotter instance, including its GUI, to be stored to disk and later restored to a new, fully interactive GUI was therefore strongly desirable. While the ability to pickle Figure objects has since been added to matplotlib, this would not support the custom GUIs which afterplot provides. However, by following the same approach that the pickle module uses internally to handle class instances, the desired store/restore functionality could be added relatively simply.

Storing:

1.
When a plot instance is created, the __new__ method of the BasePlot superclass binds the supplied *args and **kwargs to attributes on the plotter instance; these will include one or more ndarrays containing the actual data to be plotted.
2. To store the instance, a type object is first obtained; then this and the *args and **kwargs are pickled.

Simplified code for the BasePlot class, implementing storing to a given path:

```python
import pickle

class BasePlot(object):
    def __new__(cls, *args, **kwargs):
        obj = object.__new__(cls)
        obj._args, obj._kwargs = args, kwargs
        return obj

    def store(self, path):
        data = (type(self), self._args, self._kwargs)
        with open(path, 'wb') as pkl:
            pickle.dump(data, pkl)

    def show(self):
        # .. gui code here ..
        pass
```

Restoring:

1. The type object, args and kwargs are unpickled from the file.
2. The type object is called to create a new instance, passing it the unpickled args and kwargs.

Simplified restoring code, taking a path to a stored file and regenerating the plot complete with interactive GUI:

```python
def restore(path):
    with open(path, 'rb') as pkl:
        t_plt, args, kwargs = pickle.load(pkl)
    restored_plotter = t_plt(*args, **kwargs)
    return restored_plotter
```

Note that classes can define __getstate__ and __setstate__ methods to control how they are pickled and un-pickled, and the approach described above could have been implemented in these methods. Explicitly-named methods and functions were used primarily to make the approach transparent to future developers. This approach has a number of benefits:

1. Neither the storing nor the restoring code needs to know anything about the actual plot class, except that it has a show() method; hence any plotter derived from BasePlot inherits this functionality.
2. The only interface which storing and restoring needs to address is the plotter class's signature.
This is simple and robust, as code can always be added to a class's __init__ method to handle changes in the signature, such as deprecated parameters, meaning that it should essentially always be possible to make stored plots forward-compatible with later versions of afterplot. By contrast, when a class instance is unpickled normally, pickle directly sets the instance's dictionary (the __dict__ attribute) from the pickled data, meaning that changes to an instance's internal attributes can break unpickling.
3. Restoring the interactive GUI requires no additional code beyond what is needed to create the GUI when the plotter instance is first created.
4. If a stored plot is restored with a later version of afterplot, any enhanced GUI functionality will automatically be available.

For convenience, a simple cmd script and a short Python function also allow stored plots to be restored on users' local Windows PCs, with the GUI displayed by simply double-clicking the file. Alternatively, a simple script can be written to batch-process presentational changes, such as colour bars or line thicknesses, for a series of plots. Such a script uses the provided restore() function to restore the desired plots without showing the GUI, then uses the methods the plotter classes provide to alter the desired presentation aspects.

One complication omitted from the simplified code above is that, ideally, storing and restoring should be insensitive to whether parameters have been specified as positional or named arguments. The __new__() method of the BasePlot superclass therefore uses inspect.getargspec() to convert all arguments to a dictionary of name:value pairs. Class instances are then actually stored/restored as if all parameters were provided as keyword arguments. While this approach essentially mirrors how pickle handles class instances, implementing such robust functionality in so little code is an impressive demonstration of Python's benefits.
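The argument-normalisation step just described can be sketched with the modern inspect API. The paper used inspect.getargspec() under Python 2.7; inspect.signature() is its Python 3 successor, and the toy classes below are our illustration, not the real BasePlot:

```python
import inspect

class BasePlot(object):
    def __new__(cls, *args, **kwargs):
        obj = object.__new__(cls)
        # Bind positional and keyword arguments against __init__'s signature
        # so the instance is stored as if everything were passed by name.
        bound = inspect.signature(cls.__init__).bind(obj, *args, **kwargs)
        bound.apply_defaults()
        obj._kwargs = {k: v for k, v in bound.arguments.items() if k != 'self'}
        return obj

class TimePlot(BasePlot):
    def __init__(self, data, title='untitled'):
        self.data, self.title = data, title

# Positional and keyword calls now store identical state:
assert TimePlot([1, 2], 'demo')._kwargs == TimePlot(title='demo', data=[1, 2])._kwargs
```

With arguments normalised this way, a plot stored from a positional call restores identically to one stored from a keyword call.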
8 Outcomes and Lessons Learnt

The overall architecture has been a success:

- Performance is significantly improved.
- Post-processing can easily be integrated with analysis runs if required.
- Maintainability and extensibility of the calculations has been vastly improved.
- Python and numpy form a vastly more usable and concise high-level language for describing calculations than VBA, allowing engineers to concentrate on the logic rather than working around the language.
- The aftershock core is reusable across different models, which will save considerable effort for future models.
- Cross-platform portability to Windows and Linux was achieved without any significant effort for the calculation scripts and plotting module, making a decision to transition part-way through the project to new Linux hardware relatively straightforward.

However, there were a number of challenges, some of which were expected at the outset and some of which were not.

Education and training: As discussed, a key driver for the architecture was that the calculation scripts would be written by seismic engineers, as they were the domain experts. Some of these engineers were already familiar with Python, often from scripting environments provided by commercial analysis software, or with other high-level scripting languages such as VBA. In general users found it relatively simple to pick up and start developing procedural and simple object-orientated Python, but the heavy use of numpy for element-wise operations then required users to learn a third programming paradigm. While the basic concepts were easily understood, deciding when explicit loops or element-wise operations are more appropriate requires considerably more experience. Most engineers had not written code where performance was a concern, and hence basic optimisation techniques, such as moving constant expressions outside of loops, were not necessarily considered obvious.
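The loop-versus-element-wise distinction mentioned above can be made concrete with a toy example (the data and function names are ours, not from the toolset); both functions compute the same peak, but the numpy version runs the whole reduction in compiled code:

```python
import numpy as np

displacements = np.random.default_rng(0).normal(size=100_000)

# Explicit loop: the style an engineer coming from VBA might write first.
def peak_abs_loop(values):
    peak = 0.0
    for v in values:
        if abs(v) > peak:
            peak = abs(v)
    return peak

# Element-wise numpy: no Python-level loop at all.
def peak_abs_vectorised(values):
    return np.abs(values).max()

assert np.isclose(peak_abs_loop(displacements),
                  peak_abs_vectorised(displacements))
```

On arrays of this size the vectorised form is typically orders of magnitude faster, but knowing when a problem genuinely vectorises (and when an explicit loop is clearer or unavoidable) is the experience the engineers had to build.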
Inconsistencies in the API of the scientific Python stack also led to some subtle performance and functionality issues; for example, the three pairings below all have different answers as to which option is "best":

- abs() vs. numpy.abs()
- math.exp() vs. numpy.exp()
- math.pi vs. scipy.pi vs. numpy.pi

Development practicalities: Some significant difficulties were encountered in compiling aftershock on both Windows and Linux due to the embedded Python 2.7 interpreter, but these issues are outside the scope of this paper.

Plotting functionality: The success of the afterplot plotting module is less clear at present. It has provided the desired plotting flexibility, as demonstrated by the LayerPlot and WaterfallPlot plot types, which could not easily have been replicated using Excel's plotting facilities. The control of style it enforces also appears to be strongly desirable in terms of reducing the effort required to obtain publication-quality plots. However, verification of the relatively complex GUI code has proved to be difficult. "Verification" in this sense does not refer to a formal proof of correctness, but to a level of independent checking consistent with that applied to the actual post-processing calculations. Part of the difficulty was due to the limited internal availability of developers familiar with the GUI toolset. Another aspect was the decision to provide a small number of relatively general-purpose plot classes. This made it necessary for the plot classes to accept data in different dimensions and with a variety of options, complicating the internal logic, which often involves complex array striding and reshaping. It may have been simpler overall to provide a larger number of less flexible plotters with simpler interfaces and fewer internal code paths.
Testing plotting code is not straightforward, but matplotlib's own test suite has provided some useful techniques to automatically check images produced by test cases against known-good results. Overall, the decision to use the Python scientific software stack for this toolset has been strongly positive. Encouragingly, it also appears that future developments are likely to provide features like sparse arrays and lazy evaluation, which would permit the calculation scripts to be simpler and more efficient. Similarly, rationalisation of the matplotlib API is expected in future, which will simplify the creation of high-quality plots from Python.

REFERENCES
Appendix T

This appendix contains additional content covered in Tutorials. You are still responsible for this content, even if it is not presented in the lectures. Slides that begin with a star (*) are (mostly) review slides that duplicate content in the lecture notes.

Tutorial 0: Overview of CS 136

Course Personnel (more details on the course website)
- Instructors: Tim Brecht and Dan Holtby
- Course Coordinator: Travis Bartlett
- (Full-Time) Instructional Support Assistants (ISAs): Jamie Demorim, Nolan Smith and Frank Wang
- Instructional Apprentices (IAs): Rina Wehbe and Kaleb Alway

*Getting help*
- office hours (see website)
- lab hours (Wednesdays, see course website / Piazza for hours and rooms)
- tutorials (Mondays)
- textbook
- piazza

Course announcements are made on piazza and are considered mandatory reading.

CS 136 Student Request System [DEMO]

This is a new system for you to submit requests: www.student.cs.uwaterloo.ca/~cs136/cgi-bin/requests/
- Remark requests
- Clicker registration or marking errors
- General requests

Do NOT email your requests or post them on piazza (also, never try to submit your assignment via email). You may contact the course coordinator (illness, accessibility services, confidential feedback) over email: trbartlett@uwaterloo.ca

Lab Hours

A computer lab is booked exclusively for CS 136 use:
- See Course Website for lab hours and rooms

Course Staff will be present in the lab.
Office Hours

Office Hours will be updated on the course website: www.student.cs.uwaterloo.ca/~cs136/officeHours
Many will be held in the CS Learning Centre (MC 4065)

Course Website

Course website is: www.student.cs.uwaterloo.ca/~cs136/
- Assignments
- Marks
- Office hours
- Links to Piazza, Markus, Marmoset, Request System
- Exam Seating

*Piazza etiquette*
- **read** the *official assignment post* before asking a question
- **search** to see if your question has already been asked
- **use** meaningful titles (e.g., "A0q7 where is the secret code?")
- **ask** *clarification questions* for assignments (do not ask *leading questions* for **GOLD** questions)
- **do not** discuss strategies for **GOLD** questions
- **do not** post any of your assignment code *publicly*
- you can post your **commented** code *privately*, and the staff *may* provide some assistance.

Seashell [DEMO]

http://www.student.cs.uwaterloo.ca/seashell

Marmoset [DEMO]
- **Public** (*simple / basic*) tests **vs. Private** (*full / comprehensive*) tests
- A0q4 is *special*: you can see your secret tests immediately
- During peak times, there are often server delays

*Assignments*

Assignment *questions* are colour-coded as either "black" or "gold" to indicate if any collaboration is permitted.

For **BLACK** questions, **moderate collaboration** is permitted:
- You can share and discuss your code with *individuals*, but **not** online or electronically (piazza, facebook, github, email, IM, USB stick, etc.)
- You can discuss assignment *strategies* openly (including online)
- You can search the Internet for strategies or code examples

If you submit any work that is not your own, you must still cite the origin of the work in your source code.

For **GOLD** questions, **no collaboration** is permitted:
- Never share or discuss your code
- Do not discuss assignment *strategies* with fellow students
- Do not search the Internet for strategies or code examples

You may discuss your code with course staff.
**Academic integrity will be strictly enforced for gold questions.**

Plagiarism
- We have an automatic detection system designed to detect "similar" code
- Don't share code or test cases
- Keep your eyes on your own code
- Don't collaborate with other students outside of the bounds outlined in the assignment

Plagiarism - Penalty
- Marked 0 on the assignment
- -5% to your overall grade in the course
- Repeat offenders may face much harsher consequences

All cases are reviewed by course staff before any penalties are applied.

Reminders
- **A0 must be completed** before you can receive any other assignment marks.
- Register your clickers
- Get the course notes (online or through media.doc)
- Read the top threads in Piazza

Appendix A

Appendix A contains additional content, examples and language syntax details that may not be covered in the lectures. Some of this content will be covered in tutorials (time permitting). You are still responsible for this appendix, even if it is not presented in the lectures.
- Appendix A.1 - Review of CS 135

Tutorial 1
• Quick review of C Syntax
• Translating Racket to C
• Implementing a ceiling function

```c
#include <stdio.h>

// expt(x,y) calculates x to the power of y
// requires: y >= 0
int expt(int x, int y) {
  if (y) {
    return x * expt(x, y - 1);
  } else {
    return 1;
  }
}

int main(void) {
  printf("expt(4,3) = %d\n", expt(4, 3));
}
```

The previous program illustrates many C syntax elements.
- `#include <stdio.h>`
- `purpose` statement, `requires` in contract
- use of `{}`, `()`, indentation, `;`
- static `types`: `int x`
- any non-zero value is "true"
- `if`, `return`
- `int main(void)`
- `printf` syntax with "%d"

Racket Conversion

Convert the following functions into C:

```scheme
(define (checkprime n k)
  (cond [(= k 1) 1]
        [(zero?
(remainder n k)) 0]
        [else (checkprime n (sub1 k))]))

(define (isprime n)
  (checkprime n (sub1 n)))

(define (nextprime n)
  (cond [(= 1 (isprime (add1 n))) (add1 n)]
        [else (nextprime (add1 n))]))
```

**Integer Division**

**Implement** the following **interface** in C:

```c
// ceiling(a,b) produces the value of a/b, rounded up to
// the next largest integer
// requires: a >= 0, b > 0
int ceiling(int a, int b);
```

Using the readnat module

`stdio.h` includes input functions as well as output functions, but they're a bit advanced, so we have created our own simple module `readnat.h` for reading natural numbers.

```c
// TASK: Write a program that reads in an initial natural number n
// followed by an arbitrary number of natural numbers, and determines
// if there were more numbers >= n than < n
// it prints out one of 3 possibilities:
// -1 if there is no input (cannot read in n)
//  1 if there were more numbers >= n than < n
//  0 otherwise
```

Use *accumulative* recursion.

*Pro Tip*: Design your tests **before** you code!

Tutorial 2
- More on working with seashell
- Working with the `io_test` module
- More C Syntax Examples
- Working with C Structures and Modules
- Accumulative Recursion & `doubles`

[HOW TO RESET SEASHELL]

[SETTING THE RUN FILE]
• Identify the file with the main you wish to run (there can be multiple clients)

[RUN]
• Assertion test clients
• Interactive I/O programs (input from keyboard, output to console)

[TEST]
• Automated I/O testing (input from .in file(s), compare to .expect)

The I/O Test module

To test C modules interactively you need an interactive test client. This is complicated, so we've provided io_test.h to make it easier.
Let's look at an example from last week:

```c
int main(void) {
  add_int_test("isprime", isprime, 1);
  add_int_test("nextprime", nextprime, 1);
  add_stop_test("quit");
  io_test();
}
```

This program creates an interactive program that accepts 3 commands ("isprime", "nextprime", and "quit") and then reads commands until it reaches EOF or the "quit" command.

More C syntax examples

```c
struct point {
  int x;
  int y;
};

static const int my_private_int = 5;

// requires: x > 0
static int my_private_function(int x) {
  assert(x > 0);
  // ...
}
```

```c
// rectangle.h
// A module for working with rectangles in a Cartesian plane
// where all (x,y) points are on an integer grid

#include <stdbool.h>

struct point {
  int x;
  int y;
};

// A structure that uses other structures
struct rectangle {
  struct point top_left;
  struct point bottom_right;
};

// rectangle_perimeter(r) determines the perimeter of rectangle r
int rectangle_perimeter(struct rectangle r);
```

Accumulative recursion with double

The chemical trumponium is very radioactive and toxic. It has a half-life of one year. Each year its toxic radiation levels are decreased by 50%. Write a function to determine how many years are required for levels of trumponium to be reduced to some safe threshold.

!!!!! WARNING !!!!!
• THINGS ARE BEING MODIFIED AFTER THIS.

Tutorial 3
- Working with seashell
- More C Syntax Examples
- Working with C Structures
- Accumulative Recursion & doubles
- Global variables & mutation

[HOW TO RESET SEASHELL]

[SETTING THE RUN FILE]
- Identify the file with the main you wish to run (there can be multiple clients)

[RUN]
- Assertion test clients
- Interactive I/O programs (input from keyboard, output to console)

[TEST]
- Automated I/O testing (input from .in file(s), compare to .expect)

More C syntax examples

```c
struct point {
  int x;
  int y;
};

static const int my_private_int = 5;

// requires: x > 0
static int my_private_function(int x) {
  assert(x > 0);
  // ...
}
```

Side Effects: printing vs.
returning

```c
int pure_functional(int n) {
  return n * n;
}

// effects: displays a message
void just_a_side_effect(int n) {
  printf("n squared is %d\n", n * n);
  return; // (optional)
}

// effects: displays a message
int has_both_side_effect_and_return_value(int n) {
  printf("n squared is %d\n", n * n);
  return n * n;
}
```

Reviewing struct syntax in C

Implement a `rectangle` module using the following structures:

```c
struct point {
  int x;
  int y;
};

struct rectangle {
  struct point top_left;
  struct point bottom_right;
};
```

Accumulative recursion with double

The chemical trumponium is very radioactive and toxic. It has a half-life of one year. Each year its toxic radiation levels are decreased by 50%. Write a function to determine how many years are required for levels of trumponium to be reduced to some safe threshold.

Global variables & mutation

Implement a simple "guess my number" module that also keeps track of how many guesses you have taken.

Tutorial 4
- C memory model
- The sizeof operator
- Another struct example
- int limits & overflow

C memory model
- Every variable declaration reserves space in memory to store the variable
- The variable's name refers to the corresponding address in memory

```c
// What would this program look like in memory?
int main(void) {
  int x = 42; // 4 bytes
  double y;   // 8 bytes
}
```

The `sizeof` operator

Each variable we declare takes up space in memory. The `sizeof` operator produces the number of bytes required to store a type.

```c
printf("%zd\n", sizeof(int)); // prints 4 in Seashell
```

In Seashell, `sizeof(int)` is always 4. However, you should always use `sizeof` to find the size of a type. Why? There is an example in Seashell that shows one reason why.

More Fun with Structures
- Complete the implementation of a simple "stopwatch" module.
- Create an assertion-based test client to test your module.
- Create an I/O test client that displays the following message, using the timer functions:

```
The time is [1:02:03], which is a new world record!
```

**int limits**

In Seashell, each `int` is stored in 4 bytes (32 bits). All `int`s must have a value in the (inclusive) range:

| Constant | Power of two | Value |
|----------|--------------|-------|
| `INT_MIN` | $-2^{31}$ | $-2,147,483,648$ |
| ... | | |
| `INT_MAX` | $2^{31} - 1$ | $2,147,483,647$ |

It is **impossible** for C to represent a value outside of that range in an `int`.

Overflow

Overflow occurs when a calculated value would normally lie outside of the range of an `int`. C cannot store values outside of the `int` range, so the result is an "arbitrary" value inside of the range. Assuming `a` and `b` are positive, overflow will occur when:

\[ a + b > \text{INT\_MAX} \]

or if they are both negative, then overflow will occur when:

\[ a + b < \text{INT\_MIN} \]

Write a function to detect if overflow will occur in C if both `a` and `b` are positive, without calculating `(a + b)`.

Tutorial 5
- Memory model & control flow
- Loops...

```c
while (!understand(loop_knowledge)) {
  ++loop_knowledge;
}
```

```c
int g = 0;

int foo(int a, int b) {
  ++a;
  g += a;
  int c = a + b;
  c *= g;
  return c;
}

int bar(int d) {
  d = foo(d, 1);
  d = foo(d, 2);
  return d;
}

int main(void) {
  const int e = bar(0);
  printf("e = %d\n", e);
}
```

while

```c
Statement A;
while (exp) {
  Code Block;
}
Statement Z;
```

*for vs. while*

Recall the `for` syntax.
```c
for (setup; expression; update) {
  body statement(s)
}
```

This `while` example

```c
i = 100;           // setup
while (i >= 0) {   // expression
  printf("%d\n", i);
  --i;             // update
}
```

is equivalent to

```c
for (i = 100; i >= 0; --i) {
  printf("%d\n", i);
}
```

```c
Statement A;
for (setup; exp; update) {
  Code Block;
}
Statement Z;
```

Fun with loops

Write a program that reads in an arbitrary number of natural numbers and then prints the range of the numbers ($\max - \min$). If there is no input, the range is 0.

More fun with loops

Caesar Cypher

Write a program that first reads in natural number $N$ from input. Then it reads in characters, offsetting each character by $+N$, and then prints them out. For example, if $N$ is 1, then 'A' would be printed as 'B'. It "wraps around" within the printable characters, so if $N$ is 1, then '~' would be printed as space ' '. Newlines are printed normally.

Draw a circle with the given `radius` inside of a square box.

```c
void circle(int radius, char border, char draw);
```

Loop from `-radius` to `+radius` horizontally and vertically and print out `draw` if the location is inside of the circle, or space otherwise.

```c
circle(4, '*', '#');
```

```
************
* # *
* ##### *
* ####### *
* ####### *
* ####### *
* ####### *
* ####### *
* ####### *
* # *
************
```

Tutorial 6
- Pointers
- Interactive Programs

Pointers

A **pointer** is a variable that stores a **memory address**. In Seashell, memory addresses are 8 bytes (64 bits).
So far, their usefulness is limited to:

- passing a pointer to a variable to a function so the function can modify the variable
- passing a pointer to a **struct** to avoid a "deep copy" onto the stack
- function pointers

Pointer syntax examples

Pointer definition:

```c
int *ptr;
```

Storing an address with the address operator (&):

```c
ptr = &i;
```

The dereference (indirection) operator (*):

```c
i = *ptr;
*ptr = 42;
```

Pointer parameters

```c
struct point {
  int x;
  int y;
};

struct rectangle {
  struct point top_left;
  struct point bottom_right;
};

// use const to indicate that a parameter is not modified
// r is a pointer to a struct, so use the -> operator
// r->top_left is a struct (NOT a pointer), so use the . operator
bool valid_rectangle(const struct rectangle *r) {
  return r->top_left.x < r->bottom_right.x &&
         r->top_left.y > r->bottom_right.y;
}
```

function pointer (seashell)

Complete the eval_range module function that evaluates a function over a given range.

Interactive Programs

You are provided with a module stack.h. Your task is to write an interactive program to test this module. You should support these commands:

- `+ ARG` should push the integer ARG onto the stack.
- `-` should pop a value off the top of the stack.
- `t` should print the value on top of the stack.
- `s` should print the size of the stack.
- `q` should quit the program.

Any invalid command should cause the program to print "INVALID\n" and quit. Make sure to write some tests in Seashell to make sure the stack module is thoroughly tested.

Tutorial 7 • scanf • Arrays • General Q & A

scanf

Always check the return value of `scanf` to ensure it is "successful" (returns 1).
Three basic forms of using `scanf` (so far):

```c
// read in an integer (ignore leading whitespace)
retval = scanf("%d", &i);

// read in a character (ignore leading whitespace)
retval = scanf(" %c", &c);

// read in a character (including whitespace)
retval = scanf("%c", &c);
```

HEX to Decimal (seashell)

Write a program that continuously reads in HEX values one character at a time, and then prints out their equivalent decimal value.

Arrays

- Array definition & initialization syntax

```c
int my_array[4] = {1337, 4010, 420, 8675309};
```

- Passing arrays to functions & use of `const`

```c
int array_function(const int a[], int len) {
```

- Pointer notation & Array organization in memory

```c
a[i] <=> *(a + i)
```

- Fixed-Length Arrays

```c
int my_array[100] = {0};
const int maxlen = 100;
int curlen = 0;
```

Array Min & Max (seashell)

Complete the implementation for the minmax module.

Tutorial 8 - Strings

- arrays of `char`
- null terminated `\0`
- `printf` & `scanf` with "%s"
- string literals

Shrnk (seashell)

Write a function that "shrinks" a string by removing all vowels.

Monster Hunt (seashell)

Implement the functions for the "Monster Hunt" game.

Tutorial 9

- Efficiency
  - Big O notation: use $n$ to represent "input" size
  - loops & iteration ($\Sigma$)
  - recurrence relations
  - don't forget time of built-in functions
- Intro to Dynamic Memory
  - every `malloc` needs a `free`
  - memory can persist beyond function call

Repeat (seashell)

```c
// repeat(s) generates a new string that repeats s
// for example, "hello" produces "hellohello"
// effects: allocates memory (caller must free)
// time: O(n)
char *repeat(const char *s);
```

Remove Dups 1 (seashell)

```c
// remove_dups(a, len) generates a new array that
// has all of the elements of a (in any order)
// without any duplicates
// effects: allocates memory (caller must free)
// time: O(?)
int *remove_dups(const int a[], int len, int *newlen);
```

1. count the number of unique values
2. generate the new array
3. fill in the new array

Analyze the time efficiency.

Remove Dups 2 (seashell)

Implement `remove_dups` by sorting the elements first (using the provided `merge_sort`). Re-analyze the time efficiency.

Join (seashell)

```c
// join(words, num_words, sep) generates a new string
// that contains every string in words,
// separated by sep
// effects: allocates memory (caller must free)
// time: O(???)
char *join(const char *words[], int num_words, char sep);
```

Tutorial 10

- More Dynamic Memory
  - realloc
  - "doubling" strategy for arrays
- Abstract Data Types (ADTs)
- Intro to Linked Lists

```c
// REVIEW: "safe" readstr() with doubling
char *readstr(void) {
  char c;
  if (scanf(" %c", &c) != 1) return NULL;  // ignore initial WS
  int maxlen = 1;
  int len = 1;
  char *str = malloc(maxlen * sizeof(char));
  str[0] = c;
  while (1) {
    if (scanf("%c", &c) != 1) break;
    if (c == ' ' || c == '\n') break;  // stop at whitespace
    if (len == maxlen) {
      maxlen *= 2;                     // "doubling" strategy
      str = realloc(str, maxlen * sizeof(char));
    }
    ++len;
    str[len - 1] = c;
  }
  str = realloc(str, (len + 1) * sizeof(char));
  str[len] = '\0';
  return str;
}
```

A freq-y ADT

In this tutorial, we are going to introduce a new freq ADT that keeps track of how frequently a number has occurred. For example, in the following sequence:

5, 5, 1, 2, 3, -1, 5, 10, -1, 5, 10000, 5

- -1 has occurred 2 times
- 3 has occurred 1 time
- 5 has occurred 5 times
- etc...

freq has two core operations:

```c
// freq_increment(f, i) increments count of i by one in f
// effect: f is modified
// time: O(n) if this is the first occurrence of i
//       O(log n) otherwise
void freq_increment(struct freq *f, int i);

// freq_get(f, i) determines the count of i in f
// time: O(log n)
int freq_get(const struct freq *f, int i);
```

How should we implement freq?
(discussion)

We are going to use a *sorted* dynamic array:

5, 5, 1, 2, 3, -1, 5, 10, -1, 5, 10000, 5

<table>
<thead>
<tr>
<th>values</th>
<th>-1</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>5</th>
<th>10</th>
<th>10000</th>
</tr>
</thead>
<tbody>
<tr>
<td>counts</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>5</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>

Implement the two operations:

```c
// freq_increment(f, i) increments count of i by one in f
// effect: f is modified
// time: O(n) if this is the first occurrence of i
//       O(log n) otherwise
void freq_increment(struct freq *f, int i);

// freq_get(f, i) determines the count of i in f
// time: O(log n)
int freq_get(const struct freq *f, int i);
```

Reverse (seashell)

Implement a `reverse` function for a linked list

- iteratively
- recursively

Note: you do not need to `malloc` any new nodes

Tutorial 11 - More Linked Lists & Trees

A Doubly Linked List (Seashell)

A doubly linked list has been partially implemented.

- write `dll_print`
- write `dll_reverse`

Printing a BST (Seashell)

Print the nodes of a BST in order along with their "index".
Secure Coding. Practical steps to defend your web apps.

Copyright SANS Institute. Author Retains Full Rights. This paper is from the SANS Software Security site. Reposting is not permitted without express written permission.

PROTECTING USERS: THE IMPORTANCE OF DEFENDING PUBLIC SITES

GIAC (GWAPT) Gold Certification

Author: Kristen Sullivan, Kristen.Sullivan@ky.gov
Advisor: Rodney Caudle
Accepted: June 21, 2010

Abstract

One of the biggest struggles in the field of application security is trying to convince customers and clients to pay attention to the security of public facing sites that do not transmit sensitive data. In the current state of "Cyberland", we have improved at protecting our infrastructure and creating robust firewalls. We have even started to improve the protection of sites transmitting sensitive data online with the implementation of solid input validation and other related methods. However, it is becoming more and more evident that our weaknesses are our public facing sites, which transmit no sensitive data and could serve as gateways to sensitive information and networks. Government agencies are tasked with providing services to constituents. To fulfill their mission these agencies must maintain the trust of the general public. Cross-Site Scripting is one of the most prevalent and dangerous vulnerabilities in web applications, especially government applications that hold the information and data of each and every citizen. Wisely investing security dollars on applications not transmitting sensitive data will help resolve these weak points in the enterprise.

1. Introduction

In the application security industry, one of the hardest elements to communicate to customers is the need for building secure web applications even if those applications transmit minimally sensitive data.
The purpose of this document is to provide a valid case for why all applications should follow a minimum standard for secure coding practices. Many assume the only applications requiring protection are those which store sensitive or confidential data, but that is a grievous misjudgment. Additionally, with tight budgets and limited security resources, it is hard to justify reasons for securing public facing sites only offering open record information. The main cause of this is a lack of understanding of the associated risk.

The approach to this paper will be from a governmental perspective. State and local governments all over the country are suffering from financial difficulties due to the economic crisis. News articles about government furloughs, layoffs, cutbacks in services, etc. are popping up everywhere. It is important to note that when the economy starts to recover, government lags behind by at least a couple of years, so these hardships are far from over. When the search phrase "city budget cuts" is entered into a Google search, over 8 million records can be found; likewise, if "state budget cuts" is entered, over 11 million records pop up. Put simply, cuts in local and state government security budgets create new challenges in justifying the need to implement security in all of the applications the governmental bodies have to maintain. To make matters worse, awareness and training dollars for developers learning to write secure code are even less likely to be obtained.

With such tight budget constraints, government organizations are making a concerted effort to offer more and more services online, since easy-to-use, self-service applications are often money-saving solutions in the long term. The new services make positive headlines and empower citizens to obtain services on demand.
The latest buzz is often known as "Open Government" or "Government Transparency." For example, Government Computer News (GCN) released an issue in July 2010 with the following cover story: "Great Gov Web Apps, 10 Examples of Innovative, Practical Web-Based Tools That Deliver Effective Services, Collaborative Environments and Data Transparency" (Government Computer News, 2010). Unfortunately, the news that security typically generates is only negative media. There are no news articles that talk about how robust defenses saved the identities of the taxpayers. Consequently, in hard times, it's much more difficult to sell preventative measures to executives. However, the need is as important as ever, perhaps even more so. Since governments serve as data custodians and store tremendously large amounts of personal and corporate information, the necessity to protect the constituents is paramount. The consequences of government applications being the root cause of a mass number of constituents being attacked by cyber criminals are serious. Government entities are being held accountable for breaches like never before. According to a survey from the Pew Research Center, 80 percent of Americans say they have little trust in government (Pew Research Center for the People & the Press, 2010). This anti-government feeling has fueled the creation of the Tea Party movement, which is supported by 19 percent of Americans according to a recent poll from CBS News (Condon, 2010). Take for instance when WikiLeaks broke the story of the Virginia Prescription Monitoring Program breach. Hackers deleted more than 8 million records and defaced the site with a note stating that for $10 million, they would return the records (Harlow, 2009). It was a month before any of the victims were notified (Harlow, 2009). Incidents like this fueled tighter legislation such as HIPAA and the FTC Red Flags Rule. To this end, the U.S.
Department of Health and Human Services has a list of breaches affecting the private health information of 500 or more individuals posted on their website (U.S. Department of Health and Human Services, n.d.). Many of those listed are government agencies or government funded agencies. This list is a result of section 13402(e)(4) of the HITECH Act (U.S. Department of Health and Human Services, n.d.). The Privacy Rights Clearinghouse allows a user to search the database with various parameters (The Privacy Rights Clearinghouse, n.d.). If all types of breaches are selected, both government and education options are checked and all years selected (2005-2010), 2011 data breach records are returned. Consider that these breaches are merely those made public. Additionally, a recent study of over 500 data breach forensic investigations conducted by Verizon Business Security Solutions found that the publicly reported breaches are "just the tip of the iceberg", with less than 5% of the more than 500 cases covered in the Verizon study involving some form of disclosure (Claburn, 2008). It is also important to note that these lists and studies are geared towards more obvious losses of data. This type of loss is more tangible and easily reported than an attack on users of a public facing government internet site. Although one type of compromise is very different from the other, both are of equal importance. The current lack of trust in government entities will be exacerbated until government entities ramp up the protection of citizens.

2. Background

The intelligent criminal uses conventional everyday information to begin to attack their victim. A great point made by Ira Winkler is that the criminal doesn't care how they get the information; they're just trying to steal it however they can. Their biggest concern is how to execute the attack easily and without getting caught (Winkler, 2005).
Those in the security industry know a key factor about attacks: once the good guys have found a way to block an attack, the attacker finds a new way to get in. According to Reed Henry, senior vice president of marketing at ArcSight, "Whitelisting, software patching and other preventive approaches are best practices and must continue when it comes to protecting against attacks, but they will always be one or two steps behind the cyber criminals" (Raywood, 2010, para. 2). It is a fact that network layers are more robust than they used to be. It is also a fact that web applications holding sensitive data are becoming more robust and standards are increasingly rigorous each year (i.e. PCI, HIPAA, IRS Publication 1075, FISMA, etc.). More and more legislation is being passed, not just in the United States but across the world, mandating regulations to secure web applications. With that in mind, security experts have to pay special attention to the weakest links.

2.1 Weakest Links

On the whole, the weakest link tends to be the public facing applications which transmit no sensitive data such as social security numbers, credit card data, etc. These public facing applications are gateways for hackers to initiate massive malicious attacks on high traffic sites. Unfortunately, many people are unaware of this threat. Part of this lack of awareness can be explained by the application owner missing the value for a hacker wanting to exploit the target, even when he/she has consulted with a security expert and has been informed of the risks. Sometimes understanding the enemy and being able to think like him/her is a key in understanding what is important to protect (Winkler, 2005). Most government executives and financial decision makers have little desire to spend their already limited security budgets on applications which (in their opinion) hold no risk to their assets or their customers' personal data.
The figure below from Gartner illustrates the gap between web application security attacks and the percentage of budget dollars allocated to application security (Jackson Higgins, 2010). Furthermore, Gartner predicts that the average percentage of overall IT budgets spent on security will decrease by 5 percent this year after a 6 percent drop the previous year (Jackson Higgins, 2010, para. 1).

Figure 1: The Reality: Security and Spending are Unbalanced

These decision makers may not have any idea what hackers stand to gain from attacking their users via these types of public web applications. They might not even know that the public facing sites could also serve as pivot points for attackers to drill much deeper down into protected, internal assets. This lack of knowledge, combined with robust tools which can automate simple attacks, is key to the attacker's success. Hackers do not have to be extremely skilled computer scientists to execute attacks, as there are so many resources and tools online that make their work relatively easy. An attack on users of a public site is akin to burglars finding an unlocked door that nobody ever guards. There isn't even a camera to catch them. Hackers are least likely to get caught executing an attack that no one will notice. This is especially important in today's world where many servers are shared, hosting many different types of applications which serve different types of data. This makes the attack surface more complicated. One application may have great protection and may transmit sensitive data, but if it shares a server with another vulnerable application, it may still be vulnerable as well.
If an attacker can break into one of the applications on a server, they may very well be able to attack everything else on the server. This is another reason it is so important to protect all web applications regardless of the level of sensitive data they transmit.

2.2 How Does This Specifically Relate to Government?

As government budgets shrink, all expenses are scrutinized. As baby boomers retire and as organizations cut costs, positions remain vacant, causing the government workforce to decrease. The remaining labor force has taken on more responsibilities and a larger workload. Nonetheless, a gap exists between the amount of work that needs to be completed and the human resources capable of handling that work. In addition to this widening service gap, consumer expectations are disproportionate to funding allocated. Our society expects on-demand services 24 hours a day, 7 days a week, accessible online. One way governments continue offering public services with fewer people, and help meet this societal expectation, is by implementing a myriad of new technologies and online services. As this economic shortfall continues, constituents will most likely see online services replace services traditionally offered at local offices, court houses, and satellite locations (i.e. drivers license renewal, vehicle registration or tax filings). As new online services are implemented, organizations turn to media and press releases to inform the public. Often social media and networking sites are cost-conscious channels of spreading the word to citizens. Government officials are using mediums such as YouTube, Facebook and Twitter to spread awareness of new services as well as events and other announcements. This increase in online services and the advertisement of these convenient resources attracts more hackers to these sites.
While Government Transparency has had positive effects, the threat must be assessed with application security as the core component in the process. While an attack on a government site not holding sensitive information will not result in a direct data breach, it will cause great damage to the constituents using the sites. According to the XSSed project, the largest online archive of XSS vulnerable sites, the Ohio Department of Health site has a cross-site scripting flaw that was identified in June of 2008 and still has not been resolved (XSSed Project, n.d.). The site contains information about health departments, locations, illness advisories, data statistics, and resources. This site's only purpose is to provide information to the public; however, it is a known gateway for attackers to exploit Ohio citizens. Even if this issue has been resolved, being listed on the XSSed site will attract undue attention from malicious predators. Constituents should not need to worry about being attacked using this type of resource. These sites exist to serve and protect citizens, not expose users to malicious attacks. It is the government's ethical responsibility to ensure each and every one of their sites is safe, secure, and does not pose a threat.

3. Protecting the Constituents: An Example Utilizing the Kentucky Derby

More than 1.5 million people attend the Kentucky Derby Festival each year (Kentucky Derby, n.d.). Approximately twice the number of people who attend a Derby event go online to search for information related to the event such as hotel accommodations, ticket prices, event times, locations, etc. This number can be as high as three million people trolling the internet for public information regarding the Derby. There are many government sponsored websites that advertise and announce Derby Festival information. Chances are most people are not logging into a sensitive web application to gain this information.
With this stage set, we must ask the question: What if one of the most popular of these government web applications were riddled with Cross-Site Scripting or a malicious banner advertisement? Here is a step-by-step guide to how a malicious user could compromise the Kentucky Government Derby website.

**Step 1:** A malicious user has identified a web application vulnerability in a Kentucky Government web application issuing Derby information to the public where user input might exist, such as a search box.

**Step 2:** The malicious user injects a dangerous script into the vulnerable site. At this point, imagination is the only limitation.

**Step 3:** The user visits the page(s) with the embedded malicious script and becomes the attacker's victim just by visiting an innocuous Derby website.

Below are some of the attack options an attacker has from Step 2:

3.1 Reading/Stealing Cookies

By stealing cookies, the attacker can become another user. For example, the injected script embedded in an image tag sends the cookies to the attacker's malicious site, which has referrer logging turned on. The attacker looks for requests with a cookie value in their HTTP logs. This gives them the page that the cookie is valid for. They can then add the cookie to their browser, taking over the user's session. A Kentucky Derby site visitor could browse to a State Parks reservations page to book a cabin during their stay. Since the session is the same and the hacker has stolen the cookie, they could potentially hijack the session and steal the user's credit card information.

3.2 Redirecting User

One very simple and popular use of Cross-Site Scripting is the technique of directing a user to a different site, usually one run by an attacker that has the look and feel of the legitimate site.
The victim is tricked into entering some kind of personal information or creating a bogus account so the attacker can then steal information and take advantage of the victim. Using the Derby example, this may be used on a page that advertises ticket sales. Instead of redirecting the user to the appropriate site where a user can buy legitimate tickets, the user is instead redirected to a rogue site in order to steal the user's credit card and billing information. Additionally, the ticket sales site has lost revenue because of the redirect. It is often said that word of mouth is the best way of advertising a business. This is true whether the attention is positive or negative. Celebrities including Kid Rock, Angelica Houston, and even Queen Elizabeth have attended the Kentucky Derby. These people have access to the media and could very easily speak to the masses in regards to their information being stolen due to this type of vulnerability. This could have an extremely negative impact on the Commonwealth of Kentucky's reputation in terms of the lack of security in the website and could affect future funding allocations to the agency responsible for hosting the Derby.

3.3 Harvesting a Client's Browser History

This attack works well with the Derby example, as it is used for targeted attacks rather than a general mass assault. One of the first things hackers do in a targeted attack is reconnaissance work. Exposure of a user's search history and what applications they may be logged into can give an attacker a lot of profiling information in regards to their victims. This information can be used to break into their corporate networks, compromise personal accounts such as banking sites and email, or expose private information that might lead to cyber stalking. Many people, despite warnings on the nightly news, still use weak passwords.
Kevin Johnson used this technique by adding a "note" to his Facebook page asking people a list of questions that are the most common "forgot-password" questions, masking it as a "getting to know you better" note. He was able to harvest a large amount of account password information.

Grossman, Hansen, Petkov, Rager, and Fogie discuss and illustrate several methods for performing this exploit (Grossman, Hansen, Petkov, Rager, & Fogie, 2007):

- **JavaScript/CSS API "getComputedStyle":** This technique allows the attacker to use link colors to identify whether or not the victim has visited particular sites. This is done via a targeted query against the address history; since most people use Amazon, eBay, and other well-known sites, or email services such as Gmail and Hotmail, the hacker needs only to write a script querying the most popular sites that incorporate login pages.

- **JavaScript Console Error Login Checker:** This method can be used to test whether or not a user is logged into a particular site. It should be noted that this technique is Firefox-specific; Firefox is the most popular web browser according to W3Schools.com (W3Schools.com, n.d.).

- **Stealing search engine queries:** One can build on the JavaScript/CSS API attack above. The results generated from that attack give the attacker a view of the victim's interests, so the attacker can now attempt to discover what search queries the user has run in a search engine. SPI Dynamics (now owned by HP) created a tool called SearchTheft that not only helps illustrate their research on this attack, but also speeds up the process of discovery itself. SPI Dynamics points out in their research brief that there are multiple queries constructed based on how a user inputs their search words in the search engine form (SPI Dynamics Labs, n.d.).
These queries can differ by case, by order, and by how the user initiates the search (i.e., hitting Enter versus clicking the search button). SearchTheft is an algorithm that can significantly speed up the iterations of a particular search: the researchers were able to generate "all variations of a six word search query, some 46,080 combinations" in 5 seconds. When an attacker is performing reconnaissance work to find out more about a victim, this algorithm can be a handy way to speed up the process. Many times, if hackers can find enough information about the person they are attacking, they can begin password inference with better-defined dictionaries, target the victim for phishing attacks, and eventually gain enough information for cyber stalking. For instance, if the victim is a developer, the attacker may find a group where the victim has asked about a problem with a particular application. This may give the attacker a way not only to attack the victim, but also to attack another application.

3.4 Taking Over and Controlling User Browsers (aka Zombies)

Zombies are similar to bots, but zombies run within the browser's context using client-side code, whereas bots run on the operating system at a lower level. There are various frameworks that can be used as controllers, such as AttackAPI and BeEF. These frameworks connect the user to the malicious server. The attacker can then choose what modules to run against that zombie, which could retrieve client information, determine what Firefox extensions might be installed, initiate requests, inject JavaScript to run other attacks, copy clipboard information, and scan ports. In the Derby example, a website visitor may unknowingly expose their entire network mapping, giving the attacker access to private network folders and files.

- Intranet Hacking: In application security, professionals often refer to pivot points.
Pivot points are ways an attacker escalates an attack to another level. Web browsing can create pivot points for attackers to get into internal, protected systems. The following image and step-by-step description depict how this works (Grossman et al., 2007, p. 174):

![Figure 2: Web Browsing Pivot Attack](image)

**Exploit Procedures**

1. A victim visits a malicious Web page or clicks a nefarious link; embedded JavaScript malware then assumes control over their Web browser (Grossman et al., 2007, p. 174).
2. The JavaScript malware loads a Java applet revealing the victim's internal NAT IP address (Grossman et al., 2007, p. 174).
3. Then, using the victim's Web browser as an attack platform, the JavaScript malware identifies and fingerprints Web servers on the internal network (Grossman et al., 2007, p. 174).
4. Attacks are initiated against internal or external Web sites, and compromised information is sent outside the network for collection (Grossman et al., 2007, p. 174).

3.5 XSS Defacements

Defacements of web sites have become less of a concern as other types of attacks have matured and can cause far more damage. As with other security defects, however, old vulnerabilities are still applicable and cannot be totally ignored. In the Derby example, if a rogue user were to exploit a Cross-Site Scripting vulnerability to deface a government-sponsored Derby site, the consequences could include negative media coverage, embarrassment, and a general distrust of government-sponsored web sites. If there were a highly publicized defacement of a government Derby event site, constituents might think twice before trusting or using a Department of Revenue website to file their taxes, even though the state government sites may be completely different and even maintained by a different entity.
4. Convincing Executives

Before discussing how to fix the code itself, one must find a way to convince executives, managers, and developers of the importance of coding against web attacks on public applications. Though regulatory compliance helps back up the security expert's justification, by itself it is not enough, and one has to keep in mind that being compliant is not the same as being secure.

Looking back at the history of cross-site scripting helps identify how people made the wrong assumptions about the consequences of such an attack. Microsoft and CERT (Computer Emergency Response Team) published a paper on cross-site scripting on February 2, 2000. Despite the publicity of the problem, and the fact that malicious hackers were using it to wreak havoc on a small scale, developers tended to ignore the issue. All of this changed in October of 2005 when the Samy Worm hit MySpace. The virus had a widespread influence, affecting more than a million users. Hackers especially saw the implications and began to use XSS in much more devastating ways. The assumption that firewalls and SSL protect web servers has been proven to be a fallacy (Grossman et al., 2007).

In order to get executives to realize the importance, security professionals must build a case within their organization. Putting together a concise portfolio of case studies, whitepapers, PowerPoint presentations, and newsletter articles from outside security consultants and analysts can illustrate the point and help executives become aware of the breaches that have occurred at partner institutions or sister agencies. For instance, the state of Ohio has made many headlines due to the numerous breaches that have occurred over the past few years. While the state of Ohio has worked very hard to dramatically improve its situation, it serves as a good example to other states.
The point is to convince the decision makers that they need to be proactive and learn from the mistakes of others. Turn the argument of "it will never happen to me" into "I don't want to be like them." In addition, developing business value metrics and prioritizing sites will help you meaningfully quantify the business value proposition of risk mitigation and help show executives where their precious dollars should be spent. By developing a concise portfolio of application security awareness documents, compiling case studies of breaches at comparable organizations, conferring with security experts on the state of risk within your organization, and implementing a business value metrics initiative for all sites, you can open up a dialog with executives and decision makers about implementing preventative measures.

5. Prevention

It's important to point out that since public web servers advertise themselves to the world via port 80, there is little protection from malicious intent except for what the developers implement in the code. Web application firewalls may offer a short-term solution that minimizes some of the risk; however, the key to application security lies in writing secure code.

One of the problems associated with this change in web application security is that developers are held to strict deadlines to create a site that functions, and proper security testing often isn't performed. Developers tend not to think about how someone could break their code. There is an old story about a computer science student who built a calculator application for a class. The student wrote the code and tested it to the point that they were sure they would receive an A. When it was time to present the new application, the first thing the professor did was enter A+B. The application simply did not know how to process the input, so it failed. Every developer can learn a lesson from this simple example.
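The calculator anecdote maps directly onto web input handling: code that trusts its input fails on the first unexpected value. A minimal sketch of the difference (the function names are hypothetical, invented for illustration, and not from the paper):

```javascript
// Hypothetical sketch: code that trusts input, like the student's calculator.
function naiveAdd(input) {
  // Assumes input always looks like "2+3" -- "A+B" silently becomes NaN.
  const [a, b] = input.split("+");
  return Number(a) + Number(b);
}

// Defensive version: validate against a whitelist pattern before using the input.
function defensiveAdd(input) {
  const m = /^\s*(\d+)\s*\+\s*(\d+)\s*$/.exec(input);
  if (m === null) {
    // Reject anything that is not <number>+<number> instead of failing later.
    throw new Error("Rejected: input is not of the form <number>+<number>");
  }
  return Number(m[1]) + Number(m[2]);
}
```

The naive version does not fail loudly; it produces garbage (`NaN`) that propagates downstream, which is exactly the class of behavior an attacker probes for.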
There are simple preventative measures we can employ to help avoid problems like the cross-site scripting examples in the previous section. The most important thing we can do is educate our developers and implement secure coding standards and code reviews. The SDLC (Software Development Lifecycle) as well as ITIL (Information Technology Infrastructure Library) instruct organizations to implement security from project kickoff through implementation. This includes not only new projects but also enhancements to existing applications. It is imperative to have input from an organization's application security professionals from the onset of a project, because it is very difficult to inject secure code into an application after it has been written and/or designed. It is also much more expensive to retrofit security directly before implementation. If an application is written without regard to security, the developers may have to change hundreds of lines of code, and with tight deadlines in place, problems may get deployed without being fixed at all.

If developers are praised after a public-facing site is published without major security issues, the management team and security personnel can point to that example as a good practice for future maintenance and development projects. This practice can also help strengthen the relationship between the developer group and the security group. For instance, if a particular Derby website hosted by the Governor of the Commonwealth of Kentucky is usually visited by 1.5 million people, it should be noted that the developers responsible for implementing such a safe site have helped protect 1.5 million people. No development effort should be taken lightly when looking at simple business rules. Ethical development standards demand that every site be built so that it can be maintained with security in mind.
Enterprise committees can help by creating minimum-level secure coding standards and review processes for all web applications. These committees can even develop a library of subroutines which each agency can utilize in its own applications. These libraries of ready-made code will help with the consistency of the security program and ease remediation efforts by saving time for both the developers and the security specialists. If enterprise committees don't exist at your organization, work with executives across departments and cabinets to organize a strong team of decision makers to expose the problems of web application attacks and the need to allocate funding to address these concerns. The key is to "bake security in instead of brushing it on."

It should never be assumed that a public-facing website will always hold only public data. Sites grow and evolve constantly. As business rules change, new political appointees are elected, and development groups grow or wane, systems can become repurposed or combined. This means that what yesterday was a simple application to advertise Derby Festival events may morph into something completely unexpected by the original developer.

With this in mind, here are a few general prevention methods to keep applications safe. First, reliable server-side input validation should be incorporated into code in a layered approach (i.e., whitelists followed by blacklists and HTML encoding). This method can prevent many currently known vulnerabilities as well as new attack vectors that have not yet been discovered. Additionally, encryption should always be included wherever user IDs and passwords are utilized. It is also important to keep a detailed list (version, implementation date, etc.) of any and all open source pieces or parts (e.g., plugins, APIs) and make sure they stay patched whenever a vulnerability is published.
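The layered approach described above — a whitelist check first, then HTML encoding of anything echoed back to a page — can be sketched as follows. This is a hypothetical illustration, not code from the paper; the function names and the whitelist pattern are invented for the example:

```javascript
// Hypothetical whitelist for a Derby-style search box: letters, digits,
// spaces, and a few punctuation marks, capped at 100 characters.
const SEARCH_WHITELIST = /^[A-Za-z0-9 .,'-]{1,100}$/;

function isAllowedSearchTerm(term) {
  return SEARCH_WHITELIST.test(term);
}

// Encode the characters XSS payloads depend on before writing to a page.
// Ampersand must be encoded first so the other entities are not double-encoded.
function htmlEncode(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Layered defense: whitelist first, then encode whatever is echoed back.
function renderSearchEcho(term) {
  if (!isAllowedSearchTerm(term)) {
    return "Invalid search term.";
  }
  return "You searched for: " + htmlEncode(term);
}
```

The whitelist rejects whole classes of payloads outright, and the encoding neutralizes anything that slips through, which is why the paper recommends layering the techniques rather than relying on any one of them alone.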
Lastly, and most importantly, never assume vendor solutions are secure. Always assess newly purchased software in house and/or through a third party. If vulnerabilities are identified, report them to the vendor's service desk to seek remediation. Once the vendor publishes a patch or update to resolve the issue, follow up with another assessment. It is also advisable to always test software in your own environment, or to obtain the appropriate permissions and legal agreements if the application exists in an environment outside of your control.

6. Conclusion

In order to fight cyber enemies who are becoming more and more efficient, the security industry has to work harder to keep up with attackers. While we have improved at protecting sites transmitting sensitive data by enhancing our infrastructure, creating robust firewalls, and implementing solid input validation, it is time to focus on the weaknesses of our public-facing sites, which transmit no sensitive data yet could serve as gateways to sensitive information and networks. It is time to show customers and colleagues how attacks like cross-site scripting can be far more harmful than simply stealing sensitive data. Public-facing sites pose as much, if not more, of a threat due to their inconspicuous nature. The government especially has a legal and moral obligation to protect constituents from becoming unsuspecting victims of malicious attacks as more and more services are offered online, despite or because of budget cuts.

There needs to be a cultural change, and this type of change is the hardest to implement in hard economic times. With tight budgets, making a case to executives to spend funding on preventative measures to secure public-facing sites is very difficult. Security professionals need to build a case within their organization for the prioritization of application security funding.
It is imperative that agencies become proactive instead of reactive when it comes to application security. Learning the hard lessons after a breach has taken place is too late. The public has watched businesses go bankrupt over the cost of an individual breach. Government is not immune to this kind of economic hemorrhaging, as we saw with the Virginia Prescription Monitoring Program and the state of Ohio's security breaches. As custodians of the citizens' data and providers of public service, there is an ethical responsibility that can no longer be ignored.

In the long term, instilling secure coding practices, processes, resources, and standards will save the government money. The culture change is based on the same principles the insurance business relies on. Though an individual may never suffer a house fire, the majority of people still invest in fire insurance. Web application security concepts are of the same nature. Anyone who implements an insecure application is merely playing the odds, just as a Derby attendee bets on horses at Churchill Downs.
7. References

Protecting Users: The Importance of Defending Public Sites
IT Summit 2014

Going Mobile: Moving Scholarship Universe To The Small Screen

Presenter: don-E Merson, Scholarship Universe

The dilemma we all know
- Paper! Plastic!

The solution
- Now you're thinking! Paper or Plastic? I'm NEITHER.

Desktop or Mobile?
- Programming for desktop... Program for mobile
- So what do you do?

Responsive – It's kind of both
- Now you're thinking!

This is our story...

/Who (am I|are we)?/
- don-E Merson, Architect

CURRENT TEAM
- Ken Downs, Program Manager
- don-E Merson, Applications Systems Analyst/Developer, Principal/Architect
- Emily Romero, Student, Research Team Lead
- Kyra Miller, Student, Application Programmer
- Rahul D. Kamath, Student, Application Developer
- Lauralyn P. Gonzalez, Student, Researcher
- Rafael Garcia, Student, Researcher/Developer
- Sam Rodriguez, Student, Researcher
- Amit Kumar, Student, Application Developer
- Rahul P. Kamath, Student, Application Developer
- Mariah E. Falshaw, Student, Researcher
- Susana Torres, Student, Researcher

What is Scholarship Universe?

Apps (student application screens)
- Welcome back! Your To-Do's / Deadlines / See your scholarships » / About / Calendar
- Welcome back! Update: We have made some changes! Please email OSFA-SU@email.arizona.edu if you have any questions or suggestions.
- What's New: Check out the latest improvements, changes, and updates to the system.
- Coming Soon
- Your Feedback: We are in the process of a major site redesign. Please use the link below to tell us your thoughts about what works, what doesn't, and what you'd like to see us improve next. Tell us what you think »
- Your Settings: Manage your subscription and frequency for email notifications from Scholarship Universe.
Coming Soon

Admin Application (SUAA)

# Reporting Application (SURA)

## Scholarship Universe Reporting Application (SURA)

### Current Activity

![Bar chart showing the number of pages per minute over the last 20 minutes.](chart.png)

### Table of Activity

<table>
<thead>
<tr>
<th>First Name</th>
<th>Last Name</th>
<th>App</th>
<th>Machine</th>
<th>Browser</th>
<th>Browser Information</th>
<th>Email</th>
</tr>
</thead>
<tbody>
<tr>
<td>Donald</td>
<td>Merson</td>
<td>susa</td>
<td>SU-WEB1</td>
<td>Google Chrome</td>
<td>45.0.2454.101 – Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36</td>
<td><a href="mailto:dmerson@email.arizona.edu">dmerson@email.arizona.edu</a></td>
</tr>
</tbody>
</table>

Automated Email System (AES)

Example URL for Campaign:
<https://scholarshipuniverse.arizona.edu/suha/?utm_source=COMP&utm_medium=Email&utm_content=##CampaignAor##&utm_campaign=Reminder&vid=##vid##&Login Now!>

Version A

Subject A: ##FirstName## College of Medicine: Phoenix Scholarship

Body A: Dear ##FirstName##, According to our records, you indicated that you're a student in the College of Medicine: Phoenix. Their general scholarship application is currently available in Scholarship Universe and you may be eligible to apply. The deadline for this scholarship is...

Yenta
- What business is yenta in? Yours.

Machine Matcher (SUMM)
- Web Crawler
- Machine Learning
- RegWhy – a better regex engine
- Bringing to students very soon.
- WE ARE CRAWLING THE WEB SO YOU WON'T HAVE TO

We have cred...

Models of Excellence Award – 2015
Scholarship Universe was recognized as a Model of Excellence by University Business for being an innovative and effective inter-departmental initiative that bolsters student success. University Business is the leading publication for senior managers at colleges and universities throughout the United States, reaching 75,000 leaders who manage enrollment, technology, business, finance, facilities and academic affairs.

Campus Technology Innovators Awards – 2012
Campus Technology is one of higher education's top information sources. Out of 354 nominees, Scholarship Universe was recognized as an extraordinary technology solution and was one of only two projects to win the national innovator award for Student Systems and Services!

ADDY Award – 2011
AAF Tucson recognizes the best in advertising through several awards and award programs, including the national ADDY Awards competition. Scholarship Universe won a bronze ADDY Award for our website, banners, tent and ads!

Our story...
- Incorporated Responsive Design
- But not everywhere...
- I will explain why
- How we did it
- What we learned

Who this talk is for...
- General development audience
- Talk about what to do
- Will show general idea of how
- Talk to me for more geeky details

Your first step is...
- Measure
- We used Google Analytics

What to measure...
- Bounce Rate (with caveats)
- Mobile Percentage
- Session Duration

<table>
<thead>
<tr>
<th>Device Category</th>
<th>Sessions</th>
<th>% New Sessions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Desktop</td>
<td>4,401</td>
<td>34.65%</td>
</tr>
<tr>
<td>Mobile</td>
<td>399</td>
<td>43.36%</td>
</tr>
<tr>
<td>Tablet</td>
<td>148</td>
<td>44.59%</td>
</tr>
</tbody>
</table>

What we have measured...
- Over a million page views in the first six months
- 8.23% increase over the previous year
- Bounce rate dropped 36.71%
- Allowed us to validate our work

Create new types of measurements
- Not everyone clicks on a campaign link
- AES tracks every page for anyone who received an email

<table>
<thead>
<tr>
<th>Date of Login</th>
<th># Unique Logins</th>
<th>Campaign Sent</th>
<th>Frequency</th>
<th>Population</th>
<th># Sent</th>
<th>% Effective</th>
</tr>
</thead>
<tbody>
<tr>
<td>06/26/15</td>
<td>31</td>
<td>06/26/15</td>
<td>One Time</td>
<td>Active Student With College</td>
<td>346</td>
<td>0.089%</td>
</tr>
<tr>
<td>06/27/15</td>
<td>15</td>
<td>06/26/15</td>
<td>One Time</td>
<td>Active Student With College</td>
<td>346</td>
<td>0.043%</td>
</tr>
<tr>
<td>06/28/15</td>
<td>14</td>
<td>06/26/15</td>
<td>One Time</td>
<td>Active Student With College</td>
<td>346</td>
<td>0.040%</td>
</tr>
<tr>
<td>06/29/15</td>
<td>20</td>
<td>06/26/15</td>
<td>One Time</td>
<td>Active Student With College</td>
<td>346</td>
<td>0.057%</td>
</tr>
<tr>
<td>06/30/15</td>
<td>26</td>
<td>06/26/15</td>
<td>One Time</td>
<td>Active Student With College</td>
<td>346</td>
<td>0.075%</td>
</tr>
</tbody>
</table>

But remember...
"Not everything that counts can be counted, and not everything that can be counted counts." – Albert Einstein

Step 2: Steal someone else's work...

"Good artists copy, great artists steal." – Pablo Picasso

What we took...
- Bootstrap
- UA design group menus

Things to consider
- Play nice with the UA
- Know what **YOU** do well
- Proven track record
- Large community
- Commitment to compatibility
- Build for change

Step 3 – Pick your battles
- What has the highest impact to the customer?
- What has the lowest time to deliver?

Remember... be this guy
- "Premature optimization is the root of all evil." – Donald Knuth

Don't be this guy...
- OPTIMIZE ALL THE THINGS

Our first battle plan
- Upgrade SUSA Master page
- Pandora – got out of the box!
- Scholarships List Page
- Create columns that hide in the table
- Left SUDA and SUAA the same

Step 4 – Test it out

"No battle plan survives contact with the enemy." – Colin Powell

Only test...
- The scenarios you want to work in production
- But not in production

Mobile adds new challenges
- Screen size
- Browsers
- Form factors
- You can't duplicate every scenario
- But there are tools to help

Lots of emulators
- Google Chrome Developer Tools

Emulators approximate
- Test on real devices if possible
- Can borrow from the library!

**Laptops and Netbooks**
All our libraries have laptops and MacBooks which are available for 72 hour checkout to UA students. All our libraries have netbooks which are available for 120 hour (5 day) checkout to UA students.

**iPads and Android Tablets**
The Main Library has iPads, iPad Minis, and Android tablets available for 72 hour checkout to UA students. The Science-Engineering and Fine Arts Libraries have iPads available for 72 hour checkout to UA students.
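The analytics tables in this deck bucket sessions into Desktop, Mobile, and Tablet categories from the user-agent strings that systems like AES log. A hypothetical sketch of that bucketing (not the Scholarship Universe code; the heuristics are illustrative):

```javascript
// Hypothetical device-category classifier for analytics, based on the same
// user-agent strings captured in the activity logs shown earlier.
function deviceCategory(userAgent) {
  const ua = userAgent.toLowerCase();
  // Tablets first: iPads, and Android tablets, which historically report
  // "android" in the UA without the "mobile" token.
  if (ua.includes("ipad") || (ua.includes("android") && !ua.includes("mobile"))) {
    return "Tablet";
  }
  if (ua.includes("iphone") || ua.includes("mobile")) {
    return "Mobile";
  }
  // Everything else (Windows, Macintosh, Linux desktops) falls through.
  return "Desktop";
}
```

Heuristics like these are approximate, which is one reason the deck stresses testing on real devices rather than trusting emulators or UA strings alone.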
Test what people use

<table>
<thead>
<tr>
<th>Operating System</th>
<th>Sessions</th>
<th>% Sessions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>35,452</td>
<td>56.21%</td>
</tr>
<tr>
<td>Macintosh</td>
<td>21,753</td>
<td>34.49%</td>
</tr>
<tr>
<td>iOS</td>
<td>3,462</td>
<td>5.49%</td>
</tr>
<tr>
<td>Android</td>
<td>1,339</td>
<td>2.12%</td>
</tr>
<tr>
<td>Chrome OS</td>
<td>852</td>
<td>1.35%</td>
</tr>
<tr>
<td>Linux</td>
<td>193</td>
<td>0.31%</td>
</tr>
<tr>
<td>Windows Phone</td>
<td>21</td>
<td>0.03%</td>
</tr>
</tbody>
</table>

Think hard about it...

Screen Sizes

Step 5 – Deliver
- What do you call a product where customers are reporting no problems?
- Not Deployed!
- Deploy!

Step 6 – Solicit feedback

Redesign Feedback Form

1. How often do you use Scholarship Universe?
- Once a day or more
- Once or twice a week
- Once or twice a month
- A few times each semester
- A few times each year
- Less than once a year
- This is my first time using Scholarship Universe

2. Compared with your previous experience with Scholarship Universe, how much more or less do you like the layout as it currently appears?
- I like it much more now
- I like it a little more now
- I like it about the same
- I like it less now
- I like it much less now
- I can't tell what's changed or am not sure

3. Compared with the last time you used Scholarship Universe, is information more or less difficult to find now?
   - Information is much easier to find now
   - Information is easier to find now
   - Finding information is about the same as before
   - Information is much harder to find now
   - Not sure
4. What is one thing you’d like to improve in Scholarship Universe? Character limit: 500

Winning!
- Some people will:
  - Like it
  - Hate it
- Try to find out why
- What does the data say?
- If you can -> do focus groups

Step 7 - Evaluate
- Look at the big picture
- Don’t get caught by the endowment effect
- Restart the process

Does it all have to be mobile?
- How will people use it?
- How much information will people need?

When mobile is not the answer
- Wide grids with lots of data
- Displaying full documents on the screen for users

General Principles
- Is it possible to make this smaller while keeping it usable?
- Does it pull so much data that a mobile phone would balk?
- Are the people going to use this page “on the road?”

Some of our apps are 100% mobile
- SUDA
- SUAA
- Sometimes you need...
- A soft hardness

What kind of issues have we found
- Tables
  - We call them Grids
  - I will compromise => Tabular Data
- Really long names
- Too much data
- What to hide or show
- Too much JavaScript
- HTML in the database

Tabular Data (Grids)
- Problem
  - Too many columns to display on mobile
- Solutions
  - Hide less important columns
  - Allow scrolling to the right
  - Change to card view
- New Problem
  - Don’t have sort ability
  - Had to create a filter/sorting component

Too many words...
- Problem
  - The values from certain fields can vary
  - Example: Scholarship Names
- Solution
  - Use Bootstrap column classes
  - `col-lg-(number)`
  - Row

Resizing the screen
- Problem
  - How to display data so it flows down the screen
- Solution
  - Once again we use default Bootstrap classes
  - Media breaks
  - Use groupings
- Complaints
  - We can really do better by adding more media breaks

Conflicts
- Problem
  - JavaScript frameworks break each other
  - Bootstrap tooltips broke due to a Kendo and jQuery conflict
- Solution
  - Pay attention to the order of your JavaScript
  - `.noConflict()`
  - Quit adding more JavaScript frameworks

Menus
- Problem
  - Dropdown menus on mobile
  - Don’t use multi-layer menus
- Solution
  - Implemented slickmenu
  - Implemented UA group design colors

HTML in the database
- Problem
  - Specifically, styles in the database
- Solution
  - Update statements
  - If class => update definition
  - Never again!

Some things we are going to try
- Our framework allows us to display different pages programmatically
  - .mobile
  - Uses user-agent
- Need to do a better job at minification
  - Once again, have a built-in framework
- Mostly just try to suck a little less every day

Some personal notes on mobile
- If you don’t have a mobile friendly site
  - You can lose people
- Too much mobile
  - You can lose people
  - ESPN - don’t use their product anymore
  - Unnamed U of A site I no longer use

In Conclusion
- Mobile migration is not a one time thing
- Use the steps given here
  - Set up measurements
  - Steal stuff you are not good at
  - Pick your battles
  - Test it out
  - Deliver it
  - Solicit feedback
  - Evaluate
- Learn from our mistakes
- Deploy a little bit at a time
- Bring your own bags to the grocery store

That's all Folks! Questions???
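The grid compromise described above (keep only the most important columns on small screens) can be sketched as a small pure function. The column names and priorities here are hypothetical, for illustration only; they are not Scholarship Universe's actual fields:

```javascript
// Sketch: pick which grid columns survive on a narrow screen.
// Each column carries a priority; lower numbers are more important.
const columns = [
  { name: 'Scholarship Name', priority: 1 },
  { name: 'Deadline',         priority: 2 },
  { name: 'Amount',           priority: 3 },
  { name: 'Sponsor',          priority: 4 },
  { name: 'Last Updated',     priority: 5 },
];

function visibleColumns(cols, maxCols) {
  // Keep the maxCols most important columns, preserving original order.
  const keep = [...cols].sort((a, b) => a.priority - b.priority)
                        .slice(0, maxCols);
  return cols.filter(c => keep.includes(c)).map(c => c.name);
}

console.log(visibleColumns(columns, 2)); // ['Scholarship Name', 'Deadline']
```

In practice the hiding itself is done with CSS (e.g. Bootstrap's `hidden-xs` on the low-priority `<th>`/`<td>` cells); the point of a function like this is that once columns are hidden, sorting and filtering have to move into a separate component, as the slides note.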
6.858 Fall 2014 Lab 5: Browser security **Handed out:** Lecture 14 **Due:** Two days after Lecture 17 (5:00pm) **Introduction** This lab will introduce you to browser-based attacks, as well as to how one might go about preventing them. When working on the exercises, you may find the following hints and tools useful: - Firefox's JavaScript console and DOM inspector, both accessible from the Tools menu. The JavaScript console lets you see which exceptions are being thrown and why. The DOM Inspector lets you peek at the structure of the page and the properties and methods of each node it contains. (If the DOM Inspector isn't installed, make sure it's selected when you install Mozilla Firefox.) You might also want to try Firebug. In Firefox 16, these tools are accessible from the Web Developer menu item under the Tools menu. - You may need to use CSS to make your attacks invisible to the user. You should know what basic syntax like `<style>.warning{display:none}</style>` means, and you should feel free to use stealthy attributes like `style="display: none; visibility: hidden; height: 0; width: 0; position: absolute"` in the HTML of your attacks. Beware that frames and images may behave strangely with `display: none`, so you might want to use `visibility: hidden` instead. For instance, to create a hidden iframe, try `<iframe style="visibility: hidden" ...>`. - You can use the LiveHTTPHeaders browser extension to see the cookies and form data that a browser sends. - If you need to encode certain special characters in URL parameters, such as newlines, percents, or ampersands, take a look at `encodeURIComponent` and `decodeURIComponent`. - **Beware of Race Conditions:** Depending on how you write your code, all four of these attacks could potentially have race conditions. Attacks that fail on the grader's browser during grading will receive less than full credit. 
To ensure that you receive full credit, you should wait after making an outbound network request rather than assuming that the request will be sent immediately. You may find the `load` event on the `iframe` element helpful. **Network setup** For this lab, you will be crafting attacks in your web browser that exploit vulnerabilities in the zoobar web application. To ensure that your exploits work on our machines when we grade your lab, we need to agree on the URL that refers to the zoobar web site. For the purposes of this lab, your zoobar web site must be running on `http://localhost:8080/`. If you have been using your VM's IP address, such as `http://192.168.177.128:8080/`, it will not work in this lab. If you are using KVM or VirtualBox, the instructions we provided in lab 1 already ensure that port 8080 on `localhost` is forwarded to port 8080 in the virtual machine. If you are using VMware, we will use `ssh`'s port forwarding feature to expose your VM's port 8080 as `http://localhost:8080/`. First find your VM IP address. You can do this by going to your VM and typing `ifconfig`. (This is the same IP address you have been using for past labs.) Then configure SSH port forwarding as follows (which depends on your SSH client): - For Mac and Linux users: open a terminal on your machine (not in your VM) and run `ssh -L localhost:8080:localhost:8080 httpd@VM-IP-ADDRESS` (the password is 6858). - For Windows users, this should be an option in your SSH client. In PuTTY, follow these instructions. Use 8080 for the source port and localhost:8080 for the remote port. The forward will remain in effect as long as the SSH connection is open. **Setting up the web server** Before you begin working on these exercises, please use Git to commit your Lab 3 solutions, fetch the latest version of the course repository, and then create a local branch called lab5 based on our lab5 branch, origin/lab5. Do not merge your lab 2 and 3 solutions into lab 5.
Here are the shell commands:

```
httpd@vm-6858:~$ cd lab
httpd@vm-6858:~/lab$ git commit -am 'my solution to lab3'
[lab3 c54dd4d] my solution to lab3
 1 files changed, 1 insertions(+), 0 deletions(-)
httpd@vm-6858:~/lab$ git pull
Already up-to-date.
httpd@vm-6858:~/lab$ git checkout -b lab5 origin/lab5
Switched to a new branch 'lab5'
httpd@vm-6858:~/lab$ make
```

For those using the provided .zip files, please download lab5.zip from the MIT OpenCourseWare site. Note that lab 5's source code is based on the initial web server from lab 1. It does not include privilege separation or Python profiles. Now you can start the zookws web server, as follows:

```
httpd@vm-6858:~$ ./zookws
```

Open your browser and go to the URL http://localhost:8080/. You should see the zoobar web application. If you don't, go back and double-check your steps. If you cannot get the web server to work, get in touch with course staff before proceeding further. **Crafting attacks** You will craft a series of attacks against the zoobar web site you have been working on in previous labs. These attacks exploit vulnerabilities in the web application's design and implementation. Each attack presents a distinct scenario with unique goals and constraints, although in some cases you may be able to re-use parts of your code. We will run your attacks after wiping clean the database of registered users (except the user named "attacker"), so do not assume the presence of any other users in your submitted attacks. You can run our tests with make check; this will execute your attacks against your server, and tell you whether your exploits seem to be working correctly or not. As in previous labs, keep in mind that the checks performed by make check are not exhaustive, especially with respect to race conditions. Exercises 1, 3, and 4, as well as the challenge exercise, require that the displayed site look a certain way.
The `make check` script is not quite smart enough to compare how the site looks like with and without your attack, so you will need to do that comparison yourself (and so will we, during grading). When `make check` runs, it generates reference images for what the attack page is supposed to look like (`answer-XX.ref.png`) and what your attack page actually shows (`answer-XX.png`), and places them in the `lab5-tests/` directory. Make sure that your `answer-XX.png` screenshots look like the reference images in `answer-XX.ref.png`. To view these images from `lab5-tests/`, either copy them to your local machine, or run `python -m SimpleHTTPServer 8080` and view the images by visiting `http://localhost:8080/lab5-tests/`. Note that `SimpleHTTPServer` caches responses, so you should kill and restart it after a `make check` run. We will grade your attacks with default settings using the current version of Mozilla Firefox on Ubuntu 12.04 (as installed on, e.g., the Athena workstations) browser at the time the project is due. We chose this browser for grading because it is widely available and can run on a variety of operating systems. There are subtle quirks in the way HTML and JavaScript are handled by different browsers, and some attacks that work or do not work in Internet Explorer or Chrome (for example) may not work in Firefox. In particular, you should use the Mozilla way of adding listeners to events. We recommend that you test your code on Firefox before you submit, to ensure that you will receive credit for your work. For exercises 1 and 3, you will need a server-side script to automatically email information captured by your client-side JavaScript code to the TAs for grading. We have provided this script for you. Please review the instructions at `http://css.csail.mit.edu/6.858/2014/labs/sendmail.php` and use that URL in your attack scripts to send emails. 
You may send as many emails as you like while working on the project, but please do not attack or abuse the email script. --- **Exercise 1: Cookie Theft.** Construct an attack that will steal a victim's cookie for the `zoobar` site when the victim's browser opens a URL of your choosing. (You do not need to do anything with the victim's cookie after stealing it, for the purposes of this exercise, although in practice an attacker could use the cookie to impersonate the victim, and issue requests as if they came from the victim.) - Your solution is a URL starting with `http://localhost:8080/zoobar/index.cgi/users?` - The grader will already be logged in to the `zoobar` site before loading your URL. - Your goal is to steal the document cookie and email it to yourself using the email script. - Except for the browser address bar (which can be different), the grader should see a page that looks exactly as it normally does when the grader visits `http://localhost:8080/zoobar/index.cgi/users`. No changes to the site appearance or extraneous text should be visible. Avoiding the red warning text is an important part of this attack. (It's ok if the page looks weird briefly before correcting itself.) **Hint:** You will need to find a cross-site scripting vulnerability in the `/zoobar/index.cgi/users` page, and then use it to inject Javascript code into the browser. What input parameters from the HTTP request does the resulting `/zoobar/index.cgi/users` page include? Which of them are not properly escaped? **Hint:** To steal the cookie, read about how cookies are accessed from Javascript. Please write your attack URL in a file named `answer-1.txt`. Your URL should be the only thing on the first line of the file. Note that special characters in your URL must be percent-encoded; for example, a space can be written as %20 (or +), and a literal percent sign as %25. Here is a [URL encoding reference](#) and a handy [conversion tool](#). You can also use quoting functions in the python urllib module or the JavaScript encodeURIComponent function to URL encode strings.
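The URL-encoding step matters because an injected payload is full of characters that are meaningful inside a URL. A quick sketch with JavaScript's built-in functions — the payload below is a harmless placeholder, not a working attack against zoobar:

```javascript
// Percent-encode a query-string value before appending it to a URL.
const payload = '<script>alert(1)</script>';
const encoded = encodeURIComponent(payload);
console.log(encoded);
// %3Cscript%3Ealert(1)%3C%2Fscript%3E

// decodeURIComponent reverses the encoding exactly.
console.log(decodeURIComponent(encoded) === payload); // true
```

Python's equivalent is `urllib.parse.quote` (use `safe=''` to also encode slashes), which is handy when generating attack URLs from a script.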
**Exercise 2: Cross-Site Request Forgery.** Construct an attack that transfers zoobars from a victim to the attacker, when the victim's browser opens an HTML document that you construct. Do not exploit cross-site scripting vulnerabilities (where the server reflects back attack code), such as the one involved in exercise 1 above, or logic bugs in transfer.py that you fixed in lab 3. - Your solution is a short HTML document named `answer-2.html` that the grader will open using the web browser. - Be sure that you do **not** load the `answer-2.html` file from http://localhost:8080/..., because that would place it in the same origin as the site being attacked, and therefore defeat the point of this exercise. - The grader (victim) will already be logged in to the zoobar site before loading your page. - Your goal is to transfer 10 zoobars from the grader's account to the "attacker" account. - The browser should be redirected to [http://css.csail.mit.edu/6.858/2014/](http://css.csail.mit.edu/6.858/2014/) as soon as the transfer is complete (so fast the user might not notice). - The location bar of the browser should not contain the zoobar server's name or address at any point. This requirement is important, and makes the attack more challenging. - **Hint:** One way to construct the attack is to develop `answer-2.html` in small steps that incrementally meet all the requirements. - **Hint:** You might find the `target` attribute of the HTML form element useful in making your attack contained in a single page. For exercise 2, you should test if your attack works by opening your `answer-2.html` file in your browser, and seeing if you achieve the desired result while meeting the requirements for the attack. For exercise 2, you will need to synthesize an HTTP POST request from your HTML page. To do so, consider creating an HTML form whose `action` attribute points to `.../index.cgi/transfer`, and which contains `input` fields with the necessary names and values. 
Look at the source of the HTML that's generated by `index.cgi/transfer` to get an idea of what this form should look like. You can submit a form by using JavaScript to invoke the `click` method on the submit button, or the `submit` method on the form itself. **Exercise 3: Side Channels and Phishing.** Construct an attack that will steal a victim's zoobars, if the user is already logged in (using the attack from exercise 2), or will ask the victim for their username and password, if they are not logged in. The attack scenario is that the victim opens an HTML document that you constructed. - Your solution is an HTML document named `answer-3.html` that the grader will open using the web browser. - As with the previous exercise, be sure that you do **not** load the `answer-3.html` file from http://localhost:8080/. - The grader will run the code once while logged in to the zoobar site before loading your page. - The grader will run the code a second time while not logged in to the zoobar site before loading your page. - When the browser loads your document, the document should sniff out whether the user is logged into the zoobar site: - If the user is not logged in, present an HTML document visibly identical to the zoobar login page, by copying the HTML from the real zoobar login page (this should be self-contained in the HTML file you turn in): - When the "Log in" button is pressed, send the username and password (separated by a comma) using the email script. - Once the email is sent, log the user into the real zoobar website (the hostname should change to localhost:8080). - The behavior for the Register button is left unspecified. - If the user is logged in, then forward to the attack from exercise 2. - Hint: The same-origin policy generally does not allow your attack page to access the contents of pages from another domain. What types of files can be loaded by your attack page from another domain? Does the zoobar web application have any files of that type? 
How can you infer whether the user is logged in or not, based on this? - Hint: develop your attack in steps, incrementally addressing all of the above requirements. --- **Exercise 4: Profile Worm.** Create a worm that will transfer 1 zoobar from the victim to the attacker, and spread to the victim's profile, when the victim views the profile of another infected user. The scenario is that the first victim views the attacker's profile, and the worm spreads onward from there. - Your solution is a profile that, when viewed, transfers 1 zoobar from the current user to a user called "attacker" (that's the actual username) and replaces the profile of the current user with itself (i.e., the attack profile code). - Your malicious profile should display the message **Scanning for viruses...** when viewed, as if that was the entirety of the viewed profile. - To grade your attack, we will cut and paste the submitted profile code into the profile of the "attacker" user, and view that profile using the grader's account. We will then view the grader's profile with more accounts, checking for both the zoobar transfer and the replication of profile code. - The transfer and replication should be reasonably fast (under 15 seconds). During that time, the grader will not click anywhere. - During the transfer and replication process, the browser's location bar should remain at http://localhost:8080/zoobar/index.cgi/users?user=username, where **username** is the user whose profile is being viewed. The visitor should not see any extra graphical user interface elements (e.g., frames), and the user whose profile is being viewed should appear to have 10 zoobars, and no transfer log entries. These requirements make the attack harder to spot for a user, and thus more realistic, but they make the attack also harder to pull off. - You will not be graded on the corner case where the user viewing the profile has no zoobars to send. 
- Hint: Start by writing a simple HTML profile and uploading it, just to familiarize yourself with how an HTML profile works in zoobar. Next, develop the solution profile in small steps (e.g., first arrange that the malicious profile code transfers 1 zoobar to the attacker, and then make it spread to the visitor's profile). - Hint: This [MySpace vulnerability](https://en.wikipedia.org/wiki/MySpace_vulnerability) may provide some inspiration. Please write your profile in a file named **answer-4.txt**. --- For exercise 4, you may need to create an **iframe** and access data inside of it. You can use the DOM methods `document.createElement` and `document.body.appendChild` to do so. Getting access to form fields in an iframe differs by browser, and only works for frames from the domain (according to the same-origin policy). In Firefox, you can do `iframe.contentDocument.forms[0].zoobars.value = 1;`. Another approach may be to use `XMLHttpRequest` instead of an iframe. **Challenge: Password Theft.** Create an attack that will steal the victim's username and password, even if the victim is diligent about entering their password only when the URL address bar shows [http://localhost:8080/zoobar/index.cgi/login](http://localhost:8080/zoobar/index.cgi/login). - Your solution is a short HTML document named `answer-chal.html` that the grader will open using the web browser. - The grader will not be logged in to the zoobar web site before loading your page. - Upon loading your document, the browser should immediately be redirected to [http://localhost:8080/zoobar/index.cgi/login](http://localhost:8080/zoobar/index.cgi/login). The grader will enter a username and password, and press the "Log in" button. - When the "Log in" button is pressed, send the username and password (separated by a comma) using the email script. - The login form should appear perfectly normal to the user. 
No extraneous text (e.g., warnings) should be visible, and assuming the username and password are correct, the login should proceed the same way it always does. For this final attack, you may find that using `alert()` to test for script injection does not work; Firefox blocks it when it's causing an infinite loop of dialog boxes. Try other ways to probe whether your code is running, such as `document.loginform.login_username.value=42`. **Deliverables** Make sure you have the following files: `answer-1.txt`, `answer-2.html`, `answer-3.html`, `answer-4.txt`, and if you are doing the challenge, `answer-chal.html`, containing each of your attacks. Feel free to include any comments about your solutions in the `answers.txt` file (we would appreciate any feedback you may have on this assignment). Run `make submit`. The resulting `lab5-handin.tar.gz` will be graded. You're done! **Acknowledgments** Thanks to Stanford's [CS155](https://www.stanford.edu/class/cs155) course staff for the original version of this assignment.
LAVA: Large-scale Automated Vulnerability Addition

Engin Kirda, Andrea Mambretti, Wil Robertson (Northeastern University); Brendan Dolan-Gavitt (NYU Tandon); Patrick Hulin, Tim Leek, Fredrich Ulrich, Ryan Whelan (MIT Lincoln Laboratory)

This Talk

• In this talk, we explore how to **automatically add large numbers of bugs to programs**
• Why would we want to do this?
  • Computer programs don't have enough bugs
  • We want to put backdoors in other people's programs

Vulnerability Discovery

• Finding vulnerabilities in software automatically has been a major research and industry goal for the last 25 years, for example:
  • An Empirical Study of the Reliability of UNIX Utilities (Barton P. Miller, Lars Fredriksen, Bryan So)
  • KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs (Cristian Cadar, Daniel Dunbar, Dawson Engler – Stanford University)
  • Driller: Augmenting Fuzzing Through Selective Symbolic Execution (Nick Stephens, John Grosen, Christopher Salls, Andrew Dutcher, Ruoyu Wang, Jacopo Corbetta, Yan Shoshitaishvili, Christopher Kruegel, Giovanni Vigna – UC Santa Barbara)
• Does this work?

Debugging the Bug Finders

• Lots of work that claims to find bugs in programs
• **Lack of ground truth** makes it very difficult to evaluate these claims
  • If Coverity finds 22 bugs in my program, is that good or bad?
  • What are the false positive and **false negative** rates?
• Some existing bug corpora exist, but they have many problems:
  • Synthetic (small) programs
  • Don't always have triggering inputs
  • Fixed size – tools can "overfit" to the corpus

What About Real Vulnerabilities?

• Real vulnerabilities with proof-of-concept exploits are essentially what we want
• But there just aren't that many of them, and finding new ones is expensive!
<table> <thead> <tr> <th>Software</th> <th>Price Range</th> </tr> </thead> <tbody> <tr> <td>Adobe Reader</td> <td>$5,000–$30,000</td> </tr> <tr> <td>Mac OS X</td> <td>$20,000–$50,000</td> </tr> <tr> <td>Android</td> <td>$30,000–$60,000</td> </tr> <tr> <td>Flash or Java Browser Plug-ins</td> <td>$40,000–$100,000</td> </tr> <tr> <td>Microsoft Word</td> <td>$50,000–$100,000</td> </tr> <tr> <td>Windows</td> <td>$60,000–$120,000</td> </tr> <tr> <td>Firefox or Safari</td> <td>$60,000–$150,000</td> </tr> <tr> <td>Chrome or Internet Explorer</td> <td>$80,000–$200,000</td> </tr> <tr> <td>iOS</td> <td>$100,000–$250,000</td> </tr> </tbody> </table>

Forbes, 2012

Debugging the Bug Finders

• Existing corpora are fixed size and static – it's easy to optimize to the benchmark
• Instead we would like to automatically create bug corpora
• Take an existing program and automatically add new bugs into it
• Now we can measure how many of our bugs they find to estimate effectiveness of bug-finders

Goals

• We want to produce bugs that are:
  • **Plentiful** (can put 1000s into a program easily)
  • **Distributed** throughout the program
  • Come with a **triggering input**
  • Only manifest for a **tiny fraction of inputs**
  • Are likely to be **security-critical**

Sounds Simple... But Not

• Why not just change all the `strncpy`s to `strcpy`s?
• Turns out this breaks most programs for every input – trivial to find the bugs
• We won't know how to trigger the bugs – hard to prove they're "real" and security-relevant
• This applies to most local, random mutations

Our Approach: DUAs

• We want to find parts of the program's input data that are:
  • **Dead**: not currently used much in the program (i.e., we can set them to arbitrary values)
  • **Uncomplicated**: not altered very much (i.e., we can predict their value throughout the program's lifetime)
  • **Available** in some program variables
• These properties try to capture the notion of *attacker-controlled data*
• If we can find these **DUAs**, we will be able to add code to the program that uses such data to trigger a bug

New Taint-Based Measures

• How do we find out what data is dead and uncomplicated?
• Two new taint-based measures:
  • *Liveness*: a count of how many times some input byte is used to decide a branch
  • *Taint compute number (TCN)*: a measure of how much computation has been done on some data

Dynamic Taint Analysis

• We use *dynamic taint analysis* to understand the effect of input data on the program
• Our taint analysis requires some specific features:
  • Large number of labels available
  • Taint tracks *label sets*: for c = a + b with a: {w,x} and b: {y,z}, we get c: {w,x,y,z}
  • Whole-system & fast (enough)
• Our open-source dynamic analysis platform, **PANDA**, provides all of these features: [https://github.com/moyix/panda](https://github.com/moyix/panda)

Taint Compute Number (TCN)

// a, b, n are inputs
1: int c = a + b;
2: if (a != 0xdeadbeef)
3:   return;
4: for (int i = 0; i < n; i++)
5:   c += s[i];

TCN measures how much computation has been done on a variable at a given point in the program.
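The TCN idea can be illustrated with a toy shadow-value tracker. This is only a sketch for intuition: the `Shadow` class and the max-plus-one propagation rule are simplifying assumptions, not PANDA's actual taint implementation.

```python
# Toy illustration of the taint-compute-number (TCN) idea: each shadow
# value records which input bytes taint it, and how many computation
# steps separate it from raw input. Illustrative only, not PANDA/LAVA.

class Shadow:
    def __init__(self, labels, tcn=0):
        self.labels = frozenset(labels)  # which input bytes taint this value
        self.tcn = tcn                   # depth of computation applied so far

def combine(x, y):
    """Model an arithmetic op: union the label sets, bump TCN past the max."""
    return Shadow(x.labels | y.labels, max(x.tcn, y.tcn) + 1)

# a and b are 4-byte inputs: a taints bytes 0-3, b taints bytes 4-7
a = Shadow(range(0, 4))
b = Shadow(range(4, 8))

c = combine(a, b)            # models c = a + b
d = combine(c, Shadow([]))   # one more operation on c

print(sorted(c.labels), c.tcn)  # c is tainted by bytes 0-7, TCN 1
print(d.tcn)                    # TCN grows with each operation: 2
```

A value like `c` here, with a full label set but a low TCN, is exactly the kind of dead, uncomplicated data LAVA looks for.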
Liveness

```
// a, b, n are inputs
1: int c = a + b;
2: if (a != 0xdeadbeef)
3:   return;
4: for (int i = 0; i < n; i++)
5:   c += s[i];
```

b: bytes {0..3}; n: bytes {4..7}; a: bytes {8..11}

<table> <thead> <tr> <th>Bytes</th> <th>Liveness</th> </tr> </thead> <tbody> <tr> <td>{0..3}</td> <td>0</td> </tr> <tr> <td>{4..7}</td> <td>n</td> </tr> <tr> <td>{8..11}</td> <td>1</td> </tr> </tbody> </table>

Liveness measures how many branches use each input byte.

Attack Points

An Attack Point (ATP) is any place where we may want to use attacker-controlled data to cause a bug. Examples: pointer dereference, data copying, memory allocation, ... In the current LAVA implementation we just modify pointer dereferences to cause buffer overflows.

Approach: Overview

1. Instrument the source with taint queries (Clang)
2. Run the instrumented program on the input corpus (PANDA record + taint analysis)
3. Find attacker-controlled data (DUAs) and attack points
4. Inject a bug into the program source, then compile and test with a modified input

LAVA Bugs

• Any (DUA, ATP) pair where the DUA occurs before the attack point is a potential bug we can inject
• By modifying the source to add new data flow from the DUA to the attack point, we can create a bug
• DUA + ATP = 🐜

LAVA Bug Example

• PANDA taint analysis shows that bytes 0–3 of `buf` on line 115 of `src/encoding.c` are attacker-controlled (dead & uncomplicated)
• From PANDA we also see that in `readcdf.c` line 365 there is a read from a pointer – if we modify the pointer value we will likely cause a bug in the program

```c
/* encoding.c */
115: } else if (looks_extended(buf, nbytes, *ubuf, ulen)) {
```

```c
/* readcdf.c */
365: if (cdf_read_header(&info, &h) == -1)
```

• When the input file data that ends up in `buf` is set to 0x6c6176c1, we add 0x6c6176c1 to the pointer `info`, causing an out-of-bounds access

### Evaluation: How Many Bugs?
<table> <thead> <tr> <th>Name</th> <th>Version</th> <th>Num Src Files</th> <th>Lines C code</th> <th>N(DUA)</th> <th>N(ATP)</th> <th>Potential Bugs</th> <th>Validated Bugs</th> <th>Yield</th> <th>Inj Time (sec)</th> </tr> </thead> <tbody> <tr> <td>file</td> <td>5.22</td> <td>19</td> <td>10809</td> <td>631</td> <td>114</td> <td>17518</td> <td>774</td> <td>38.7%</td> <td>16</td> </tr> <tr> <td>readelf</td> <td>2.25</td> <td>12</td> <td>21052</td> <td>3849</td> <td>266</td> <td>276367</td> <td>1064</td> <td>53.2%</td> <td>354</td> </tr> <tr> <td>bash</td> <td>4.3</td> <td>143</td> <td>98871</td> <td>3832</td> <td>604</td> <td>447645</td> <td>192</td> <td>9.6%</td> <td>153</td> </tr> <tr> <td>tshark</td> <td>1.8.2</td> <td>1272</td> <td>2186252</td> <td>9853</td> <td>1037</td> <td>1240777</td> <td>354</td> <td>17.7%</td> <td>542</td> </tr> </tbody> </table>

• We ran four open-source programs, each on a single input, and generated candidate bugs
• Because validating all possible bugs would take too long, we instead validated a random sample of 2000 per program
• **Result:** extrapolating from the yield numbers, a single run gives us up to ~200,000 real bugs

### Evaluation: What Influences Yield?

<table> <thead> <tr> <th>mTCN \ liveness</th> <th>[0, 10)</th> <th>[10, 100)</th> <th>[100, 1000)</th> <th>[1000, +inf)</th> </tr> </thead> <tbody> <tr> <td>[0, 10)</td> <td>51.9%</td> <td>22.9%</td> <td>17.4%</td> <td>11.9%</td> </tr> <tr> <td>[10, 100)</td> <td>–</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>[100, +inf)</td> <td>–</td> <td>–</td> <td>–</td> <td>0</td> </tr> </tbody> </table>

• TCN strongly affects yield – no bugs that involved TCN greater than 10 were usable
• Liveness has a weaker correlation with yield – even fairly live data can sometimes be used if TCN is low

Evaluation: Can Tools Find Them?
• We took two open-source bug-finding tools and tried to measure their success at finding LAVA bugs:
  • A coverage-guided fuzzer (FUZZER)
  • A symbolic execution and constraint-solving tool (SES)
  • (Actual names withheld since this is just a preliminary study)
• FUZZER and SES are both state-of-the-art, high-profile tools. For each tool, we expended significant effort on careful setup and use, constructed tiny example buggy programs to verify that each tool could find bugs of the injected type, and confirmed that none of the inputs in the original corpus trigger our injected bugs.

Results: Specific-Value Triggers

<table> <thead> <tr> <th>Program</th> <th>Total Bugs</th> <th>Unique Bugs Found: FUZZER</th> <th>Unique Bugs Found: SES</th> <th>Found Combined</th> </tr> </thead> <tbody> <tr> <td>uniq</td> <td>28</td> <td>7</td> <td>0</td> <td>7</td> </tr> <tr> <td>base64</td> <td>44</td> <td>7</td> <td>9</td> <td>14</td> </tr> <tr> <td>md5sum</td> <td>57</td> <td>2</td> <td>0</td> <td>2</td> </tr> <tr> <td>who</td> <td>2136</td> <td>0</td> <td>18</td> <td>18</td> </tr> <tr> <td>Total</td> <td>2265</td> <td>16</td> <td>27</td> <td>41</td> </tr> </tbody> </table>

Less than 2% of injected bugs found.

Results: Range-Triggered Bugs

For bugs triggered by any value in a range (range sizes from 2⁰ up to 2²⁸ possible trigger values), both FUZZER and SES find a larger fraction of the injected bugs as the trigger range widens, but detection rates stay below 10% in all cases.

Results: Realism

• The burning question in everyone's mind now: are these bugs realistic?
• This is hard to measure, in part because realism is not a well-defined property!
• Our evaluation looks at:
  • How injected bugs are distributed in the program (Fig. 10: normalized DUA trace location; Fig. 11: normalized ATP trace location)
  • What proportion of the trace has normal data flow (Fig. 12)
• Ultimately, the best test of realism will be whether it helps bug-finding software get better

Limitations and Caveats

• General limitations:
  • Some types of vulnerabilities probably can't be injected using this method – e.g., weak crypto bugs
  • More work is needed to see if these bugs can improve bug-finding software
• Implementation limits:
  • Currently only works on C/C++ programs on Linux
  • Only injects buffer overflow bugs
  • Works only on source code

Future Work

• Continuous on-line competition to encourage self-evaluation
• Use in security competitions like Capture the Flag to re-use and construct challenges on-the-fly
• Improve and assess realism of LAVA bugs
• More types of vulnerabilities (use after free, command injection, ...)
• More interesting effects (prove exploitability!)
Conclusions

• Presented a new technique that is capable of quickly injecting massive numbers of bugs
• Demonstrated that current tools are not very good at finding these bugs
• If these bugs prove to be good stand-ins for real-world vulnerabilities, we can get huge, on-demand bug corpora

Questions?
Chapter 12. What’s Next

Believe it or not, you’ve just started to dig into Logo. You’ve seen a lot. You have hopefully explored the many procedures and commands described in these pages. But keep one important thought in mind. The procedures offered in this book and on the diskette that comes with it are how others solved a problem. That doesn’t mean that these solutions are the best way to do things. They may not be the easiest or the most efficient way. These are just places for you to start on your own adventures.

How many of your own rabbit trails did you discover? Even though you may have found a bunch, there is so much more to discover. But at least now you have a few tools to use. Before we go, here are a few more things to think about.

From Two to Three Dimensions

Here's a challenge for you. Draw the pattern of a soccer ball on the screen. The first thing you see, looking at a soccer ball, is a bunch of hexagon shapes. When some 3rd and 4th grade computer club members were asked to draw this pattern on the screen, they thought it would be easy.

TO SOCCER.BALL :DIS
REPEAT 6 [REPEAT 6 [FD :DIS RT 60] FD :DIS LT 60]
END

The boys thought that all they had to do was draw a series of hexagons. But the center was a pentagon, not a hexagon. Try their SOCCER.BALL procedure, the one above. It's not quite right, is it?

The girls' team was the first to figure out that they could not make the soccer pattern on the screen as it appears on the ball. They had to flatten it out. At first, they thought this procedure was wrong. But then they discovered it was really correct.

TO SOCCER :DIS
REPEAT 5 [REPEAT 6 [FD :DIS RT 60] FD :DIS LT 72]
END

The girls printed twelve of their patterns, colored them, taped them together, and made their own soccer ball. When they were finished, they decided it made a better pinata. So they filled it with candy and had a party.
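You can check with a little arithmetic that the girls' SOCCER procedure really does close up. This sketch is in Python rather than Logo, using a complex number for the turtle's position and another for its heading; it is only a check of the geometry, not part of the book's diskette.

```python
import cmath
import math

def soccer_path(d):
    """Simulate SOCCER :DIS: position and unit heading as complex numbers."""
    pos, heading = 0 + 0j, 1 + 0j
    for _ in range(5):
        for _ in range(6):        # REPEAT 6 [FD :DIS RT 60]
            pos += d * heading
            heading *= cmath.exp(-1j * math.radians(60))  # RT turns clockwise
        pos += d * heading        # FD :DIS
        heading *= cmath.exp(1j * math.radians(72))       # LT 72
    return pos, heading

pos, heading = soccer_path(60)
print(abs(pos), abs(heading - 1))  # both are (nearly) zero: the pattern closes
```

Each inner loop draws a closed hexagon, and the five LT 72 turns add up to a full 360 degrees, so the turtle ends exactly where it started.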
When this story was published in the former YPLA newspaper, Turtle News, different groups and classes around the world had fun with it. Here’s one young person’s response. His challenge was to reduce the pattern to the fewest number of parts. Two was the best he could do.

```
TO START
ADAM'S.SOCCER.BALL
WAIT 100 CS M
PR [THIS IS THE FIRST SIDE.]
WAIT 100 CS CT M2
PR [TAPE THIS SIDE TO THE FIRST SIDE.]
END

TO ADAM'S.SOCCER.BALL
CT
PR [This Logo procedure is from Adam Johnson,]
PR [age 12, The Computer Learning Center,]
PR [Caldwell, Idaho 83605.]
END
```

TO M
RT 90 PU FD 180 PD
H FD 60 LT 90
P K H L P K2 H P2
P K H L P K2 H P2
P
END

This is the M printout.

TO K
FD 60 RT 72 FD 60 LT 180
END

TO H
REPEAT 6 [FD 60 RT 60]
END

TO L
FD 60 RT 60 FD 60 RT 42
END

TO P
LT 90 REPEAT 5 [FD 60 RT 72]
END

Rabbit Trail 25. Folded Paper Fun

Making the soccer ball out of paper is just one of many things you can do with Logo and folded paper. The computer club that made the first flattened soccer ball pattern found that you can make all sorts of three-dimensional objects from folded paper. How about a simple cube? This takes you from the two-dimensional square to a three-dimensional cube.

TO CUBE :D :X1 :Y1
HT PU SETXY :X1 :Y1 SETH 0 PD
REPEAT 4 [SQUARE :D SETX :X1 + :D MAKE "X1 XCOR]
MAKE "X1 XCOR - ( :D * 3 )
MAKE "Y1 YCOR - :D
PU SETXY :X1 :Y1 PD
REPEAT 3 [SQUARE :D FD :D]
END

TO CUBES :D :X1 :Y1
CUBE :D :X1 :Y1
CUBE :D :X1 + ( :D * 4 ) :Y1
END

TO SQUARE :D
REPEAT 4 [FD :D RT 90]
END

The group first cut out a number of cardboard squares. Then they taped them together to see what kind of shapes they could make. The next step was to transfer the pattern to the computer.
Making 3-D shapes from triangles really got interesting.

TO TETRAHEDRON :D
RT 30 TRI :D
MOVER :D TRI :D
MOVEL :D TRI :D
END

TO MOVER :D
RT 60 FD :D LT 60
END

TO MOVEL :D
LT 60 FD :D RT 60
END

TO TRIR :D
RT 60 FD :D TRI :D
END

TO TRI :D
REPEAT 3 [FD :D RT 120]
END

TO OCTAHEDRON :D
LT 30 TRI :D RT 30
TETRAHEDRON :D
LT 60 TRI :D
TRIR :D
TRIF :D
END

TO TRIF :D
FD :D RT 60 TRI :D
END

TETRAHEDRON and OCTAHEDRON are just the beginning of what you can do with Logo and a printer. Go ahead. Try these. Print them. Fold them up. And then design your own 3-D figures. The whole idea is to explore, to discover what you don’t know and then go find the answers.

"No, Morf. This is not about rotating or rolling your favorite Biplane. This is about moving shapes through space."

Remember this picture? You saw it in Chapter 2 and also in Chapter 9 of this book, only it didn’t have the dotted-line square in the picture then. The original idea was to use string and make a triangle; first from 12:00 o’clock to 3:00 o’clock, the center, and back to 12:00 o’clock. Next you moved to 1:00 o’clock and made the triangle shown in the picture. Then you moved to 2:00 o’clock and so on, all the way around the clock.

The triangles moved around an axis that was at the center of the string clock. When you did the same thing with squares, they moved around on an axis at the center of the square. Can you write Logo procedures that show the shapes rotating around an axis? Sure you can! Here’s a start.

TO START
CS
PR [THESE PROCEDURES ALLOW YOU TO SEE]
PR [TWO AND THREE DIMENSIONAL FIGURES]
PR [ROTATE AROUND AN AXIS.]
PR "
PR [ROTATE: ROTATE FOUR SQUARES AROUND]
PR [A CENTRAL POINT.]
PR "
PR [ROTATE.CUBE: ROTATE A CUBE AROUND A]
PR [CENTRAL POINT.]
PR "
PR [CENTER.ROTATE: ROTATE A SQUARE AROUND]
PR [A POINT IN THE CENTER OF THE PLANE.]
PR "
PR [ROTATE.SQUARE: ROTATE A SQUARE AROUND]
PR [A POINT OUTSIDE THE SQUARE.]
END

TO ROTATE
ROTATE.REC 0
END

TO ROTATE.REC :ANGLE
CS
DRAW.ANG :ANGLE
ROTATE.REC :ANGLE + 3
END

TO DRAW.ANG :ANGLE
RECT 100 ( 100 * COS :ANGLE )
RECT 100 ( 100 * COS ( :ANGLE + 90 ) )
RECT 100 ( 100 * COS ( :ANGLE + 180 ) )
RECT 100 ( 100 * COS ( :ANGLE + 270 ) )
END

TO ROTATE.SQUARE
ROTATE.SQ 0
END

TO ROTATE.SQ :ANGLE
CS
DRAW.SQUARE ( 100 * ( COS :ANGLE ) ) ~
  ( 50 * ( COS ( :ANGLE + 90 ) ) )
ROTATE.SQ ( :ANGLE + 5 )
END

TO DRAW.SQUARE :WIDTH :CENTER
PU HOME RT 90 FD :CENTER LT 90 PD
CENTERRECT 100 :WIDTH
END

TO ROTATE.CUBE
HT
ROT.CUBE 0
END

TO ROT.CUBE :ANGLE
CS
DRAW.SQUARE ( 100 * ( COS :ANGLE ) ) ( 50 * ( COS ( :ANGLE + 90 ) ) )
DRAW.SQUARE ( 100 * ( COS ( :ANGLE + 90 ) ) ) ( 50 * ( COS ( :ANGLE + 180 ) ) )
DRAW.SQUARE ( 100 * ( COS ( :ANGLE + 180 ) ) ) ( 50 * ( COS ( :ANGLE + 270 ) ) )
DRAW.SQUARE ( 100 * ( COS ( :ANGLE + 270 ) ) ) ( 50 * ( COS ( :ANGLE + 360 ) ) )
ROT.CUBE :ANGLE + 5
END

TO CENTERRECT :LENGTH :WIDTH
CENTER.CORNER :LENGTH :WIDTH
RT 180
RECT :LENGTH :WIDTH
RT 180
CENTER.CORNER :LENGTH :WIDTH
END

TO CENTER.ROTATE
CENTER.ROTATE.RECT 0
END

TO CENTER.ROTATE.RECT :ANGLE
CLEARSCREEN
DRAW.CENTER.RECT ( 100 * ( COS :ANGLE ) )
CENTER.ROTATE.RECT ( :ANGLE + 1 )
END

TO DRAW.CENTER.RECT :WIDTH
CENTER.RECT 100 :WIDTH
END

TO RECT :LENGTH :WIDTH
REPEAT 2 [FD :LENGTH RT 90 FD :WIDTH RT 90]
END

TO CENTER.CORNER :LENGTH :WIDTH
PU LT 90 FD :WIDTH / 2 LT 90 FD :LENGTH / 2 PD
END

TO CENTER.RECT :LENGTH :WIDTH
CENTER.CORNER :LENGTH :WIDTH
RT 180
RECT :LENGTH :WIDTH
RT 180
CENTER.CORNER :LENGTH :WIDTH
END

These procedures show squares and cubes rotating around an axis. They also give you another example of trigonometry at work. How about a challenge or two?

1. These procedures show the "side view" of the shapes rotating around a vertical axis. Write procedures to show a top view of the shapes rotating around the axis, as if you were looking down on the shapes.
2. Write procedures for other shapes, such as a rotating triangle.
Use different types of triangles.
3. What about moving a pyramid through space? Well, maybe you had better leave that job to the next project.

How About Turtle CAD?

When some junior high students saw the work that the third grade students had done creating and folding soccer ball patterns, they wondered if it would be possible to work in three dimensions on the Logo screen. They were thoroughly familiar with the two dimensions of the x-y coordinate system. Could this be expanded to serve three dimensions: X, Y, and Z? Yes, it can. In addition, the resulting procedure offers a good look at property lists, an often confusing feature of Logo.

In the procedure listed on the following pages (ECONOBOX.LGO on the diskette), the basic unit is the coordinate point as defined by the POINT procedure. Points have letter names and x, y, and z coordinates to position them in three-dimensional space. Take a look at the DIAMOND and CAR procedures. To display a 3-D object, you must define each point in space and each line. Once you have defined all the required points by name and position, you can construct shapes like this one.

The FIGURE procedure takes the shape name and a list of two-point lists; for example, [[A B][A C][A E][B F][B D][C D][C G]...]. The two-point lists represent the line segments of the shape, with each letter representing an endpoint.

Here’s a simple procedure to develop a pyramid. Each point is defined along with the lines linking those points.

TO PYRAMID
POINT "A [0 0 0]
POINT "B [0 0 50]
POINT "C [50 0 50]
POINT "D [50 0 0]
POINT "E [25 50 25]
FIGURE "PYRAMID [[A B][B C][C D][D A][A E][B E][C E][D E]]
END

The procedure allows you to create as many shapes as you want. Each can be as complex as you want. But only one can be manipulated at a time. Once you have defined your shape, you can expand it or contract it, rotate it, magnify it, shrink it, and then restore it to its original shape. To expand a shape, use EXPAND.
Tell the procedure which shape to expand, which axis the expansion will operate on, and how much to expand it. MAGNIFY is very similar to EXPAND. However, you don't specify an axis, since the figure is magnified in all directions. ROTATE operates on a plane: xy, xz, or yz. Specify the shape, the plane, and the degrees of rotation you want to see. As you move your shape through space, the turtle remembers the position of your shape and moves it from its last position. When you want to start over with a new shape, or start from your shape's original position, use RESTORE "<figure name>.

To get you started, there are three examples provided in the procedure below: a diamond, a pyramid, and an econobox-like car. The entire procedure has been printed here so you can follow it more easily. Type DIAMOND or CAR to see a front view of the figures. Then rotate the figures using commands such as:

```
ROTATE "<figure name> "XY 45
ROTATE "<figure name> "YZ 30
```

Now you're ready to start off on your own.
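Here is the kind of session you can try once the listing below has been typed in or loaded. The shape names come from the procedures themselves (CAR defines the figure AUTO); the amounts are just examples:

```
CAR                   ; define the points and draw the econobox
ROTATE "AUTO "XY 45   ; turn the car 45 degrees in the xy plane
MAGNIFY "AUTO 2       ; twice as big in every direction
EXPAND "AUTO "Y 2     ; stretch it along the y axis
RESTORE "AUTO         ; back to the original car
```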
TO DIAMOND
POINT "A [60 0 60]
POINT "B [0 51.961 51.961]
POINT "C [30 51.961 103.92]
POINT "D [90 51.961 103.92]
POINT "E [120 51.961 51.961]
POINT "F [90 51.961 0]
POINT "G [30 51.961 0]
POINT "I [30 81.961 51.961]
POINT "J [45 81.961 77.941]
POINT "K [75 81.961 77.941]
POINT "L [90 81.961 51.961]
POINT "M [75 81.961 25.98]
POINT "H [45 81.961 25.98]
FIGURE "GEM [[A B] [A C] [A D] [A E] [A F] [A G] ~
[B C] [B I] [C J] [C D] [D K] [D E] [E L] [E F] ~
[F M] [F G] [G H] [G B] [I J] [J K] [K L] [L M] ~
[M H] [H I]]
END

TO CAR
POINT "A [0 0 0]
POINT "B [0 20 0]
POINT "C [40 20 0]
POINT "D [40 0 0]
POINT "E [0 20 40]
POINT "F [0 40 40]
POINT "G [40 40 40]
POINT "H [40 20 40]
POINT "I [0 20 80]
POINT "J [0 40 80]
POINT "K [40 40 80]
POINT "L [40 20 80]
POINT "M [0 0 120]
POINT "N [0 20 120]
POINT "O [40 20 120]
POINT "P [40 0 120]
POINT "Q [0 10 0]
POINT "R [10 10 0]
POINT "S [10 20 0]
POINT "T [40 10 0]
POINT "U [30 10 0]
POINT "V [30 20 0]
FIGURE "AUTO [[A B] [A D] [A M] [B C] [S R] [R Q] ~
[B E] [C D] [T U] [U V] [C H] [D P] [P O] [P M] ~
[O L] [O N] [N M] [N I] [I J] [I L] [J F] [J K] [K L] ~
[K G] [G H] [G F] [F E] [E H]]
END

TO POINT :POINTNAME :COORDS
MAKE :POINTNAME :COORDS
PPROP :POINTNAME "POINT "TRUE
PPROP :POINTNAME "ORIG :COORDS
END

TO FIGURE :SHAPENAME :LP
IF (GPROP :SHAPENAME "POINT) = "TRUE ~
[(PR :SHAPENAME [IS ALREADY A POINT NAME.]) STOP]
MAKE :SHAPENAME :LP
PPROP :SHAPENAME "FIGURE "TRUE
MAKE "SX (WORD :SHAPENAME "PTS)
MAKE :SX []
MAKE "N9 1
REPEAT COUNT :LP ~
[TSF :N9 TSL :N9 MAKE "N9 :N9 + 1]
MAKE "MATRIX [1 0 0 0 1 0 0 0 1]
DRAW :SHAPENAME
(PR :SHAPENAME [IS NOW A SHAPE.])
END

TO DRAW :FIGURE
MAKE "S9 THING (WORD :FIGURE "PTS)
REPEAT COUNT :S9 [MAKE "P9 FIRST :S9 ~
MAKE :P9 (LIST (ITEM 1 :MATRIX) * (ITEM 1 THING :P9) + ~
(ITEM 2 :MATRIX) * (ITEM 2 THING :P9) + ~
(ITEM 3 :MATRIX) * (ITEM 3 THING :P9) ~
(ITEM 4 :MATRIX) * (ITEM 1 THING :P9) + ~
(ITEM 5 :MATRIX) * (ITEM 2 THING :P9) + ~
(ITEM 6 :MATRIX) * (ITEM 3 THING :P9) ~
(ITEM 7 :MATRIX) * (ITEM 1 THING :P9) + ~
(ITEM 8 :MATRIX) * (ITEM 2 THING :P9) + ~
(ITEM 9 :MATRIX) * (ITEM 3 THING :P9)) ~
MAKE "S9 BF :S9]
CS ST
MAKE "S9 THING :FIGURE
REPEAT COUNT :S9 [PU SETXY LIST BL THING ~
(FIRST FIRST :S9) PD SETXY LIST BL THING ~
(LAST FIRST :S9) MAKE "S9 BF :S9]
END

TO RESTORE :FIGURE
IF NOT (GPROP :FIGURE "FIGURE) = "TRUE ~
[(PR :FIGURE [IS NOT A SHAPE.]) STOP]
MAKE "N9 THING (WORD :FIGURE "PTS)
REPEAT COUNT :N9 [MAKE FIRST :N9 ~
GPROP (FIRST :N9) "ORIG MAKE "N9 BF :N9]
MAKE "MATRIX [1 0 0 0 1 0 0 0 1]
DRAW :FIGURE
END

TO ROTATE :FIGURE :AXIS :AMT
IF NOT MEMBERP :AXIS [XY XZ YZ] [PR ~
[THE AXIS MUST BE XY, XZ, OR YZ.] STOP]
IF NOT (GPROP :FIGURE "FIGURE) = "TRUE ~
[(PR :FIGURE [IS NOT A SHAPE.]) STOP]
IF :AXIS = "XY [MAKE "MATRIX (LIST ~
(COS :AMT) 0 - (SIN :AMT) 0 (SIN :AMT) ~
(COS :AMT) 0 0 0 1)]
IF :AXIS = "XZ [MAKE "MATRIX (LIST ~
(COS :AMT) 0 0 - (SIN :AMT) 0 1 0 ~
(SIN :AMT) 0 (COS :AMT))]
IF :AXIS = "YZ [MAKE "MATRIX (LIST 1 0 0 0 ~
(COS :AMT) 0 - (SIN :AMT) 0 (SIN :AMT) ~
(COS :AMT))]
DRAW :FIGURE
END

TO MAGNIFY :FIGURE :AMT
IF NOT (GPROP :FIGURE "FIGURE) = "TRUE ~
[(PR :FIGURE [IS NOT A SHAPE.]) STOP]
MAKE "MATRIX (LIST :AMT 0 0 0 :AMT 0 0 0 :AMT)
DRAW :FIGURE
END

TO EXPAND :FIGURE :AXIS :AMT
IF NOT MEMBERP :AXIS [X Y Z] [PR ~
[THE AXIS MUST BE X, Y, OR Z.] STOP]
IF NOT (GPROP :FIGURE "FIGURE) = "TRUE ~
[(PR :FIGURE [IS NOT A SHAPE.]) STOP]
IF :AXIS = "X [MAKE "MATRIX ~
(LIST :AMT 0 0 0 1 0 0 0 1)]
IF :AXIS = "Y [MAKE "MATRIX ~
(LIST 1 0 0 0 :AMT 0 0 0 1)]
IF :AXIS = "Z [MAKE "MATRIX ~
(LIST 1 0 0 0 1 0 0 0 :AMT)]
DRAW :FIGURE
END

Understanding Property Lists

For the longest time, property lists were a major source of confusion — until Logy and Morf discovered a way to read them. Put the PROPERTY of :PROPERTY, which has the value of :VALUE, with the name :NAME. It helped when they saw PPROP written as a procedure.
TO PPROP :NAME :PROPERTY :VALUE
MAKE (WORD :NAME CHAR 32 :PROPERTY) :VALUE
END

Another way to look at the PPROP procedure is to use some more familiar terms.

PPROP "TEXAS "CAPITAL "AUSTIN
PPROP "TEXAS "ABBREVIATION "TX
PPROP "TEXAS "CITIES [HOUSTON DALLAS AMARILLO EL PASO]
PPROP "TEXAS "REGION "HILL.COUNTRY

Put the property of CAPITAL with the value of AUSTIN with the name TEXAS. Put the ABBREVIATION TX with TEXAS. Put the cities of Houston, Dallas, Amarillo, and El Paso with Texas.

In the POINT procedure used on the previous pages, you have

PPROP :POINTNAME "POINT "TRUE
PPROP :POINTNAME "ORIG :COORDS

Put the property of POINT with the value of TRUE with the name :POINTNAME. Put the property of ORIG with the value of :COORDS with the name :POINTNAME. Take a look at the RESTORE procedure to see how this is used. It's not nearly as scary as you think!

Now that you have the properties defined, what can you do with them? For one thing, you can recall them using the GPROP procedure. That's exactly what's done in the RESTORE procedure.

TO GPROP :NAME :PROPERTY
OUTPUT THING (WORD :NAME CHAR 32 :PROPERTY)
END

This outputs the THING (the value) defined in the PPROP procedure. For example:

GPROP "TEXAS "CAPITAL results in AUSTIN.
GPROP (FIRST :N9) "ORIG results in the original coordinates for :NAME.

The THREED procedures are one good example of property lists. Play around with them on your own. You're bound to find other uses.

Bury and Unbury

BURY is one of those Logo primitives that is often ignored. But it can be very useful. Let's try something.

1. Load any procedure.
2. Type BURYALL and press Enter.
3. Type EDALL and press Enter. Where'd the procedures go?
4. Try to run the buried procedure. What happened?
5. Now load another procedure.
6. Type UNBURYALL and press Enter.
7. Type EDALL and press Enter. Both the procedures are now visible in the Editor, aren't they?

What this means is that you can bury certain conditions and then erase everything else.
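For instance, you might keep the things a game character is carrying in a buried procedure so they survive while everything else is erased and reloaded. Here is a small sketch; the names TAKE and CARRYING are made up for illustration:

```
TO TAKE :THING
MAKE "CARRYING LPUT :THING :CARRYING
END

MAKE "CARRYING []
BURY "TAKE
```

Now ERALL sweeps away the unburied procedures, but TAKE stays behind. Depending on your Logo, variables may need their own command; MSW Logo also provides BURYNAME for burying a variable.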
If you are writing an adventure game, you can have your character carry things from one situation, one procedure, to another. Experiment with these commands. You'll find lots of uses for them.

Why not write some color procedures and bury those? For example:

TO BLACK
OUTPUT [0 0 0]
END

BURY "BLACK

SETSC BLACK

The screen color turns black. If you do a number of these, you don't have to remember color numbers anymore. Use the names. Why not bury the tools you use regularly? See the Appendix for some tool and utility ideas.

Logo and Artificial Intelligence

The whole idea of artificial intelligence gets very confusing. It makes you wonder, just what is intelligence? And how can it be artificial? One of the things that makes humans intelligent is the ability to learn. And if we can teach a computer to learn, then maybe it's intelligent? This is not the place to discuss whether computers can or ever will be able to really learn. Leave that to the computer scientists and philosophers. For our purposes, computers of today don't really learn. It is the software that computers run that makes them appear to learn. So, if you look at it, the learning is really kind of artificial. Maybe we can call that artificial intelligence.

Have you ever played the game States and Capitals? Someone names a state. You have to name the capital of that state. This version of the game shows how the computer can appear to be learning.

TO STATEQUIZ
GREET
QUIZ
END

TO GREET
CLEARTEXT
PRINT [WHAT'S YOUR NAME?]
PRINT []
MAKE "NAME READLIST
PRINT []
PRINT SENTENCE "HI :NAME
PRINT []
END

TO QUIZ
ASK.CAPITAL
IF CHECKANSWER [QUIZ]
END

TO ASK.CAPITAL
MAKE "ITEM ((RANDOM COUNT :SLIST) + 1)
PRINT SENTENCE [WHAT'S THE CAPITAL OF] FIRST SELECT :ITEM :SLIST
END

TO CHECKANSWER
MAKE "CHECK READLIST
PRINT []
IF EMPTYP :CHECK [OP "TRUE]
IF :CHECK = [QUIT] [OP "FALSE]
TEST :CHECK = LAST SELECT :ITEM :SLIST
IFTRUE [PRINT (SENTENCE [THAT'S RIGHT,] :NAME)]
IFFALSE [PRINT (SENTENCE "SORRY :NAME "IT'S LAST SELECT :ITEM :SLIST)]
PRINT []
OP "TRUE
END

TO SELECT :N :LIST
TEST :N = 1
IFTRUE [OUTPUT FIRST :LIST]
IFFALSE [OUTPUT SELECT (:N - 1) (BUTFIRST :LIST)]
END

TO TEACH
CLEARTEXT
TYPE [WHAT IS THE STATE?]
MAKE "QUEST READLIST
PRINT []
(TYPE SENTENCE [WHAT IS] :QUEST ['S CAPITAL?])
MAKE "ANSWER READLIST
MAKE "GROUP []
MAKE "GROUP LPUT :QUEST :GROUP
MAKE "GROUP LPUT :ANSWER :GROUP
CLEARTEXT
PRINT []
PRINT [SHALL I ADD THEM TO THE LIST?]
TYPE [(Y OR N) ?]
IF RC = "Y [MAKE "SLIST LPUT :GROUP :SLIST]
PRINT []
PRINT [< < < < NEW LIST > > > >]
PRINT []
SHOW :SLIST
PRINT []
TYPE [ADD MORE? (Y OR N)]
IF RC = "Y [TEACH] [QUIZ]
END

TO INIT
MAKE "SLIST []
END

By now you should have little if any trouble figuring out how this procedure works. Sure, it might take some time. But you can do it. The main feature of this game is "lists within lists within lists." First, there is the list of states and capitals — SLIST. If you ever want to erase this list and start over, type INIT. Second, there is a list that matches each state with its capital — GROUP. Third, there is a list of each state generated from the variable :QUEST and one for each capital that comes from :ANSWER. Together, these look like this:

MAKE "SLIST [[[Oklahoma] [Oklahoma City]] [[New York] [Albany]] ~
[[Texas] [Austin]] [[Massachusetts] [Boston]] [[California] [Sacramento]]]

Logo "learns" new states and capitals from the TEACH procedure. The first thing that TEACH does is ask you to create the variables :QUEST and :ANSWER.
It then creates a new, empty list named GROUP.

MAKE "GROUP []

Next, it adds the state (:QUEST) to the :GROUP list.

MAKE "GROUP LPUT :QUEST :GROUP

LPUT and FPUT are interesting commands. They are used to add words or other lists to a list. For example:

LPUT "Logo [MSW] results in the list [MSW Logo].
FPUT "MSW [Logo] also results in the list [MSW Logo].

In the case of States and Capitals, LPUT tells Logo to add :QUEST at the end of the list :GROUP. Once you have the state listed, you need to add the capital.

MAKE "GROUP LPUT :ANSWER :GROUP

This line adds :ANSWER as the second list within the list :GROUP.

MAKE "SLIST LPUT :GROUP :SLIST

And finally, this line adds the list of two lists to the master list :SLIST.

Time to experiment. Change the TEACH procedure to add a third element to the GROUP list, maybe the county of the capital or the population of the state? How would you change the other procedures to ask about that third element? Go ahead — try it. It's really not that hard. Rather than look for the LAST element of the GROUP list, you might want to look for the LAST BUTLAST element, or the FIRST BUTFIRST element, or select an ITEM from a list that matches something else.

More AI Applications

Simulations

Before we go too far, there is another realm to explore, the realm of simulations. These are ways to get the computer to act out your "What if?" wishes. Several years ago, Logy and Morf visited Tombstone, Arizona, the town made famous by the gunfight at the OK Corral. That's where Wyatt Earp and his brothers shot it out with the Clantons. Everything about the fight has been well documented, including where everyone involved was standing, who shot first, and who killed whom. Billy Clanton was the first to draw his pistol and fire. But because he was such a poor shot, Wyatt didn't shoot Billy. He shot Billy's brother with his first shot. Well, Morf asked, "What if Billy hadn't missed?" This raised an interesting question.
And so the two friends sat down at the computer and plotted the whole thing out. They made a simulation to answer their questions. You've heard about aircraft simulators. These are computer-controlled airplane cockpits that allow pilots to train in all sorts of emergency situations without ever leaving the ground. You may have gone to arcades where they have driving or flying games. These are simulators. And, of course, there are the flight simulator software programs you can buy.

Logo Science

Another interesting use of simulation is in analyzing behavior. For example, how would a mouse find its way out of a maze? Researchers discovered that one mouse, every time it came to a wall, turned right. It eventually found its way out. You'll find a behavior simulation on the disk that came with this book, BEHAVIOR.LGO. It offers three animal simulations: Find by Smell, Find by Sight, and Chase and Evade. The "Find" simulations are rather straightforward. The Chase and Evade simulation is fun. Will Find.By.Sight catch Avoid.By.Smell before Avoid can get out of the playing area?

Here's a challenge! BEHAVIOR.LGO uses one turtle to simulate the actions of two. Change the procedures to actually use two turtles.

Another interesting simulation can be found in CELLS.LGO. This is an example of Logo used in medical education. The START procedure lists a message that was posted on CompuServe's Logo Forum. CELLS.LGO is the response. Three groups of cells are drawn randomly on the screen. The turtle always seeks out the red cells, on which additional cells are grown.

Your Challenge

Create an AVOID procedure. Currently, the turtle will move right over the green and blue cells to find the red ones. Your job is to create a procedure that makes the turtle move around the green and blue cells while still seeking the red cells. Yes, it can be done. Give it a try.

Seems we just got started and here we are. And there are so many other things to do.
And what about all the other games and projects? Well, there has to be something left for you to explore on your own. You can start with the procedures on the diskette that came with this book. This is the part of Logo that Logy and Morf like best: exploring new ways to do things, finding new and better ways to make things work. In fact, by now you can do just about anything you want with Logo. Doesn't that make you feel great?

You'll find a number of interesting Logo sites on the Internet. There's Logy and Morf's Home Page at http://www.cyberramp.net/~jmul Send e-mail to jmul@cyberramp.net. Other addresses include http://www.softronix.com for information on MSW Logo and other products from George Mills. There is a Logo newsgroup at comp.lang.logo. There's also an on-line Logo mailing list, actually a discussion group. To subscribe, send "subscribe logo-l" to majordomo@gsn.org

Most important! Whatever you do, enjoy your very own GREAT LOGO ADVENTURE!
Interconnection of Game Worlds and Physical Environments in Educational Settings Raphael Zender, Ulrike Lucke University of Rostock Institute of Computer Science Chair for Computer Architecture Joachim-Jungius-Str. 9 18059 Rostock +49 – 381 – 498 – 7550 firstname.lastname@uni-rostock.de Dennis Maciuszek, Alke Martens University of Rostock Institute of Computer Science Chair for e-Learning and Cognitive Systems Albert-Einstein-Str. 21 18059 Rostock +49 – 381 – 498 – 7650 firstname.lastname@uni-rostock.de ABSTRACT This article describes the use of a service-oriented architecture to bridge the gap between different environments in the context of game-based learning. The basic mechanism is a unified service bus infrastructure. This is verified by a combination of face-to-face scenarios with online tools and virtual worlds. The added value of this concept is shown based on an exemplary scenario for game-based learning. Results of a prototypical evaluation are stated to verify the validity of the approach. Furthermore, the applications and possible extensions of the developed system are discussed. Categories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications K.8.0 [Personal Computing – General]: Games General Terms Management, Design Keywords eLearning, Game-based Learning, Virtual World, Service-Oriented Architecture. 1. INTRODUCTION Game-based scenarios are an innovative approach to enhance the learning process in terms of an improved learning outcome. They can be realized in different educational settings. Currently, the most popular scenario is computer-aided face-to-face learning. This includes the extension of spoken and written content in the classroom by digital media. Usually, the lecturer is required to manually upload his material to a website or learning platform. – This marks a shift to online learning, where learners are able to individually access and work through digital content. 
Thus, the synchronous, lecture-based learning process is turned into an asynchronous, document-based scenario that is decoupled in time and place. – In virtual learning, this is enriched with the social presence of teachers and other learners. 3D worlds are a popular example, but they are expensive to create and to handle because of their complexity. – This drawback is compensated for by digital lecture recordings (eLectures) containing the slides, annotations, audio, and video of the lecturer. Thus, they combine social presence with easy development. – All of these settings can be enriched with various communication and collaboration facilities. However, they remain decoupled from each other. From the perspective of game-based learning, the mentioned approaches allow for the realization of game-based scenarios or systems – but as described so far, they are merely innovative teaching and training scenarios. The idea of integrating aspects of games and learning is not new [1]. Nowadays, game-based learning takes place in kindergartens and schools, being a natural part of the related pedagogies [1]. When it comes to adult education, the notion of games is not commonly used in the context of learning. Adult learning is not about playing games; it has to be more serious. In contrast, it is highly desirable to obtain the effects of successful games (e.g. motivation, flow, and enthusiasm) in teaching and training as well. More difficult to answer (and a topic of controversial discussion) is the question of what exactly game-based strategies in computer teaching and training scenarios can be [2]. Classical strategies for educational settings in adult education are case-based training, narrative storylines, and role play. They are used, for example, in university education in medicine [3], in law, and in business sciences, to name only a few. These approaches have existed for a very long time [4]; they are related neither to computers nor to game-based learning.
However, they can be perceived as classical forms of realizing computer-based teaching and training. A closer look and a comparison with basic game strategies [1] reveal that the three mentioned approaches also occur in games development: games are often based on “cases”, i.e. descriptions of a specific situation or environment (e.g. the starting situation of Settlers of Catan); game content is often provided in a narrative manner (e.g. storytelling); and quite a few games are role-playing games (e.g. World of Warcraft, Dungeons and Dragons). Thus, these approaches lend themselves to game-based learning. In a virtual world setting, participants (learners) can be given a specific training case, which can come in the shape of a game or in a playful manner. The content can be provided by the virtual teacher or tutor in a narrative way, and moreover, the participants can take over different roles. The task of the teacher is to orchestrate and organize the didactical (instructional) structure of the training unit (e.g., the course) and to decide whether or not a game-based approach is appropriate. The most important aspect in this context is what kind of knowledge is likely to be transferred from the game-based situation to the real world [16]—this is reflected in the learning goals. Usually, the resulting course will be a blend—a technical blend, but also an instructional blend between traditional and game-based education. Blended learning is an approach that combines different settings in time-share mode. The learning process is pre-divided into phases, each with its own organization. Face-to-face learning phases are basically used for guided lecture preparation and wrap-up. These are mixed with intensive self-learning phases in online or virtual environments. This separation into sequential processes strikes a compromise between ease of implementation and pedagogical needs.
However, a direct interconnection of educational settings is desirable for learners and feasible from a technical point of view. It allows, for instance, a seamless combination of synchronous scenarios during the lecture (with interaction between teacher and learner) and asynchronous scenarios before and after it (individual or collaborative preparation and wrap-up)—regardless of the platform used. This enhances learning comfort, increases the scope and quality of a lecture, and advances mobility and equality of opportunity for learners and lecturers. Current developments in this area can be divided into two groups:

- There exist a number of dedicated point-to-point connections between different platforms and tools, e.g., to automatically integrate lectures into learning management systems [5][6] or to mash up virtual worlds with other platforms [7][8]. They suffer from limited extensibility and complex maintenance.
- Systematic integrations of different platforms are rare, both for education [9][10] and for gaming [11][12]. Usually, they follow an approach of fundamental platform decomposition for a later flexible re-combination of modules. Here, object-, service-, or peer-to-peer-based architectures come into play. However, this is hard to realize with existing tools.

Considering state-of-the-art design principles and the sustainability of developments, a systematic integration is desirable. Services have proven to be a valid mechanism for the enhancement of existing platforms [13], if applied in a coarse-grained manner. Thus, we decided to develop a system for the service-based interconnection of different educational settings in terms of a generalized architecture for flexible, bi-directional interchange of media between various environments. The remainder of this article is organized in three main sections. First, the implemented game-based setting is described. Second, an overview of the infrastructure used to realize this scenario is given.
Third, results of an evaluation of the system are provided. The article concludes with a summary and future research directions.

2. SETTING

After some preparation, we began introducing game worlds into regular university courses in the summer term of 2009. The idea was that computer science students might find a video game context motivational, but also that game architectures would provide us with new facilities for the simulation of educational content as well as communication tools for computer-supported cooperative learning (CSCL). For instance, we are teaching artificial intelligence (AI) theory, which can be difficult for students to access if there is no real-world application. Autonomously moving robots are a suitable target for applying AI algorithms. If one does not have real robots or a suitable obstacle course at hand, or not enough robots, or if they are too expensive for student experiments, using a game world becomes an alternative [14]. In a 3D game, e.g., a sim game like The Sims, the player can create intelligent agents and have them act autonomously in complex, simulated and visualized environments. In addition, we wanted to be more flexible when it comes to learning locations and learning times. During projects, students should not be required to be present on campus all of the time. There should be facilities to collaborate on a task or to have a group meeting whenever needed. In the future, we wish to share lectures with other universities, teaching both presence and distance learners. Players of massively multiplayer online role-playing games (MMORPG), such as World of Warcraft, are used to entering another world whenever and from wherever they like. Collaboration with other players, who might be anywhere in the world, is something they take for granted and enjoy. To benefit from these advantages, we introduced game worlds into university education. We used the multi-user virtual environment (MUVE) Second Life.
Strictly speaking, Second Life is not a game. It may look like an MMORPG, but it does not come with game rules, challenges, or game objectives. Instead, it just provides the basic physics engine and the 3D virtual world visualization, and leaves the rest to the users. Users can create games within Second Life, just as it is possible to create virtual educational settings. From a teaching point of view, we are currently making use of Second Life in three different types of learning scenarios. The first use is that of a virtual laboratory [15]. We built an AI lab in Second Life—a virtual building in which students could place objects: virtual characters like robots, laboratory mice, monsters, or their own avatars, but also furniture and equipment, e.g., a maze. In the actual programming tasks tied to learning objectives in our AI course, the students outfit their characters with intelligent, autonomous behavior by first conceiving cognitive models, and then implementing these using the Linden Scripting Language. They can then execute the code, and watch a simulation of their models over space (the three dimensions of the virtual world) and time. To make the series of intended experiments more motivating, we enriched it with a background story involving escaped laboratory mice chased by a genetically engineered monster. A lab exercise was therefore also a game-like experience in which students programmed the behavior of the mice or that of the hunter. Next time we give this course, we intend to extend the game part. Notice that this scenario can be computer-aided face-to-face learning—a teacher in class states a modeling task, and the students access the virtual lab via their laptops—or online learning, when a student is doing homework in the virtual lab, accessed from anywhere—or even virtual learning, when two students in different places are working together. The second use is a virtual-learning scenario: a virtual meeting room. 
When the instructional part of the AI course was finished, students were given a project. They had to design and implement a game using Second Life. Each team was to create a level of the game in one of the five floors of the AI lab building. It had to feature at least one intelligent agent as an adversary of the player, as well as some decoration. To arrive at a consistent game, our ten students had two meetings, moderated by a teacher, in which they presented their concepts to the others, agreed on a storyline as well as on interfaces between the levels. For this, we used the ground floor of the lab building, and equipped it with seats for the avatars of the students, as well as virtual whiteboards, poster walls, and the like to act as cognitive tools [16] to make orientation and interaction in the multi-user game world more intuitive. Still, it was not the same as a face-to-face meeting. For instance, predefined avatar gestures are limited, and the group had to improvise ways of determining whose turn it was to speak. As it turned out, strong involvement was required on the part of the teacher to keep discussions focused and goal-directed. Technically, we mostly used text chat for communication. Speech fit smaller groups better. In three courses altogether, the AI course, one course on eLearning and one on Service-Oriented Architectures (SOA), we used the game world to a third end: as a virtual lecture hall. A face-to-face lecture is a social situation in which students and the teacher communicate and experience a shared involvement in a subject matter. This is difficult to recreate in basic online learning, where students just download lecture notes from the Web. We recorded and made available live videos of our lectures, but this was still a restricted experience. Game worlds can provide a more social and intuitive setting. 
From the outside, our lecture room in Second Life is a replica of Rostock’s major landmark, the ‘teapot’, an oddly formed building at the beach and a centre of tourist activity. Inside, our virtual teapot looks like a conference hall, with a table, chairs, and a large canvas for multimedia presentations. Outside, a number of features, like a mysterious teleporting device, a movable boat, or a flying seagull, increase the feeling of being in a game. Inside, a student will meet avatars of fellow students as well as the teacher’s avatar. During its owner’s lecture, the teacher’s avatar stands in front of the canvas, where students can watch the slide presentation and listen to the talk, as the teacher’s voice is mapped onto the avatar. The teacher operates the presentation from the real-world lecture hall – a multimedia lab equipped with recording and communication technology – which is also filled with real students. The main challenges in this scenario are how to best deliver the face-to-face lecture to the distance learners, and how to establish communication between real-world and game-world locations. Notice the multimodal and cross-media aspects of this scenario. The present teacher can reach the distance learners, for instance, by talking or making gestures. This is realized as text, speech, avatar animation, displayed video, etc. The teacher might point out something on the slides using a virtual laser pointer or text marker within the video stream. Or, though this is more complicated to realize, he or she might direct the avatar to point a finger at a certain spot on the canvas. All in all, the requirements collected in our three pilot courses point to the question: In a blended, virtual learning scenario using game worlds, how do we best connect a present teacher and teaching material situated in a physical environment to a game ‘played’ by distance learners?

3. INFRASTRUCTURE

This section focuses on the technical infrastructure for realizing the game-based scenario.
It consists of three parts: a general overview of the interconnection architecture, and a more detailed look at the integration of on-site as well as virtual environments.

3.1 General Architecture

The basic paradigm implemented by our infrastructure is the Service-Oriented Architecture (SOA), which offers valuable potential not only for traditional software design but also in pervasive or immersive scenarios [17]. Relevant functionality of a system is identified on the basis of a comprehensive process model and provided as a service. Figure 1 provides an overview of the proposed architecture.

![Figure 1: Service-based interconnection architecture with adapters to various platforms and with exemplary services](image-url)

A SOA infrastructure is the core component of the system, represented by the University Service Bus. Here, various services can be deployed [18]. In our game-based scenario, services for the dissemination of learning content (uni-directional) as well as communication services (bi-directional) are the most important. Several target platforms can be integrated. So far, we have developed connectors to on-site facilities for face-to-face teaching/learning, to traditional eLearning platforms, and to virtual worlds. Other types of environments are possible, too. The focus of this article is on face-to-face and virtual environments, as depicted on the left-hand side of the figure. While the integration of additional platforms is comparatively easy, the implementation of the University Service Bus requires considerable effort.
It is responsible for:

- delivery of messages between service providers and consumers, including dynamic routing based on given requirements and conditions
- transformation of messages between different formats, protocols, and technologies
- monitoring features for surveillance of service usage and proper system functionality
- orchestration of available services, based on a given process model, in order to fulfill higher-level tasks (optional)
- security, i.e. encryption of messages, authentication of service providers and consumers, authorization of consumers, accounting of invoked functionality (optional)

In the given prototype, we included fundamental delivery and transformation mechanisms. A monitoring component is still missing. Optional components for orchestration and security are considered for later extensions, since they are not essential in our current scenarios of use. A special challenge was to bridge the gap between the existing technologies combined in today’s heterogeneous infrastructures. We developed and implemented a mechanism for multi-level interoperability at the network and service layers, realized in a so-called General Purpose Access Point [19]. At the network layer, communication functions are mediated between different technologies by plug-ins for Ethernet, WLAN, Bluetooth, and so on. At the service layer, an abstraction language unifies the capabilities of different technologies like Web Services, Jini, and so on. The University Service Bus manages the resulting unified services. As an example, this allows us to transparently exchange messages between avatars in the virtual world Second Life (based on Ethernet and Web Services) and on-site users with mobile phones (based on Bluetooth SDP), as shown in Figure 2.

### 3.2 Physical Environment

Typically, the on-site media equipment (notebooks/PCs, cameras, projectors, projection screens, speakers, microphones, DVD players, and many more) is directly managed by the lecturer or administrator.
A central media controller connects the available data/control interfaces of the devices. Typical commands for a lecture might be “connect output of rack PC with input of right canvas”, “connect output of front camera with input of left canvas”, “connect output of headset with input of speakers”, and “set volume of speakers to 60%”. These proprietary mechanisms of the controller had to be extended by a SOA interface for use within the service network. Therefore, a Java-based wrapper was developed that sends commands to the media controller on the one hand and exposes these functions as a Web Service on the other. Additionally, multimedia services for recording, streaming, and archiving of lectures have been developed. As an example, different audio and video sources can be combined and provided as a stream for external retrieval. It is possible to simultaneously record and stream a lecture. The physical environment acts as a provider of these services. For the user, it is fully transparent whether a stream is live or comes from the archive. Moreover, messaging services as shown in Figure 2 can be applied to live scenarios. Every device in the physical environment that is capable of receiving text messages provides this information, which is gathered by our infrastructure and provided as a service. This individual communication enables a wide range of collaboration scenarios, e.g. learning groups. Numerous other services (like entertainment or context) were developed, but are outside the focus of this article.

### 3.3 Virtual Environment

Most platforms provide an API to extend their functionality. Depending on the nature of the software, internal code modifications can be possible, too. This applies to several educational systems, but not to the virtual world Second Life used in our scenario. That’s why we had to develop a so-called surrogate that extends Second Life using its API and acts as a consumer/provider of services on its behalf.
Objects in the virtual world (like the canvas in our virtual meeting/lecture room, or the headsets of our avatars) are enhanced with specific functionality regarding our media and messaging services by use of the Linden Scripting Language.

**Figure 2:** Seamless exchange of messages between Second Life and mobile phones across different network and service technologies.

Via the surrogate, HTTP requests from virtual objects are translated to service messages for our SOA infrastructure, and vice versa. The HTTP request contains the name of the acting avatar, the requested service and its method, and optional parameters. Possible reactions to a response are a textual notification (e.g. for system acknowledgements or user messages) or changes in object appearance or behavior (e.g. for lecture streaming). The integration of Second Life closely interweaves the tangibility of real-life objects with elements in a virtual 3D world. Such intuitive interaction with the environment engages the cognitive capabilities of students to a much larger extent than traditional classroom settings, where learners typically act in a much more passive and less individual way.

4. EVALUATION

During summer term 2009 we evaluated the developed infrastructure for the distribution of eLectures in three courses with altogether 25 students. Half of them were involved in the development of a virtual game as described in section 2; the other students only used the eLecture services. In an initial survey we asked the students about their attitudes regarding innovative eLearning materials and settings. The answers ranged from open-minded (15%), to deliberate (35%), to sceptical (50%) – in contrast to conventional slides and scripts, which were generally rated as important. After finishing our tests, attitudes had changed: eLectures gained the same relevance as slides and scripts (almost 100%), and live transmissions and virtual lectures were also rated as important by most students (75%).
This demonstrates that the results of our evaluation are not biased by a general affinity of the students towards eLearning mechanisms or material. During the course we asked the students for their opinion on the virtual gaming scenario. The game itself was considered “a welcome breaking-up” and “motivational”. One student wrote that the game story, at first, seemed unnecessary to him, but demonstrated that the teachers were highly motivated. Another student confirmed that applying the AI theory to practical problems – and thus, the directly perceived experience – was strongly positive. Nevertheless, several negative comments were given on the features of Second Life, for instance: “It’s annoying to spend my time with the originalities of the Second Life engine and the Linden Scripting Language, rather than focussing on AI algorithms. But, as a computer scientist, we should get along with this. So, it’s interesting to try this out.” The frequency of using the service-based dissemination of eLectures rose continuously during the tests (compared to traditional linking of material in an eLearning platform), even though students had initially signaled no willingness to make use of these mechanisms. Comparing different transmission techniques, streaming was seen as appropriate especially for live participation and frequent re-working of a lecture, while download was considered helpful mostly for the targeted revision of lectures prior to exams. As Figure 3 shows, the technical solution was rated mostly average to good. In general, the eLecture integration into Second Life was rated worse than the comparable version for the learning platform Stud.IP. The main weak points mentioned were the image and sound quality as well as the interaction facilities.
This helped us to fine-tune the resolution and sampling rate in the streaming server, and to define some guidelines for lecturers on how to deal with the annotation features of the recording software and with interactive elements in the setting. Moreover, the students provided a number of suggestions for improvement, especially regarding the Second Life solution and the overall organization of the lecture. We are going to repeat and extend the tests in broader scenarios throughout the upcoming terms.

5. CONCLUSION AND FURTHER WORK

The article described our development of a service-based infrastructure for educational scenarios. After successful tests of the prototype in a number of individual events, we evaluated the system in two regular lectures as well as in a game-based setting in summer semester 2009. We obtained positive results regarding the acceptance by the students and the benefits associated with the services. Current work covers the completion of the University Service Bus with monitoring and security mechanisms as well as the extension of the available services. Moreover, we are putting considerable effort into integrating context information into the services for adaptive system behavior.

![Figure 3: Student ratings for the quality of integration between eLectures and online/virtual platforms for the gaming scenario](image-url)

Our final goal is the realization of a pervasive university [20]. Future directions of our work include: gesture recognition of the lecturer and mapping to the avatar in the virtual world; integration with technologies for ambient assisted environments (e.g. for physically exhausting games); evaluation in broader scenarios including users from outside the university (school, industry, interested public); and integration of open-source 3D virtual worlds like OpenSim or Project Wonderland. Moreover, we intend to transfer our application scenario from intra-institutional to cross-institutional cooperation.
Given the spatial distance, we expect higher motivation among the participants: physical meeting facilities are missing, so there is a genuine need to interact virtually. The game-based strategy in particular seems promising for bridging the gap between different learning styles, educational standards, or cultures.

6. ACKNOWLEDGMENTS

We are deeply grateful to a number of undergraduate students, whose implementations provided the technical foundation for the scenario described here. This work is partially supported by the German Research Foundation under grant GRK-1424.

7. REFERENCES
Network Layer: Routing
CS 356 – University of Texas at Austin
Dr. David A. Bryan

Interplay between routing and forwarding

routing algorithm determines end-to-end path through network
forwarding table determines local forwarding at this router

<table> <thead> <tr> <th>header value</th> <th>output link</th> </tr> </thead> <tbody> <tr> <td>0100</td> <td>3</td> </tr> <tr> <td>0101</td> <td>2</td> </tr> <tr> <td>0111</td> <td>2</td> </tr> <tr> <td>1001</td> <td>1</td> </tr> </tbody> </table>

value in arriving packet’s header 1 2 3

Underlying Algorithms
• Looking at BRIEFLY. Should be familiar from (possibly multiple) earlier classes.
• Some excellent videos (University of Washington) on Dijkstra's algorithm, LS routing, and DV routing:
– Dijkstra's Algorithm: https://www.youtube.com/watch?v=elUoBUwIlpQ
– DV Routing: https://www.youtube.com/watch?v=ylzAefKENXY
– How to flood Updates: https://www.youtube.com/watch?v=eziOAAdaun_s
– LS Routing: https://www.youtube.com/watch?v=2_0AwfQWKUk
• See also pp. 363-383 of *Kurose and Ross*, Sections 4.5.1 and 4.5.2
• Will do one HW with a very simple DV example to be sure you work through it 😊

Graph abstraction

graph: $G = (N,E)$
$N = \text{set of routers} = \{ u, v, w, x, y, z \}$
$E = \text{set of links} = \{ (u,v), (u,x), (v,x), (v,w), (x,w), (x,y), (w,y), (w,z), (y,z) \}$

(Suspend disbelief for a bit…yes, we can't really do these algorithms over the actual $N$ and $E$ of the Internet…too big, but for now assume we do…)

aside: graph abstraction is useful in other network contexts, e.g., P2P, where $N$ is set of peers and $E$ is set of TCP connections

Graph abstraction: costs

c(x, x') = cost of link (x, x') e.g., c(w, z) = 5
cost could always be 1, or inversely related to bandwidth, or inversely related to congestion

cost of path $(x_1, x_2, x_3, ..., x_p) = c(x_1, x_2) + c(x_2, x_3) + ... + c(x_{p-1}, x_p)$

**key question:** what is the least-cost path between u and z?
**routing algorithm:** algorithm that finds that least-cost path

Routing algorithm classification

**Q: global or decentralized information?**
**global:**
• all routers have complete topology, link cost info
• “link state” algorithms
**decentralized:**
• router knows physically-connected neighbors, link costs to neighbors
• iterative process of computation, exchange of info with neighbors
• “distance vector” algorithms

**Q: static or dynamic?**
**static:**
• routes change slowly over time
**dynamic:**
• routes change more quickly
– periodic update
– in response to link cost changes

Dijkstra's algorithm
• net topology, link costs known to all nodes
– accomplished via “link state broadcast”
– all nodes have same info
• iterative: after k iterations, know least cost path to k destinations
• Should have seen this in (probably more than one) class already
• Result is tree with least-cost paths
• This tree is used to build the source node's forwarding table for routing traffic

Dijkstra's algorithm: example result

resulting shortest-path tree from u: resulting forwarding table in u:

<table> <thead> <tr> <th>destination</th> <th>link</th> </tr> </thead> <tbody> <tr> <td>v</td> <td>(u,v)</td> </tr> <tr> <td>x</td> <td>(u,x)</td> </tr> <tr> <td>y</td> <td>(u,x)</td> </tr> <tr> <td>w</td> <td>(u,x)</td> </tr> <tr> <td>z</td> <td>(u,x)</td> </tr> </tbody> </table>

Again, if you don't remember Dijkstra's algorithm, take a look at the pseudocode on page 367 and this video from University of Washington: https://www.youtube.com/watch?v=elUoB_UwIlpQ Convince yourself you understand it and can perform it for small networks!

Distance Vector algorithm
- Link State Algorithms required us to exchange information about the entire network topology, then determine best path. This isn't possible for a large ISP's network, much less the entire Internet.
- Distance Vector Algorithms instead allow us to obtain best paths without knowing about the topology of the broader network.
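Before moving on to distance vector, the link-state computation just described can be sketched in a few lines. The edge costs below are assumed for illustration (the classic six-node Kurose and Ross example; only some of these costs appear explicitly in these slides), but the resulting least-cost tree matches the forwarding table shown above.

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source: Dijkstra's algorithm with a binary heap."""
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}   # predecessor on the shortest-path tree
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist[node]:
            continue  # stale heap entry, already settled with a lower cost
        for neighbor, cost in graph[node].items():
            if d + cost < dist[neighbor]:
                dist[neighbor] = d + cost
                prev[neighbor] = node
                heapq.heappush(pq, (dist[neighbor], neighbor))
    return dist, prev

# Example topology over N = {u, v, w, x, y, z}; costs beyond c(w,z)=5 are assumed
graph = {
    "u": {"v": 2, "x": 1, "w": 5},
    "v": {"u": 2, "x": 2, "w": 3},
    "x": {"u": 1, "v": 2, "w": 3, "y": 1},
    "w": {"u": 5, "v": 3, "x": 3, "y": 1, "z": 5},
    "y": {"x": 1, "w": 1, "z": 2},
    "z": {"w": 5, "y": 2},
}
dist, prev = dijkstra(graph, "u")
print(dist["z"])  # 4, via u-x-y-z
```

Following `prev` back from any destination to u yields the first hop, which is how the forwarding table is derived: everything except v leaves u on link (u,x).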
- Still keep a table of cost to each node in N, but we never really know about the structure to get there...
- In the real world we will need to reduce N...

Distance vector algorithm

Bellman-Ford equation (dynamic programming)

let
\[ d_x(y) := \text{cost of least-cost path from } x \text{ to } y \]
then
\[ d_x(y) = \min_v \{ c(x,v) + d_v(y) \} \]
- \( c(x,v) \): cost to neighbor \( v \)
- \( d_v(y) \): cost from neighbor \( v \) to destination \( y \)
- \( \min \) taken over all neighbors \( v \) of \( x \)

Bellman-Ford example

For calculating the cost from $u$ to $z$, we can use a path through $w$, $v$, or $x$; clearly, $d_v(z) = 5$, $d_x(z) = 3$, $d_w(z) = 3$

B-F equation says:
$$d_u(z) = \min \{ c(u,v) + d_v(z), c(u,x) + d_x(z), c(u,w) + d_w(z) \}$$
$$= \min \{2 + 5, 1 + 3, 5 + 3\} = 4$$

- Node achieving minimum is next hop in shortest path, so used in forwarding table
- For the moment assume each node knows its own costs

Distance vector algorithm
• \( D_x(y) \) = estimate of least cost from \( x \) to \( y \)
– \( x \) maintains distance vector \( D_x = [D_x(y): y \in N] \)
• node \( x \):
– knows cost to each neighbor \( v \): \( c(x,v) \)
– maintains its neighbors’ distance vectors.
For each neighbor \( v \), \( x \) maintains \( D_v = [D_v(y): y \in N] \)

key idea:
- from time-to-time, each node sends its own distance vector estimate to neighbors
- when x receives new DV estimate from neighbor, it updates its own DV using the B-F equation:
\[ D_x(y) \leftarrow \min_v \{c(x,v) + D_v(y)\} \text{ for each node } y \in N \]
- The estimates \( D_x(y) \) converge to the actual least costs \( d_x(y) \) (under a few reasonable conditions we won't discuss here)

Distance vector algorithm

iterative, asynchronous: each local iteration caused by:
• local link cost change
• DV update message from neighbor
distributed:
• each node notifies neighbors only when its DV changes
– neighbors then notify their neighbors if necessary

each node: wait for (change in local link cost or msg from neighbor); compute estimates; if DV to any dest has changed, notify neighbors

Example of DV Algorithm
• Should look at worked example from book.
• Very simple one in your homework.
• Can see a worked example in a video here: https://www.youtube.com/watch?v=ylzAefKENXY

Hierarchical routing

our routing study thus far - idealization
• all routers identical
• network “flat”
… not true in practice

scale: with millions of destinations:
• can't store all dests in routing tables!
• routing table exchange would swamp links!

administrative autonomy
• internet = network of networks
• each network admin may want to control routing in its own network

Hierarchical routing
- aggregate routers into regions, “autonomous systems” (AS)
- routers in same AS run same routing protocol
- “intra-AS” routing protocol
- routers in different AS can run different intra-AS routing protocol
- Administrators can define own mechanism internally
- (if you are directly connected on the same switch you don't even need to use the routers)

**gateway router:**
- at “edge” of its own AS
- has link to router in another AS

Reminder: Connected by switch...

Q: how are interfaces actually connected?
A: we’ll learn about that in chapters 5 and 6.
A: wired Ethernet interfaces connected by Ethernet switches

For now: don’t need to worry about how one interface is connected to another (with no intervening router)

Router is then connected to other routers
If in same organization (controlled by same group) then same AS...

Interconnected ASes
- Forwarding table configured by both intra- and inter-AS routing algorithm
- *intra*-AS sets entries for internal dests
- *inter*-AS & *intra*-AS sets entries for external dests

Inter-AS tasks
• suppose a router in AS1 receives datagram destined outside of AS1:
▪ router should forward packet to gateway router, but which one?
AS1 must:
1. learn which destinations are reachable through AS2, which through AS3
2. propagate this reachability info to all routers in AS1
job of inter-AS routing!

Example: setting forwarding table in router 1d
- suppose AS1 learns (via inter-AS protocol) that subnet $x$ reachable via AS3 (gateway 1c), but not via AS2
- **inter-AS** protocol propagates reachability info to all internal routers
- router 1d determines from intra-AS routing info that its interface $I$ is on the least cost path to 1c
- installs forwarding table entry $(x, I)$ (used intra- and inter-AS routing to make final decision)

Example: choosing among multiple ASes
- now suppose AS1 learns from **inter**-AS protocol that subnet $x$ is reachable from AS3 *and* from AS2.
- to configure forwarding table, router 1d must determine towards which gateway it should forward packets for dest $x$
- this is also job of inter-AS routing protocol!
- *hot potato routing:* send packet towards the closest (lowest intra-AS cost) of the two gateway routers.
Hot potato routing, step by step:
1. learn from inter-AS protocol that subnet \( x \) is reachable via multiple gateways
2. use routing info from intra-AS protocol to determine the costs of the least-cost paths to each of the gateways
3. hot potato routing: choose the gateway that has the smallest least cost
4. determine from forwarding table the interface \( I \) that leads to the least-cost gateway; enter \((x,I)\) in forwarding table

Intra-AS tasks
- suppose router in AS1 receives datagram destined *inside* of AS1:
**AS1 must:**
1. decide which router (1a, 1b, 1c, 1d) will get the datagram to the final destination host.
*job of intra-AS routing!*

Protocols for Routing
• Intra-AS routing
– also known as *interior gateway protocols (IGP)*
– We'll look at RIP and OSPF
• Inter-AS routing
– inter-domain routing protocols
– We'll look at BGP

Intra-AS Routing Protocols
• also known as *interior gateway protocols (IGP)*
• most common intra-AS routing protocols:
– RIP: Routing Information Protocol (uses DV)
– OSPF: Open Shortest Path First (uses LS)
– Some others:
• IGRP: Interior Gateway Routing Protocol (Cisco proprietary)
• IS-IS: ISO protocol very similar to OSPF

RIP (Routing Information Protocol)
- included in BSD-UNIX distribution in 1982
- distance vector algorithm
- distance metric: # hops (max = 15 hops), each link has cost 1
- DVs exchanged with neighbors every 30 sec in response message (aka advertisement)
- each advertisement: list of up to 25 destination subnets (in IP addressing sense)
- Clearly this only works up to certain-sized networks – this is Intra-AS!
from router A to destination subnets:

<table> <thead> <tr> <th>subnet</th> <th>hops</th> </tr> </thead> <tbody> <tr> <td>u</td> <td>1</td> </tr> <tr> <td>v</td> <td>2</td> </tr> <tr> <td>w</td> <td>2</td> </tr> <tr> <td>x</td> <td>3</td> </tr> <tr> <td>y</td> <td>3</td> </tr> <tr> <td>z</td> <td>2</td> </tr> </tbody> </table>

RIP: example

routing table in router D:

<table> <thead> <tr> <th>destination subnet</th> <th>next router</th> <th># hops to dest</th> </tr> </thead> <tbody> <tr> <td>w</td> <td>A</td> <td>2</td> </tr> <tr> <td>y</td> <td>B</td> <td>2</td> </tr> <tr> <td>z</td> <td>B</td> <td>7</td> </tr> <tr> <td>x</td> <td>--</td> <td>1</td> </tr> <tr> <td>....</td> <td>....</td> <td>....</td> </tr> </tbody> </table>

A-to-D advertisement:

<table> <thead> <tr> <th>dest</th> <th>hops</th> </tr> </thead> <tbody> <tr> <td>w</td> <td>1</td> </tr> <tr> <td>x</td> <td>1</td> </tr> <tr> <td>z</td> <td>4</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table>

routing table in router D after receiving A's advertisement (z is now reachable via neighbor A in 1 + 4 = 5 hops):

<table> <thead> <tr> <th>destination subnet</th> <th>next router</th> <th># hops to dest</th> </tr> </thead> <tbody> <tr> <td>w</td> <td>A</td> <td>2</td> </tr> <tr> <td>y</td> <td>B</td> <td>2</td> </tr> <tr> <td>z</td> <td>A</td> <td>5</td> </tr> <tr> <td>x</td> <td>--</td> <td>1</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table>

OSPF (Open Shortest Path First)
• "open": publicly available (IETF standard)
• uses link state algorithm
– LS packet dissemination
– topology map at each node
– route computation using Dijkstra’s algorithm
• OSPF advertisement carries one entry per neighbor
• advertisements flooded to entire AS
– carried in OSPF messages directly over IP (rather than TCP or UDP)

Hierarchical OSPF
• For very large networks, can use OSPF hierarchically
• two-level hierarchy: local area, backbone.
– link-state advertisements only in area
– each node has detailed area topology; only knows direction (shortest path) to nets in other areas
• area border routers: “summarize” distances to nets in own area, advertise to other area border routers
• backbone routers: run OSPF routing limited to backbone
• boundary routers: connect to other ASes

Hierarchical OSPF
- backbone router
- boundary router
- area border routers
- internal routers
areas:
- area 1
- area 2
- area 3

Internet inter-AS routing: BGP
• **BGP (Border Gateway Protocol):** the de facto inter-domain routing protocol
– “glue that holds the Internet together”
• **BGP provides each AS a means to:**
– **eBGP:** obtain subnet reachability information from neighboring ASes.
– **iBGP:** propagate reachability information to all AS-internal routers.
– determine “good” routes to other networks based on reachability information and policy.
• allows subnet to advertise its existence to rest of Internet: “**I am here**”

BGP basics
- **BGP session**: two BGP routers (“peers”) exchange BGP messages:
- advertising *paths* to different destination network prefixes (“path vector” protocol)
- exchanged over semi-permanent TCP connections
- when AS3 advertises a prefix to AS1:
- AS3 *promises* it will forward datagrams towards that prefix
- AS3 can aggregate prefixes in its advertisement

BGP basics: distributing path information
- using eBGP session between 3a and 1c, AS3 sends prefix reachability info to AS1.
- 1c can then use iBGP to distribute new prefix info to all routers in AS1
- 1b can then re-advertise new reachability info to AS2 over 1b-to-2a eBGP session
- when router learns of new prefix, it creates entry for prefix in its forwarding table.
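The "path vector" behavior in these advertisements can be sketched minimally as follows. This illustrates AS-PATH prepending and loop suppression only; the function names and route format are made up for the example, not taken from any real BGP implementation.

```python
def accept_route(my_as, as_path):
    """Receiver-side loop check: reject a route whose AS-PATH already contains us."""
    return my_as not in as_path

def readvertise(my_as, prefix, as_path):
    """Prepend our AS to the AS-PATH before passing the route on (path-vector behavior)."""
    return prefix, [my_as] + as_path

# AS3 originates prefix x; AS1 learns it via eBGP and re-advertises it to AS2
prefix, path = "x", ["AS3"]
assert accept_route("AS1", path)        # AS1 is not on the path: accept
prefix, path = readvertise("AS1", prefix, path)
print(path)  # ['AS1', 'AS3']
assert not accept_route("AS3", path)    # AS3 sees itself on the path: loop, reject
```

The prepending step is also what makes "shortest AS-PATH" a usable route-selection criterion: every AS the advertisement crosses leaves a trace.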
Path attributes and BGP routes
• advertised prefix includes BGP attributes
– prefix + attributes = “route”
• two important attributes:
– **AS-PATH**: contains ASes through which prefix advertisement has passed: e.g., AS 67, AS 17
– **NEXT-HOP**: indicates specific internal-AS router to next-hop AS (may be multiple links from current AS to next-hop AS)
• gateway router receiving route advertisement uses **import policy** to accept/decline
– e.g., never route through AS x
– **policy-based** routing

BGP route selection
- router may learn about more than 1 route to destination AS, selects route based on:
1. local preference value attribute: policy decision (cost)
2. shortest AS-PATH
3. closest NEXT-HOP router: hot potato routing
4. additional criteria

BGP messages
- BGP messages exchanged between peers over TCP connection
- BGP messages:
- **OPEN**: opens TCP connection to peer and authenticates sender
- **UPDATE**: advertises new path (or withdraws old)
- **KEEPALIVE**: keeps connection alive in absence of UPDATEs; also ACKs OPEN request
- **NOTIFICATION**: reports errors in previous msg; also used to close connection

BGP routing policy
- A, B, C are provider networks
- X, W, Y are customers (of provider networks)
- X is dual-homed: attached to two networks
- X does not want to route from B via X to C
- .. so X will not advertise to B a route to C

BGP routing policy (2)
- No way! B gets no “revenue” for routing CBAW since neither W nor C are B’s customers
- B wants to force C to route to W via A
- B wants to route *only* to/from its customers!

Why different Intra-, Inter-AS routing?

**policy:**
- inter-AS: admin wants control over how its traffic is routed, who routes through its net
- intra-AS: single admin, so no policy decisions needed
**scale:**
- hierarchical routing saves table size, reduces update traffic
**performance:**
- intra-AS: can focus on performance
- inter-AS: policy may dominate over performance
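The elimination rules above can be sketched as one tie-breaking comparison. The attribute names are illustrative (not from any real BGP implementation), and rule 4's vendor-specific criteria are omitted:

```python
def bgp_select(routes):
    """Pick the best route by the slide's rules, in priority order:
    1. highest local preference, 2. shortest AS-PATH,
    3. closest NEXT-HOP by intra-AS (IGP) cost -- hot potato routing."""
    return max(
        routes,
        key=lambda r: (
            r["local_pref"],              # rule 1: higher wins
            -len(r["as_path"]),           # rule 2: fewer ASes wins
            -r["igp_cost_to_next_hop"],   # rule 3: cheaper NEXT-HOP wins
        ),
    )

routes = [
    {"local_pref": 100, "as_path": ["AS2", "AS7"], "igp_cost_to_next_hop": 5},
    {"local_pref": 100, "as_path": ["AS3"],        "igp_cost_to_next_hop": 9},
]
best = bgp_select(routes)
print(best["as_path"])  # ['AS3'] -- equal local pref, shorter AS-PATH wins
```

Note the priority encoded in the tuple: the second route wins on AS-PATH length even though its NEXT-HOP is farther away, because hot potato routing only breaks ties left over from rules 1 and 2.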
The Bazaar Approach to Physics Education
Seth Rosenberg [UMd/CCNY]
Dick Furnstahl [The OSU]

• Problems with “conventional” PER-based reform → particularly for upper-level courses
• Development models for computer software: “The Cathedral and the Bazaar”
  – evolvable systems (cf. physics research!)
  – importance of open-source philosophy
  – implementation (sociology)
• Tools for Bazaar-style development
• Prototype project

A recent project at Ohio State funded by the NSF attempted to introduce “reform” elements into an upper-level course for physics majors previously taught in a traditional, pure-lecture format. These reforms included using a studio classroom and greatly increasing the active component of lectures (such as with informal groupwork). The PI was Dick Furnstahl, an active researcher in nuclear theory. [Apologies for Furnstahl’s absence from the AAPT meeting: quarter system at OSU and 2 classes 5 days a week!] Seth Rosenberg helped with development and assessment of the project.

While there have been positive outcomes from the project, there have also been many difficulties and frustrations in trying to develop, implement, and assess the reform curriculum. These problems are common to many physics education research and development projects but are particularly acute in upper-level courses.

Here we propose analogies with computer software development models. A comparison of two extremes was made by Eric Raymond, who labeled them the cathedral and the bazaar (note the spelling!) [RA1]. We associate conventional textbook development and PER approaches with the cathedral and argue here for more bazaar-style development. We will present an overview and examples. The emphasis is on proven tools for open-source software development that might be taken over for PER development.
Limitations of PER: Upper Level Courses

- Cycle time is long
- Alpha/beta testing population is very small
- Limited and/or outdated connections to contemporary research
- The need to reinvent wheels
- Unrecognized subtleties in the physics
- Curriculum development and assessment is a huge time sink for an active research physicist
- Community acceptance of PER

Conventional approaches to curriculum development, implementation, and assessment have many limitations, particularly when applied to upper-division courses.

- **Cycle time is very long**
  - for upper-level courses, it is typically a full year to revisit the same content → comparable time between successive trials of an experiment or assessment
  - many years to produce *publishable* results in the form of tutorials, texts, or papers (by conventional standards)
- **Alpha/beta testing population is very small**
  - often just local physics majors → small numbers
- **Limited and/or outdated connections to contemporary research**
  - materials tend to provide snapshots of research at best, and then are static (with infrequent revisions at best)
  - this is particularly unfortunate given the wealth of accessible modern experiments (e.g., AMO QM experiments)
  - but more extensive connections require interaction with researchers (typically absent from development of curriculum)

• The need to reinvent wheels
  – particularly clear with software-based work: we wanted a student feedback system that could accommodate equations. None of the existing “products” on the market could do that, but we couldn’t build on them because they were “closed source”. We had to start entirely from scratch rather than adding value to existing products.
– having to reinvent wheels hinders innovation by erecting a large barrier to participation by most of the people who could make significant contributions – claim: applies to non-software materials (e.g., tutorials) as well • Recognizing subtleties in the physics – quantum mechanics, for example, is tricky! Researchers generating curriculum materials often miss subtleties or miss opportunities for better explanations. *The best physics is in texts by the best physicists (e.g., Purcell).* – the difficulty here is the small number of developers (conventional beta testing is too inefficient and incomplete and happens too late in the development) • Curriculum development and assessment is a huge time sink for an active research physicist – discourages efforts → waste of expertise! – or else very limited (or one-time) assessment is done • Community acceptance of PER – when conventional physics “researchers” are not involved in PER, they tend not to believe it and resist implementing it, particularly since there is usually an increased time commitment. – waiting until rock solid PER results are available before implementation is too slow and not consistent with effective physics research, which is incremental (complete and correct theories do not appear at once) and characterized by parallel efforts and shared results. • Bottom line: There is a tremendous pool of talent that is not being tapped. Ok, what do we propose to do? 
*Time for some analogies!*

Two Models for Software Development

• *The Cathedral and the Bazaar* [Eric Raymond]
  – Cathedral → closed source, proprietary development (e.g., Microsoft)
  – Bazaar → open-source development model
    * massively parallel development
    * keywords: openness, peer review, free (as in speech, not beer) software
• Development environments
  – Cathedral mode: “...carefully crafted by individual wizards or a small band of mages working in splendid isolation, with no beta to be released before its time.”

Eric Raymond’s essay “The Cathedral and the Bazaar” [RA1] provides a starting point for a discussion of competing models of software development. One can take issue with various details in this and other Raymond essays, but they provide a useful overview for this discussion.

- The difference between open and closed source involves the freedom (“free as in speech, not beer”) to examine, modify, and incorporate elsewhere a piece of software. This is essential for large-scale peer review and massively parallel development.
- One should note here and throughout the discussion the similarities of the bazaar mode and effective physics research (note that there is also much cathedral building in physics, which often hinders progress!).

Cathedral mode always involves beta testing, but in a very restricted form. Beta testers are not treated as co-developers. Cathedral beta testers merely point at problems; open-source alpha and beta testers offer fixes (which are evaluated and incorporated).

• Development environments (cont.)
  – Bazaar mode: “...great babbling bazaar of differing agendas and approaches ...out of which a coherent and stable system could seemingly emerge only by a succession of miracles.”
• Advantage of evolvable systems
  “Centrally designed protocols start out strong and improve logarithmically. Evolvable protocols start out weak and improve exponentially.” “Only solutions that produce partial results when partially implemented can succeed.
Evolvable systems begin partially working right away and then grow, rather than needing to be perfected and frozen.” • Contributors to the bazaar come in with “differing agendas and approaches” and many different motivations (like the physics community!). – An important feature of successful projects (more later) is that while most participants are in a great babbling bazaar, the final decisions are more coherently reached by a small group (sometimes one person). • The advantages of evolvable systems are obvious. The question is how to achieve them. The claim is that cathedral-style development is not well suited to create evolvable systems, but bazaar-style development is. – growth of web is good example [SH1] – decision to include “View source . . .” in first browsers made html code of any page open so anyone could emulate and build on page layouts or techniques → lowered barriers to new pages → enormous pace of design development. (Ask yourself: would Microsoft have included “View source . . .” if they had led the way?) – also essential to evolvability: separation of the software engine (which does the work) from the user interface (which does the viewing and control). In contrast, software with tight integration between creation, file format, and display (such as Excel or Lotus Notes) does not easily evolve. Examples of Open Source Projects - **Cathedral**: Microsoft Windows 2000 and Office, Adobe Illustrator, Mathematica, etc. - note: these are very good products! - **Bazaar**: Apache, sendmail, Gnu/Linux, Perl, Python, PHP, Samba, MySQL, BIND, ... 
- Apache runs ~50% of the world’s web servers
- Perl is behind much WWW “live content”
- BIND provides domain name service for the ’net
- sendmail is the main email transport software
- plus thousands of smaller-scale projects
- **High quality**: many are “category killers”
- **Complex**: e.g., Linux (complete operating system)

Many (or most) of the familiar software products on the market are proprietary, closed source, and were built mostly cathedral-style. The argument here is not that you cannot develop excellent products this way, but that the bazaar offers an alternative that has many compelling features.
– For example, innovation is enhanced.
– MIT’s Technology Review recently compiled a list of 100 young innovators in science, technology, and the arts [MIT1]. Selected as innovator of the year was Miguel de Icaza of Mexico City for his work on the open-source GNOME project. Also on the list: Linus Torvalds (Linux czar). Conspicuously absent: anyone from Microsoft!

There is a long list of open-source software tools and applications. The quality is very high because of the open-source market environment and the nature of the hacker culture. See [OR1].
– Many of these are “category killers”: not only extremely capable and robust, but so good that no commercial competition has challenged them (e.g., BIND and sendmail)

The projects are not restricted to simple programs. The GNU/Linux operating system is an example of a complex system developed in bazaar mode. It consists of the Linux kernel plus hundreds of open-source packages.
– Linux has evolved rapidly: it started in 1991 as a hobby of graduate student Linus Torvalds, who wanted a unix-like operating system for his 386 PC.

Examples of Cathedral vs.
Bazaar in PER - Cathedral → ordinary textbook development, conventional PER model - Bazaar → none as yet (although many projects have this flavor to some degree) Development community - Cathedral: Microsoft → textbook authors, PERG members - Bazaar: Hackers → Physics teachers (and students!) at all levels * large (potential) pool of expertise * willingness to contribute “in spare time” Before going further, we'll make explicit the parallels we have in mind between physics education curriculum development and software development. - cathedral, closed-source development is the usual mode for textbooks but also for physics education research, development and dissemination projects. Our claim is that these have the characteristics of centrally designed protocols: they start out strong but only improve logarithmically. - we are unaware of any projects that are really in the bazaar mode, although many have some aspects (such as open source) - we can argue endlessly about whether this characterization is fair, but this would not be constructive compared to actually trying to run a project in bazaar mode. We propose such a project below. The development community is clear for the cathedral. Our claim is that for the bazaar there is a strong parallel to the hacker community, which are the physics teachers (and some students). - "in spare time" means teaching for many research professors! • **Example: Technical Support** – **Cathedral:** Pay the vendor for support → support from textbook authors, PERG experts * in principle, reliable support, since “paid for” * in practice, support can be uneven or very limited or have a slow turnaround – **Bazaar:** The Internet! 
→ tap the resources of the physics community
  * in principle, unreliable since no one is in charge or directly accountable
  * in practice, tremendous resources tapped by web search engines
  * Infoworld gave its “Best Technical Support Award” to “Linux people on the Internet”

• One can develop the parallels further, but here we just cite an example: technical support of a product. Cathedral support comes from the vendor and you pay for it (often as part of the purchase price). Bazaar support comes from the community.
  – Linux support is incredible. Almost any arcane problem or topic: someone has encountered it and there are posted resolutions. If not, post yourself and see the amazingly fast turnaround time.

Bazaar-Mode Lessons (Raymond)

Some maxims for open-source development
→ from Raymond’s experience and observations
→ each is applicable to PER!

1. Every good work of software starts by scratching a developer’s personal itch.
2. Good programmers know what to write. Great ones know what to rewrite (and reuse).
3. “Plan to throw one away; you will, anyhow.”
4. If you have the right attitude, interesting problems will find you.
5. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.
6. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.

There are many important characteristics of bazaar-mode development. To give you a flavor, we consider some of the maxims Eric Raymond presents (the ones in red) and how they carry over to PER and curriculum development and reform [RA1].

1. E.g., Linux started with Linus Torvalds’ itch for a simple 386 OS. Open source (“free software”) is essential here. Otherwise the barriers are too high, with too much reinventing of the wheel. But if you can start from someone else’s work and go in your own direction (like physics research!), then it works.
5. This is an essential part of the culture. The project is the focus, not the developer.
There are many examples of open-source projects being “handed off”. 6. This is a difference from the ordinary use of beta testers. 7. It is essential to release early and often. But working on the bleeding edge is not for everyone. Since many users do not want buggy versions that change daily, one separates “developer’s” versions from “stable” releases (there is a standard version numbering system that keeps track of these distinctions). 8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone (else). 9. Smart data structures and dumb code works a lot better than the other way around. 10. If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource. 11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better. 12. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong. 13. “Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away.” 14. Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected. 8. This is a great advantage of massively parallel development. Note that the problem identification and the best fix will very often come from different people. 10. Empirical observation: People will make contributions of their valuable time if they are treated right. 11. This is how advisors stay in business! The most original ideas frequently come from the young and inexperienced, but one must be able to separate the wheat from the chaff. 
Implementation of PER-Based Bazaars - **Successful bazaar projects don’t run themselves** - bazaar characterizes dynamics of contributors - control at the top is essential to provide quality assurance and to avoid fragmentation - “hacker” culture → constraints - **Implementation models for bazaar development** - Choose from successful models in open-source software development - E.g., benevolent dictator (Linux) or voting council (Apache) - Difficulties in maintaining a consistent style necessitate a dictator or small council • One might be misled by the description of the bazaar into thinking that one just tosses an idea and some source code out onto the net and a complicated project builds and maintains itself. **This is not the case!** Successful bazaar projects don’t run themselves, but are (usually) tightly controlled at the top by the project coordinator(s) [CO1]. • The “bazaar” aspect of the development is in the dynamics of the contributors to the project, who are outside of the control of the coordinators. They can do what they want based on their own motivations. (In a more global sense, the interplay of different projects is also bazaar-like.) Of course, one of their choices, if they don’t like the direction the project is taking, is to split off (“fork”) with their own version (or, less drastically, tailor it to their own needs). • The leadership at the top has to provide the quality assurance to prevent excessive forking and fragmentation. In practice, there are constraints from the hacker culture (“hacker” is not a derogatory term, by the way, and should not be confused with “cracker”), see below. • How is the top-level control implemented? There are several successful models, ranging from benevolent dictatorships to rotating dictatorships to constitutional monarchies to ruling councils (see references linked in [RA1]). – There are typically two tiers of contributors as well: ordinary contributors and co-developers. 
The co-developers have responsibility for major subsystems. • Critical importance of open source – Stallman: “free as in speech, not as in beer” – computer code but also curriculum materials – materials must be reproducible at all stages, not just in polished, published form • Role of PERG members? – initiators and coordinators of “Open Source” PER-based projects • Why not just rely on new journal(s)? – pace of conventional PER is glacial – participation from very small fraction of physics community – threshold for “publishability” is too high * too complete a product or result is required The success of bazaar-style development depends critically on having open source. - This is also known as “free software,” which is often misinterpreted as meaning non-commercial (no cost). In fact, “free” in this context refers to the freedom one has to read, modify, and incorporate the code. Richard Stallman, the mastermind behind the GNU project (he also wrote emacs!), stresses that it is “free as in speech, not as in beer” and that commercial distributions are fine, as long as the code is available [FSF1]. For example, Redhat sells packaged versions of Linux, all of which could be downloaded from the net without charge. (That being said, most free software is, in fact, free of charge.) - As applied to physics education development, this means not only that software is open, but all materials are open and can not only be used, but adapted and incorporated (with proper attribution) or tailored to local environments. The “agreement” is that improvements are fed back to the coordinator. - This is just like in physics research, where new ideas can be taken and extended without constraint. In many instances, research is not open, as when codes or raw data are not released or the description of a calculation is insufficient to allow the reader to reproduce it completely. In such cases, the pace of innovation and development is dramatically slowed! 
• What about members of physics education research groups? – They are ideally suited to be the initiators and/or coordinators of open source projects based on physics education research. – Credit is not a problem: the coordinator tends to receive credit far beyond his/her individual contributions! (Translating this into documentation for a tenure case *is* possible.) • As a side note, some comments on the new journal(s) that have been started to provide outlets for physics education research: – they serve a vital function, but are not sufficient for the type of development we describe – the pace is slow and participation is low (from the large community of physics teachers and researchers) – most importantly, the threshold for participation is simply much too high. Because of (necessarily) high standards, it takes much too long and requires too much time and effort for anyone who is not a dedicated physics education researcher. The “masses” are excluded! – one might argue that this is true of any physics specialty, but PER *curriculum development* is different in this respect. [*This argument needs to be fleshed out!*] • **How do you make money?** – figurative (how do PE researchers get tenure?) – literal (how do publishers stay in business?) – funding based on peer-reviewed outcomes • So if everything is free how does one make money? One can ask this question in both figurative and literal senses. − Figuratively, the general payoff for contributing to an open source project is in reputation (just as in physics research!). * in tenure cases, this is documented in the letters from experts in the field. * the bean counters will want some definite metrics, however (e.g., number of publications or citations). − Literally, there is the question of how publishers fit into the equation. 
* while there are business models for making money with open-source projects, it is too early to judge whether they are viable
− How should government agencies (e.g., NSF-DUE) fund open-source projects?
  * fund the coordinator(s)
  * evaluate based on peer-reviewed outcomes, as usual

A pre-emptive attack on some potential criticisms:
1. But curriculum materials are “open source” by construction! (I.e., anyone can read them.)
2. What about intellectual property rights?
3. Why should someone spend time producing materials that anyone can copy?
4. What about quality control?
5. Won’t there be a million versions of everything, many of them wrong, untested, etc.?
6. If this approach is so great, why wasn’t it used before?

1. Just because the materials can be read doesn’t mean they are open source. One needs: the source code (TeX or Word files that can be edited directly) and permission to change or extend the material and use/distribute it (with appropriate restrictions → open-source licensing). Much effort has gone into developing appropriate licensing for this process [LI1].
2. The culture and the licensing are geared to maintain credit for intellectual developments. Copyrights are held by the authors but you cannot charge license fees. Removing a person’s name from a project history is absolutely not done without consent. The treatment of intellectual property is similar to that for theoretical ideas in physics.
3. cf. physics → different utility function. Reputation is the coin!
4. The examples from the open-source community show that quality control can be excellent in bazaar development. The testing and quality assurance is in the process (massive peer review) and the control at the top.
5. Several issues here:
  - The *possibility* of code “forks” is essential because the globally best path to take may not be clear locally. The historical experience of forking in software development has pluses and minuses.
Most of the minuses were associated with proprietary versions, such as the fractionation of unixes in the 1980s. But the pluses are that different paths can be explored and then compete to determine the best one (needed to find global minima!). Examples are glibc vs. libc (a big fight over C libraries that was eventually won by glibc), the different Linux distributions (which take turns leapfrogging each other), and KDE vs. GNOME.
  - Not everything will work optimally, so one should rely on competition in the “market” to find the most effective solution(s) (or to develop different variations for different environments).
  - In practice, forking and fragmentation in open-source software is suppressed by unwritten community rules that “assign control of an open-source project, including the right to designate ‘official’ versions, to a single entity (an individual, an informal group, or a formal organization).” Also, there are strong social pressures against forking.
  - “Rogue” versions have not been a problem because of public review and because there is typically a single place to get an “official” version that has undergone additional review and testing.
6. The maturation of the internet has led to qualitatively new possibilities for collaborative development. New tools are essential, however (next!).

Tools for Bazaar-Style Development

• Why now? Because of the internet + new tools for massively parallel collaborative development
  – Example of new technology changing how physicists work: Los Alamos preprint server
  – Here: some tools that can be adapted to physics education research
    * Bugzilla — bug-tracking system
    * Faq-O-Matic — “knowledge base” system
    * SourceForge — free hosting service

• Sample tools for massively parallel collaborative development. Qualitative difference because of quantitative change in tools.
• The Los Alamos server has dramatically changed the pace of development in some subfields of physics as well as permanently altered the role of journals.
A preprint in nuclear physics effective field theory will often have 2/3 of its references to unpublished Los Alamos preprints, and there may be several preprint cycles before journal articles appear. Different types of contributions (like summer school lectures) can carry equal weight. No obvious problems with lack of explicit refereeing; instead there is (implicit) widespread peer review (cf. Amazon.com book reviews or “open content”).
• See [BU1] for a long list of “Call Center, Bug Tracking, and Project Management Tools for Linux.”

Tools for the Bazaar [cont.]

- **Bugzilla** (http://www.mozilla.org/projects/bugzilla/) → sample bug-tracking system
  - A database for bugs that lets people report bugs via the web and then assigns them to appropriate developers
  - “bug” is a generic term for typos, software bugs, requests for enhancements, suggestions for changes
  - Bugzilla prioritizes bugs, coordinates schedules (“milestones”), maintains to-do lists, and tracks bug dependencies
  - Sample application: tracking errata and updates to curriculum materials

• Keeping track of “bugs” is essential.
  – note that “bugs” also include requests for enhancements
• Bugzilla is a database for bugs.
  – also called a “Defect Tracking System”
  – it lets people report bugs and assigns these bugs to the appropriate developers.
  – developers can use Bugzilla to keep a to-do list as well as to prioritize, schedule, and track dependencies.
  – far superior to shared lists and email.
  – see “The Bugzilla Guide Home Page” (http://www.trilobyte.net/barnsons/) for more detail, especially the section “Why Should We Use Bugzilla” (http://www.trilobyte.net/barnsons/html/why.html)
• Directly adaptable to curriculum materials!
  – Every text should use this type of tool!
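As a toy illustration of the “tracks bug dependencies” bullet (this is not Bugzilla’s implementation; the bug names are invented): if bug B must be fixed before bug A, a sensible work order is a topological sort of the dependency graph.

```python
# Toy sketch of bug-dependency tracking (illustrative; not Bugzilla's
# actual data model). Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# bug id -> set of bug ids it depends on (those must be fixed first)
deps = {
    "typo-in-ch2":   set(),
    "fig-3-redraw":  {"typo-in-ch2"},
    "new-exercises": {"fig-3-redraw", "typo-in-ch2"},
}

# A valid work queue: every bug appears after all of its prerequisites.
fix_order = list(TopologicalSorter(deps).static_order())
```

With a single dependency chain like this the order is forced; with a larger graph the sorter still guarantees that no bug is scheduled before the bugs it depends on, which is exactly what a coordinator needs when assigning errata and updates to contributors.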
• **Faq-O-Matic** ([faqomatic.sourceforge.net](http://faqomatic.sourceforge.net)) → “knowledge base” system – a web-based system that automates an FAQ (Frequently Asked Questions list) – highly searchable database of “questions” – anyone can contribute * permission system to establish hierarchy of moderators – Sample application: supplement to curriculum materials – See also: “open content” documentation [e.g., Andamooka ([www.andamooka.org](http://www.andamooka.org)) or *php manual* ([http://www.php.net/manual](http://www.php.net/manual)) ] • FAQ’s are great but difficult to maintain. Faq-O-Matic solves the problem! • “Open content” documentation “…opens books up to contribution, comment, and criticism via section-by-section community annotation and makes the book (including annotation) available for download.” Coordinated by David Sweet, recent physics Ph.D. from U.Maryland, now a quantitative analyst for a hedge fund. • **SourceForge** (http://sourceforge.net) – **Free** hosting service for open-source projects – **Features**: CVS repository, mailing lists, bug tracking, message boards/forums, task management software, site hosting, permanent file archival, full backups * **CVS** → Concurrent Version System A tool to keep track of changes made by developers working on the same files. * total web-based, **secure** administration * to set up a project, register as a site user, login, and register your project. That’s it! 
– **SourceForge** is itself an open source project * started to remove obstacles to open source software development * “A small idea that refused to stop growing.” – **Use to host open-source curriculum projects!** Extensive documentation is available at http://sourceforge.net; see the documentation page http://sourceforge.net/docman/?group_id=1 Prototype Bazaar Project - A common PE “itch” $\rightarrow$ physics simulations - Simulation software $\rightarrow$ natural starting point - use existing development tools and strategies - Modern version of CUPS (and similar projects) - relativity, E&M, quantum mechanics, ... - implement pedagogy from PER as part of PER research and development cycle - build in assessment and feedback tools • We finish up by sketching a proposed bazaar project that will (hopefully) go online later this year, coordinated initially by Dick Furnstahl (and anyone else he can con). • The first of Raymond’s maxims was that every good work of software starts by scratching a developer’s personal itch. A common itch in upper-level physics education is for good physics software simulations. 
• A development project for simulation software is a natural starting point for bringing the bazaar to physics education – can build on past efforts, such as CUPS, which had some elements of the bazaar – existing development tools and strategies are designed for software development – provides testbed for open source PE development and dissemination • The basic idea is to produce a modern, PER-based, open source version of the CUPS simulations [CU1] – cover all branches of physics in modular form – implement pedagogy from PER in the design of the simulation interfaces and in the associated materials as part of a PER research and development cycle – build in tools to permit “easy” assessment and feedback to the instructor and to the project • **Features:** – decouple hardware implementation (including graphics) from interface from simulation routines from associated curriculum materials – platform independence – downloadable (and updates) from the web * entirely open source • **Essential initial conditions:** – a critical mass of initial code is needed – can start off crude, buggy, incomplete, poorly documented but must have plausible promise – coordinator has to recognize (and use) good design ideas from contributors – coordinator sets vision rather than managing contributors → “the best innovators aren’t dictated to: they are turned loose” • Features: – As with web browsers, the key to evolvability is to decouple the hardware implementation from the interface \textit{and} from the simulation routines themselves \textit{and} from the associated curriculum materials \* “decoupled” means that these elements communicate through a well-defined, standardized protocol – platform independence is important for wide dissemination – downloadable (including all updates) from the web, entirely open source (of course!) • There are some essential initial conditions for a viable project. 
– There has to be sufficient code that works somewhat to show where things are going and that the project is feasible. It doesn’t have to be perfect! Plausible promise is the key.
– In addition, the coordinator(s) must be able and willing to recognize and incorporate good ideas from contributors. The coordinator cannot dictate to contributors, but must set a vision for the project (and make contributors feel that their efforts are worthwhile). [SH1]

• **Potential development community:**
  – physics teachers at all levels who would like to use the simulations in their classes
  – physics researchers simulating their specialty
    * cf. Physics 2000 simulations [http://www.colorado.edu/physics/2000/]
  – undergraduate students doing research projects → great source of computer expertise!
  – hackers interested in physics
    * large community (just read slashdot)
  – some subset → co-developers (e.g., in charge of individual modules)

The potential development community is extensive.
- physics teachers at all levels who would like to use the simulations in their classes
- physics researchers simulating their specialty, such as the Physics 2000 simulations from the University of Colorado
- undergraduate research projects → great source of computer expertise!
- hackers interested in physics; this is a large community that is apparent by browsing slashdot (“News for Nerds”)
- some subset of these contributors would become co-developers of various aspects of the project

Summary

• **Bazaar-style development addresses:**
  – short cycle times $\rightarrow$ parallel efforts
  – dissemination $\rightarrow$ same time as development
  – reinventing $\rightarrow$ reuse/extend existing materials
  – static content $\rightarrow$ continuous updates
  – time sink $\rightarrow$ lower barriers to contribute
  – physics subtleties $\rightarrow$ involve experts
  – community acceptance $\rightarrow$ get them involved!
• **Tools for massively parallel collaborative development are readily available**
• **Physics education curriculum development and reform have largely operated in Cathedral mode; it is time to try the Bazaar!**

• In summary, bazaar-style development addresses all of the limitations of physics education research for upper level courses:
  – parallel efforts at many different institutions alleviate the short cycle times, extend the alpha/beta populations, and build in large scale dissemination
  – open source means reuse rather than reinvention and, together with the development philosophy and culture, dramatically lowers the barriers for contributors (as well as institutionalizing continuous updates)
  – with the involvement of more physicists, bugs become shallow and modern pedagogy becomes better tested and accepted in the community
• The basic tools that enable massively parallel collaborative development are available — It’s time to try the Bazaar!

References

We have avoided associating dates with most of these references, which are still evolving. Some of these references have not been explicitly cited in the text, but all contributed ideas to the talk.

[BR1] Christopher Browne, “Linux and Decentralized Development,” http://www.firstmonday.dk/issues/issue3_3/browne/ or http://vip.hex.net/cbbrowne/lsf.html

http://www.lotus.com/developers/devbase.nsf/articles/doc2000091200 and the response from Eric Raymond,

[CU1] CUPS is the Consortium for Upper-Level Physics Software, which is a series of book/software packages with computer simulations on a wide range of physics topics. They are DOS programs written in Turbo Pascal. See http://www.physics.gmu.edu/~cups/ for details.

[FSF1] Richard Stallman is the driving force behind the Free Software Foundation. See http://www.fsf.org/philosophy/philosophy.html for an extensive set of links to the philosophy of free software and information about the GNU Project.
http://www.hecker.org/writings/setting-up-shop.html

http://doxpara.netpedia.net/core.html

[LE1] Josh Lerner and Jean Tirole, “The Simple Economics of Open Source,” http://papers.nber.org/papers/w7600.

[LI1] See http://www.opensource.org/licenses/index.html for descriptions of the most common open-source licenses.

[LK1] See http://www.alt.net/~lk/cathedral-bazaar.html

http://www.informationweek.com/story/TWB19990812S0003

http://www.techreview.com/magazine/tr100/index.asp and http://www.techreview.com/magazine/tr100/IOTYpr.asp for details.

[OR1] Tim O'Reilly, “The Open-Source Revolution,” in

[RA1] Eric Raymond, “The Cathedral and the Bazaar” and related essays. The quotes in the talk are taken from this essay. See http://www.tuxedo.org/~esr/writings/cathedral-bazaar/ for the current text and links to commentary and criticism. Other aspects of the open-source process and culture are discussed in “Homesteading the Noosphere” and “The Magic Cauldron” (available on the same page).
Chapter 5

Inter-process communication

5.1 Hardware support for synchronization

You should have already seen a number of hardware mechanisms for synchronizing threads. The most basic one available to an OS is to disable interrupts:

\textbf{Algorithm 5.1} Protecting a critical section by disabling interrupts
1: Disable all interrupts and traps
2: Access state in a critical section
3: Enable interrupts

Remarks:
- This technique doesn’t work in a situation with multiple cores or hardware threads running concurrently. Neither does it take into account DMA devices.
- Processes can’t be rescheduled inside the critical section. Indeed, this is effectively a mutex on the entire state of the machine.
- That said, inside the kernel on a uniprocessor it is extremely efficient for short critical sections.

To provide synchronization on a multiprocessor in user space, you need help from the memory system and instruction set.

5.1.1 Shared-memory synchronization instructions

Most of this section should be a recap from the Systems Programming and Parallel Programming courses.

Algorithm 5.2 Test-And-Set
1: inputs
2: \( p \) \{Pointer to a word in memory\}
3: outputs
4: \( v \) \{Flag indicating if the test was successful\}
5: do atomically:
6: \( v \leftarrow *p \)
7: \( *p \leftarrow 1 \)
8: end do atomically
9: return \( v \)

Remarks:
- Plenty of processors provide an instruction for this, or something equivalent like Read-And-Clear.
- Some Systems-on-Chip also provide peripheral hardware registers that function this way as well when you read from them.
Algorithm 5.3 Compare-and-Swap
1: inputs
2: \( p \) \{Pointer to a word in memory\}
3: \( v1 \) \{Comparison value\}
4: \( v2 \) \{New value\}
5: outputs
6: \{Original value\}
7: do atomically:
8: if \( *p = v1 \) then
9: \( *p \leftarrow v2 \)
10: return \( v1 \)
11: else
12: return \( *p \)
13: end if
14: end do atomically

Remarks:
- This is available on most modern processors as an instruction.
- It is strictly more powerful than TAS. In fact, it is as powerful as needed: you can show [HFP02] that any other atomic operation can be efficiently simulated with CAS, though not with TAS.

**Algorithm 5.4 Load-linked / Store Conditional (LL/SC)**

**Load-linked**
1. **inputs**
2. \( p \) {Pointer to a word in memory}
3. **outputs**
4. \( v \) {Value read from memory}
5. **do atomically:**
6. \( v \leftarrow \ast p \)
7. mark \( p \) as “locked”
8. **end do atomically**
9. **return** \( v \)

**Store-conditional**
10. **inputs**
11. \( p \) {Pointer to a word in memory}
12. \( v \) {Value to store to memory}
13. **outputs**
14. \( r \) {Result of store}
15. **do atomically:**
16. **if** \( \ast p \) has been updated since load-linked **then**
17. \( r \leftarrow 1 \)
18. **else**
19. \( \ast p \leftarrow v \)
20. \( r \leftarrow 0 \)
21. **end if**
22. **end do atomically**
23. **return** \( r \)

Remarks:
- Also known as “Load-locked”, “Load-reserve”, etc.
- Well-suited to RISC load-store architectures
- Often implemented by marking the line in the cache

**5.1.2 Hardware Transactional Memory**

LL/SC can be viewed as providing a highly restricted form of transaction (on a single word), which aborts if a conflicting update to the word has taken place during the transaction.

Definition 5.5 (Transactional Memory). Transactional Memory is a programming model whereby loads and stores on a particular thread can be grouped into transactions. The read set and write set of a transaction are the set of addresses read from and written to respectively during the transaction.
A data conflict occurs in a transaction if another processor reads or writes a value from the transaction’s write set, or writes to an address in the transaction’s read set. Data conflicts cause the transaction to abort, and all instructions executed since the start of the transaction (and all changes to the write set) to be discarded.

Example 5.6. Intel’s Transactional Synchronization Extensions (TSX) provide three basic instructions for implementing “Restricted Transactional Memory” or RTM: XBEGIN (which starts a transaction), XEND (which commits a transaction), and XABORT (which forces the transaction to abort). There is also XTEST, which returns whether it is executing under a transaction.

If a transaction aborts, the processor rolls back the write set and jumps to a fallback instruction address specified by the XBEGIN instruction, with register information saying why the transaction aborted. This code can then choose to retry the transaction, or do something else (like take out a conventional lock instead).

TSX also provides an alternative to RTM called Hardware Lock Elision (HLE). Under HLE, the code is written to take out locks using atomic instructions as above, but the processor doesn’t actually do this the first time. Instead, it executes the critical section under a transaction, and only if it aborts does it try again, this time really taking out the lock.

Remarks:
- As with LL/SC, HTM is usually implemented using the cache coherency protocol to mark lines in the cache as part of the read and write sets. Coherency messages signal remote accesses, which then cause aborts. For this reason, conflict detection is actually done at the granularity of entire cache lines rather than just words.
- As with many speculation-based CPU features, HTM is notoriously difficult to get right. The first Intel Haswell and Broadwell processors to be sold supporting TSX had to have the functionality disabled in microcode after serious bugs came to light.
- There are many things other than conflicts or an explicit instruction that can cause an HTM transaction to abort (false sharing, interrupts, etc.). The “false abort rate” is an important measure of the effectiveness of an HTM implementation.
- There is a limit to the size of read and write sets that can be checked (such as in the L1 cache). If this is exceeded, the transaction aborts. It’s important not to retry a transaction like this, since it’s always going to abort. The abort handling code is therefore usually supplied with information about whether the CPU thinks the transaction should be retried or not.

5.2 Shared-memory synchronization models

We’ll assume you’re already familiar with semaphores (and P, V operations), mutexes (Acquire, Release), condition variables (Wait, Signal/Notify, Broadcast/NotifyAll), and monitors (Enter, Exit). Our focus here is the interaction of these operations with the rest of the OS, in particular the scheduler, assuming for the moment a priority-based scheduler (as in Unix, Windows, etc.).

**Definition 5.7 (Spinlock).** A spinlock is a multiprocessor mutual exclusion primitive based on one processor spinning on a memory location written by another.

**Algorithm 5.8** TAS-based spinlock
1: inputs
2: p is an address of a word in memory
3: Acquire the lock:
4: repeat
5: v ← TAS(p)
6: until v = 0
7: . . .
8: Release the lock:
9: *p ← 0

**Remarks:**
- Spinlocks only make sense on a multiprocessor: if you’re spinning, nobody else is going to release the lock.
- A pure spinlock only makes sense if the duration that any process holds the lock is short; otherwise, it’s better to block.

**Definition 5.9** (Spin-block problem). The spin-block problem is to come up with a strategy for how long a thread should spin waiting to acquire a lock before giving up and blocking, given particular values for the cost of blocking, and the probability distribution of lock hold times.
**Theorem 5.10** (Competitive spinning). In the absence of any other information about the lock hold time, spinning for a time equal to the cost of a context switch results in overhead at most twice that of the optimal offline algorithm (which has perfect knowledge about the future). This bound is also tight: no online algorithm can do better than a factor of two.

Proof: see Anna Karlin et al. [KMMO90].

**Remarks:**
- The proof is subtle (and worth reading!), but the intuition is as follows: in the best case, you avoid a context switch and save time. Otherwise, your overhead is at worst twice as bad as immediately blocking.

5.3 Messages: IPC without shared memory

The alternative to communication using shared data structures protected by thread synchronization primitives is to send messages instead. You’ve already seen this in networking using sockets.

Message passing is best thought of as an abstraction: it’s perfectly possible to implement thread synchronization using only messages, and vice versa. This was famously demonstrated by Hugh Lauer and Roger Needham \cite{LN79}, who showed that the two models are essentially equivalent, but can vary greatly in performance based on the properties of the underlying hardware.

**Definition 5.11 (Asynchronous IPC).** In *asynchronous* or *buffered* IPC the sender does not block, but the send operation instead returns immediately. If the receiving process is not waiting for the message, the message is buffered until the receive call is made. On the receive side, the receive call blocks if no message is available.

**Definition 5.12 (Synchronous IPC).** In contrast, in a *synchronous* or *unbuffered* IPC system, both sender and receiver may block until both are ready to exchange data.

**Remarks:**
- Asynchronous IPC is the model you’re probably most familiar with from network sockets.
- “Asynchronous” and “synchronous” are heavily overloaded terms in computer science, but here “asynchronous” means that the send and receive operations do not need to overlap in time: a send can complete long before the corresponding receive starts.
- Asynchronous IPC implies a buffer to hold messages which have been sent but not yet received. If this buffer becomes full, it’s not clear what to do: you can discard messages (as in UDP), or block the sender from sending (as in TCP) until the buffer is drained.
- Synchronous IPC, on the other hand, requires no buffering, merely two threads synchronizing in the OS kernel.
- You’ve probably heard of “non-blocking” I/O, which is an orthogonal concept.

**Definition 5.13 (Non-blocking I/O).** *Blocking* communication operations may block the calling thread (such as the asynchronous receive call described above). *Non-blocking* variants of these operations instead immediately return a code indicating that the operation should be retried.

**Remarks:**
- Non-blocking operations can be thought of as *polling*: there’s usually some kind of operation that tells you which potential non-blocking operations would succeed if they were tried right now: see \texttt{select()} or \texttt{poll()} in UNIX.
- You can have *synchronous, non-blocking* operations: the send call only succeeds when the receiver is waiting.

Example 5.14 (Unix pipes). Pipes are the most fundamental IPC mechanism in UNIX, and are closely related to \texttt{fork()}; one might reasonably claim that UNIX is any OS based on \texttt{fork()} and \texttt{pipe()}. A pipe is a unidirectional, buffered communication channel between two processes, created by:

\begin{verbatim}
int pipe(int pipefd[2])
\end{verbatim}

Each end is identified by a file descriptor, returned by reference in the array \texttt{pipefd}. One sets up a pipe between two processes by creating the pipe, then forking the other process. This is, at heart, how the shell works.
When you create a pipe, you immediately get both end-points in one go. We can make this model more flexible, for example allowing the processes at each end of the pipe to be created independently.

Example 5.15 (Unix domain sockets). Like network sockets, UNIX domain sockets can be bound to an address, which in this case is a filename. The filename can then be used by clients to connect to the socket.

\begin{verbatim}
int s = socket(AF_UNIX, type, 0);
\end{verbatim}

This allows us to split up, on the client side, the name of a communication end-point (the filename in this case) from the reference you use to actually send and receive data (the file descriptor you get back).

Example 5.16 (Unix named pipes). Named pipes (also called “FIFO”s) go one step further by allowing both client and server to open the FIFO based on its name (it’s also a special file type). You can create a FIFO from the command line:

\begin{verbatim}
$ mkfifo /tmp/myfifo
\end{verbatim}

5.4 Upcalls

So far, every operation to do with communication that we have seen has involved the sender or receiver (in other words, a userspace process) calling “down” into the OS to perform an operation (send, receive, etc.).

Definition 5.17 (Upcall). An upcall is an invocation by the operating system (usually the kernel) of a function inside a user process. The called function in the user program is called the entry point, or the upcall handler.

Remarks:
- This is the inverse of a regular system call: the OS calls the program. It is a very important structuring concept for systems, and yet not widely known among non-systems programmers. One way to view an upcall is as the generalization of an interrupt.
- Obviously, the kernel has to know what to call in the user program, i.e. the address of the upcall handler.
If the OS is running conventional processes, and the process has been preempted when the kernel is entered, this naturally raises the question of what happens to the previous thread context that was saved by the OS. One approach is to keep this around, and treat the upcall as running in a “special” context which only exists until it returns (to the kernel). Alternatively, the OS might choose to pass the previously-saved thread context to the user program, in case it wants to do something with it (like resume the thread).

Example 5.18 (Unix signals). Signals are an example of the first type of upcall mechanism: the user program registers “signal handlers” as functions that can be called to deliver a signal. Unix systems have a fixed number of signal types (see “man 7 signal”). For each signal, an action can be specified: ignore the signal, call a handler function, terminate the process completely, etc.

Remarks:
- Signals raise some interesting concurrency questions. Which stack does the signal handler run on, for example? Another is: what happens if the signal handler issues a system call, given that it’s not really part of the regular process?
- It turns out that signal handlers are quite limited in what they are allowed to do. For example, “man 7 signal-safety” will list the system calls that a signal handler is allowed to make, and there are not many of them. They do include signal() and sigaction(), however.
- Signal handlers can’t, in general, safely access program global or static variables, since the main process might have these protected under a lock when the signal handler is called. This rules out many standard C library calls as well (including the reentrant “_r” variants of functions like strtok()).
- It is possible to longjmp() out of a signal handler (and into the process) if you are careful. It’s a good exercise to figure out what the OS needs to do so that the process keeps running smoothly.
- As with all upcalls, what happens if another signal arrives?
If multiple signals of the same type are to be delivered, Unix will discard all but one – signals of the same type are basically indistinguishable. If signals of different types are to be delivered, Unix will deliver them all, but is free to do so in any order.

Example 5.19 (Scheduler activations). Scheduler activations take the idea of upcalls much further than signals. Every time a process is resumed by the kernel, instead of simply restoring its registers, the kernel upcalls into the process letting it know where the previous execution state has been stored. This allows the process to resume it, or do something different: the original motivation for this design was to run highly efficient user-level threads that were aware of when the process itself was preempted and rescheduled. Indeed, the first implementations also upcalled the process (on one core) whenever it was descheduled (on another core), just to let it know. The upcall handler is basically the entry point to a user-level thread scheduler.

Remarks:
• Scheduler activations allow a thread implementation that elegantly combines the performance of user-space threads with the predictability and flexibility of kernel threads, and this is why they were adopted in recent versions of Windows.
• As with signals, what happens if more than one scheduler activation is pending? The original implementation used a stack and a reentrant activation handler to allow multiple scheduler activations to be active at a time; but an alternative approach (published at the same time, but under a different name) simply disables upcalls until the activation handler tells the kernel it’s OK. In the meantime, the kernel simply resumes the process instead.

5.5 Client-Server and RPC

Message-passing can be fast, and has quite nice semantics (either a message is sent or it isn’t, and it’s either received or it isn’t). Moreover, over a network (as we’ll see later in the course), it’s the only way to communicate.
We will soon encounter cases which, for the moment at least, we do not see in a single-machine OS: lost messages, reordered messages, or messages that are delayed by some unbounded time. Until then, consider just two parties communicating by messages (often the common case). Typically, this interaction is asymmetric: one end of the communication is offering a service, while the other is using it. Definition 5.20 (Client-Server). In the client-server paradigm of distributed computing, a server offers a service to potentially multiple clients, who connect to it to invoke the service. Remarks:
• This is a distributed computing concept, but it applies even in the single-machine OS case (which is why we introduce it here). Indeed, the distinction between inter-process communication (within a single OS) and networked communication (between machines over a network) is increasingly blurred these days, and we’ll see more of this later. Rather than focussing on whether one or more machines is involved, it’s better to think about what network model is assumed between endpoints: can messages be lost? Reordered? Delayed indefinitely? etc.
• Pipes can’t handle client-server communication, since either the client or server (or a common ancestor) must have forked the other.
• Client-server requires a way to name the end-point where the server is offering the service, so that clients can connect to it. You have seen one way to deal with this: sockets, where the server address is passed to the `connect()` call by the client.
• If you write client-server code using sockets, however, you immediately encounter an issue: you find yourself writing the same “wrapper” code over and over again for every service and every client.
**Definition 5.21** (Remote Procedure Call).
Remote Procedure Call or RPC is a programming technique whereby remote client-server interactions are made to look to the programmer of both the client and the server like simple procedure calls: the client program calls the server using a simple procedure call, and the server program implements the service purely as a procedure with the appropriate name. How this works is as follows: the *signature* of the remote procedure is fed into a *stub compiler*, which outputs two chunks of code that go into libraries. The first is the client stub (or *proxy*), which implements the client side of the procedure: this takes its arguments, *marshals* them into a buffer, sends the buffer to the server, and waits for a message in reply. When this comes back, it *unmarshals* the return value(s) from the call, and returns to the client program. The second is the server stub, which performs the corresponding actions on the server side: wait for a message, unmarshal the arguments, call the server code with the arguments, marshal the return value(s) into a buffer, and send it back to the client. **Remarks:** - As described, this allows only one procedure to be made available remotely. In practice, this generalizes. The stub compiler can generate code for an *interface*, which is a collection of related procedures. All RPCs to the same interface go over the same connection, with an extra, hidden argument marshalled in: the *procedure number* of the function to call on the other side. - For languages without strong type systems, like C, a separate language is needed to define interfaces. **Definition 5.22** (Interface Definition Language). An *Interface Definition Language* or *IDL* is a small, domain-specific language for writing RPC interface definitions. **Remarks:** - If the RPC is to be carried over a network, both sides need to agree on a common representation for the arguments and results of the RPC. This requires, in networking terms, a *presentation-layer protocol*.
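The division of labour between the two generated stubs can be sketched as follows. This is a toy illustration only: `struct`-based marshalling stands in for a real presentation-layer protocol, an in-memory function stands in for the network transport, and all the names (`add`, `transport`, and so on) are made up:

```python
import struct

# Toy transport: "sends" a request buffer and returns the reply
# buffer (in a real system this would be a socket or kernel IPC).
def transport(request, server_stub):
    return server_stub(request)

# --- generated client stub (proxy) for: int add(int a, int b) ---
def add_client_stub(a, b, server_stub):
    buf = struct.pack("!ii", a, b)          # marshal the arguments
    reply = transport(buf, server_stub)     # send, wait for the reply
    (result,) = struct.unpack("!i", reply)  # unmarshal the return value
    return result

# --- generated server stub for the same procedure ---
def add_server_stub(request):
    a, b = struct.unpack("!ii", request)    # unmarshal the arguments
    result = add_impl(a, b)                 # call the real service code
    return struct.pack("!i", result)        # marshal the return value

# --- the service implementation the programmer actually writes ---
def add_impl(a, b):
    return a + b

print(add_client_stub(2, 3, add_server_stub))  # → 5
```

The client program only ever sees `add_client_stub`, and the server programmer only ever writes `add_impl`; everything in between is mechanical, which is exactly why a stub compiler can generate it.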
An example is XDR, the eXternal Data Representation used for, among other applications, the UNIX Network File System (NFS). • If the RPC is, instead, local to a machine (in which case it is called, without apparent irony, Local RPC), the use of a presentation-layer protocol is less important. However, the performance tradeoff is now different. For classical networked RPC, the time to execute the simplest possible RPC (the “Null RPC”, which just returns) is dominated by the network propagation delay. On a single machine, through the kernel, it can be dominated by the cost of entering and exiting the kernel. For a server which is executing lots of requests, or a client which needs to send many requests in sequence, this can be a bottleneck. Example 5.23 (RPC over synchronous IPC). A client executing an RPC needs to perform two operations, which are typically system calls: first, the send call, and second, the receive call to get the reply. Similarly, a server processing requests needs to receive the message, then execute another system call to send the result back. For this reason, high-performance local RPC systems allow the OS to combine two operations in a single system call. There are two important cases. The first is sometimes called “send with closed receive”: the operation sends a message to a given destination, and then the thread blocks waiting for a reply from that destination. This performs the whole of the client side of an RPC communication in one syscall. The second is sometimes called “send with open receive”: this sends a message to a destination and then blocks waiting for a message from any source. This is both halves of the server side combined, but in reverse order: the server calls this to send a reply and then block waiting for the next call. 5.6 Distributed objects How does this get set up?
As with TCP network connections, a server needs to create an end-point (in the TCP case, a listening socket) and advertise its address somewhere, while a client has to look up this address and connect to it. This usually requires a 3rd party. Definition 5.24 (Name server). A name server is a service (usually invoked using RPC) which holds the addresses of other RPC services. Servers register their services with the name server, and clients look up the service they want to get the address to connect to. The data that the name server stores and hands out for a service is sometimes called an interface reference. It’s a name for the service, and it can be passed around freely (which is why the name server can be an RPC server like any other). Definition 5.25 (RPC binding). To contact an RPC service, a client has to acquire an interface reference for the service, and then establish a binding to the service. Establishing the binding is basically setting up a connection, and results in an invocation reference, which is the required client stub. Binding can be explicit, in which case the client has to call some kind of “bind” or “connect” procedure to establish it. However, implicit binding is also possible: as part of unmarshalling an interface reference, the binding is established immediately and an invocation reference returned to the client (or server) instead. Remarks: - This is similar to the notion of a binding we saw early on, but here the binding is an association between the client stub and the remote service: the local pointer or reference the client program has to its stub is now, implicitly, bound to the remote server. - We didn’t talk about this in Chapter 2, but the client binding (and the analogous binding on the server side) are often first-class objects themselves: you can perform operations on them to manipulate the connection or query its status, for example. By now, you’re probably thinking that this is beginning to feel somewhat object-oriented.
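At its core, a name server is just a registry mapping service names to interface references, and explicit binding turns an interface reference into an invocation reference. A toy sketch of this flow — the stub produced by `bind` is faked, and all names and addresses are made up:

```python
# Toy name server: maps service names to interface references
# (here represented as (host, port)-style addresses).
registry = {}

def register(name, interface_ref):
    """Called by a server to advertise its end-point."""
    registry[name] = interface_ref

def lookup(name):
    """Called by a client; returns an interface reference."""
    return registry[name]

def bind(interface_ref):
    """Explicit binding: turn an interface reference into an
    invocation reference (a client stub bound to the server)."""
    def invoke(*args):
        # A real stub would marshal args and send them over a
        # connection to interface_ref; here we just fake a reply.
        host, port = interface_ref
        return ("result from %s:%d" % (host, port), args)
    return invoke

register("echo", ("10.0.0.7", 4711))       # server side
stub = bind(lookup("echo"))                # client side
print(stub("hello"))
```

With implicit binding, the `bind` step would instead happen automatically inside the unmarshalling code whenever an interface reference is received.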
**Definition 5.26** (Distributed object system). A *distributed object system* is an RPC system with implicit binding where interface references are viewed as object references, and the IDL (if present) defines classes of which services are instances. A local data structure called the *object table* holds a mapping from interface references to invocation references, so that when an interface reference arrives twice, only a single client stub is created. Remarks: - Note that a distributed object system need not be tied to a particular language: well-known examples like CORBA and DCOM are not. However, they do need to define their own type system if they cannot lift one from a single programming language. Most examples use C primitive types, plus records and (discriminated) unions, plus variable length arrays (sequences) and, of course, interface/object references. - The type system for interfaces can be arbitrarily sophisticated, supporting subtyping (interface inheritance), and (in some cases) distributed garbage collection. RPC, and distributed objects, are intuitive for programmers, but are also predicated on things mostly working: either your RPC was delivered, was executed, and you got a reply, or you get a program exception saying it didn’t. However, when implementing distributed algorithms which are designed to handle lost messages, delays, node failures, etc., this model isn’t realistic (often because these algorithms are intended to create this reliability in the first place). Hence, when discussing things like consensus, we tend to talk in terms of messages, whereas talking to web services is couched in terms of RPCs.
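The deduplicating behaviour of the object table can be sketched in a few lines. This is a toy model: real systems key the table on a globally unique reference and create a genuine bound stub rather than the placeholder class used here.

```python
# Toy object table: ensures that unmarshalling the same interface
# reference twice yields the same client stub (implicit binding).
object_table = {}

class ClientStub:
    def __init__(self, interface_ref):
        self.interface_ref = interface_ref  # where the server lives

def unmarshal_interface_ref(interface_ref):
    # On first sight, establish a binding and cache the stub;
    # afterwards, hand out the existing invocation reference.
    if interface_ref not in object_table:
        object_table[interface_ref] = ClientStub(interface_ref)
    return object_table[interface_ref]

a = unmarshal_interface_ref(("10.0.0.7", 4711))
b = unmarshal_interface_ref(("10.0.0.7", 4711))
print(a is b)  # → True: one stub per remote object
```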
Approximation Algorithms Approximation Ratios Optimization Problems - We have some problem instance $x$ that has many feasible “solutions”. - We are trying to minimize (or maximize) some cost function $c(S)$ for a “solution” $S$ to $x$. For example, - Finding a minimum spanning tree of a graph - Finding a smallest vertex cover of a graph - Finding a smallest traveling salesperson tour in a graph Approximation Ratios - An approximation algorithm produces a solution $T$ - $T$ is a **k-approximation** to the optimal solution $OPT$ if $c(T)/c(OPT) \leq k$ (assuming a minimization problem; for a maximization problem the ratio is reversed) - Put another way, $c(T) \leq k \cdot c(OPT)$ - For a maximization problem, we would instead have $c(T) \geq (1/k) \cdot c(OPT)$ - In both cases we assume $k > 1$. **Special Case of the Traveling Salesperson Problem** **OPT-TSP:** Given a complete, weighted graph, find a cycle of minimum cost that visits each vertex. - OPT-TSP is NP-hard - Special case: edge weights satisfy the triangle inequality (which is common in many applications): - \( w(a,b) + w(b,c) \geq w(a,c) \) Aside: Euler Tour of a Rooted Tree [Figure: an example rooted tree as input; the output Euler tour visits the vertices in the order 1 5 4 2 4 3 4 5 1] A 2-Approximation for TSP Special Case Algorithm $TSP_{Approx}(G)$ - **Input** weighted complete graph $G$, satisfying the triangle inequality - **Output** a TSP tour $T$ for $G$ 1. $M \leftarrow$ a minimum spanning tree for $G$ 2. $P \leftarrow$ an Euler tour traversal of $M$, starting at some vertex $s$ 3. $T \leftarrow$ empty list for each vertex $v$ in $P$ (in traversal order) if this is $v$’s first appearance in $P$ then $T$.insertLast$(v)$ $T$.insertLast$(s)$ return $T$ A 2-Approximation for TSP Special Case - Proof - The optimal tour is a spanning tour; hence $|M| \leq |OPT|$. - The Euler tour $P$ visits each edge of $M$ twice; hence $|P| = 2|M|$. - Each time we shortcut a vertex in the Euler tour we will not increase the total length, by the triangle inequality $(w(a,b) + w(b,c) \geq w(a,c))$; hence, $|T| \leq |P|$.
- Therefore, $|T| \leq |P| = 2|M| \leq 2|OPT|$. [Figure: the output tour $T$ (at most the cost of $P$), the Euler tour $P$ of MST $M$ (twice the cost of $M$), and the optimal tour $OPT$ (at least the cost of MST $M$)] Vertex Cover - A **vertex cover** of graph $G=(V,E)$ is a subset $W$ of $V$, such that, for every $(a,b)$ in $E$, $a$ is in $W$ or $b$ is in $W$. - **OPT-VERTEX-COVER**: Given a graph $G$, find a vertex cover of $G$ with smallest size. - **OPT-VERTEX-COVER** is NP-hard. A 2-Approximation for Vertex Cover - Every chosen edge $e$ has both ends in $C$ - But $e$ must be covered by an optimal cover; hence, one end of $e$ must be in OPT - Thus, there are at most twice as many vertices in $C$ as in OPT. - That is, $C$ is a 2-approx. of OPT - Running time: $O(n+m)$ **Algorithm** `VertexCoverApprox(G)`: **Input:** A graph $G$ **Output:** A small vertex cover $C$ for $G$ 1. $C \leftarrow \emptyset$ 2. **while** $G$ still has edges **do** - select an edge $e = (v, w)$ of $G$ - add vertices $v$ and $w$ to $C$ - **for** each edge $f$ incident to $v$ or $w$ **do** - remove $f$ from $G$ 3. **return** $C$ An Aside: Harmonic Numbers The \( n \)th harmonic number \( H_n \) is defined as \[ H_n = \sum_{i=1}^{n} \frac{1}{i}. \] **Theorem A.16:** If \( H_n \) is the \( n \)th harmonic number, for \( n > 1 \), then \( \ln n < H_n < \ln n + 1 \). **Solution:** For the upper bound, \[ H_n = 1 + \sum_{i=2}^{n} \frac{1}{i} \leq 1 + \int_{x=1}^{n} \frac{dx}{x} = 1 + \ln n. \] For the lower bound, \[ H_n \geq \sum_{i=1}^{n-1} \frac{1}{i} \geq \int_{x=1}^{n} \frac{dx}{x} = \ln n. \] Set Cover (Greedy Algorithm) OPT-SET-COVER: Given a collection of \( m \) sets, find the smallest number of them whose union is the same as the whole collection of \( m \) sets? - OPT-SET-COVER is NP-hard - Greedy approach produces an \( O(\log n) \)-approximation algorithm.
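Before looking at the greedy set-cover algorithm in detail, here is a runnable sketch of the TSP 2-approximation above on a small made-up Euclidean instance (so the triangle inequality holds), with the optimum computed by brute force for comparison; the point set is our own choice:

```python
import itertools
import math

# Metric instance: points in the plane, complete graph with
# Euclidean edge weights.
points = [(0, 0), (0, 2), (3, 0), (3, 2), (1, 1)]
n = len(points)

def w(a, b):
    return math.dist(points[a], points[b])

# Step 1: minimum spanning tree M (Prim's algorithm, rooted at 0).
in_tree = {0}
children = {v: [] for v in range(n)}
while len(in_tree) < n:
    u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
               key=lambda e: w(*e))
    children[u].append(v)
    in_tree.add(v)

# Steps 2-3: an Euler-tour traversal of M with repeated vertices
# shortcut is just a preorder DFS from the root.
def preorder(u):
    yield u
    for c in children[u]:
        yield from preorder(c)

tour = list(preorder(0)) + [0]
cost = sum(w(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

# Brute-force optimum (fine for 5 points) to check the 2-approx bound.
opt = min(sum(w(p[i], p[i + 1]) for i in range(n - 1)) + w(p[-1], p[0])
          for p in itertools.permutations(range(n)))
print(cost <= 2 * opt)  # → True, as the proof guarantees
```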
**Algorithm SetCoverApprox(\( S \))**: - **Input**: A collection \( S \) of sets \( S_1, S_2, \ldots, S_m \) whose union is \( U \) - **Output**: A small set cover \( C \) for \( S \) \[ \begin{align*} C &\leftarrow \emptyset \quad \text{// The set cover built so far} \\ E &\leftarrow \emptyset \quad \text{// The elements from \( U \) currently covered by \( C \)} \\ \text{while } E &\neq U \text{ do} \\ &\quad \text{select a set } S_i \text{ that has the maximum number of uncovered elements} \\ &\quad \text{add } S_i \text{ to } C \\ &\quad E \leftarrow E \cup S_i \\ \text{Return } C. \\ \end{align*} \] Greedy Set Cover Analysis Consider the moment in our algorithm when a set $S_j$ is added to $C$, and let $k$ be the number of previously uncovered elements in $S_j$. We pay a total charge of 1 to add this set to $C$ (meaning we made our cover one subset larger), so we charge each previously uncovered element $i$ of $S_j$ a charge of $c(i) = 1/k$. Thus, the total size of our cover is equal to the total charges made. To prove an approximation bound, we will consider the charges made to the elements in each subset $S_j$ that belongs to an optimal cover, $C'$. So, suppose that $S_j$ belongs to $C'$. Let us write $S_j = \{x_1, x_2, \ldots, x_{n_j}\}$ so that $S_j$’s elements are listed in the order in which they are covered by our algorithm. Now, consider the iteration in which \( x_1 \) is first covered. At that moment, \( S_j \) has not yet been selected; hence, whichever set is selected must have at least \( n_j \) uncovered elements. Thus, \( x_1 \) is charged at most \( 1/n_j \). So let us consider, then, the moment our algorithm charges an element \( x_l \) of \( S_j \). In the worst case, we will have not yet chosen \( S_j \) (indeed, our algorithm may never choose this \( S_j \)). Whichever set is chosen in this iteration has, in the worst case, at least \( n_j - l + 1 \) uncovered elements; hence, \( x_l \) is charged at most \( 1/(n_j - l + 1) \). 
Therefore, the total amount charged to all the elements of \( S_j \) is at most \[ \sum_{l=1}^{n_j} \frac{1}{n_j - l + 1} = \sum_{l=1}^{n_j} \frac{1}{l}, \] which is the familiar **harmonic number**, \( H_{n_j} \). It is well known (for example, see the Appendix) that \( H_{n_j} \) is \( O(\log n_j) \). Let \( c(S_j) \) denote the total charges given to all the elements of a set \( S_j \) that belongs to the optimal cover \( C' \). Our charging scheme implies that \( c(S_j) \) is \( O(\log n_j) \). Thus, summing over the sets of \( C' \), we obtain \[ \sum_{S_j \in C'} c(S_j) \leq \sum_{S_j \in C'} b \log n_j \leq b |C'| \log n, \] for some constant \( b \geq 1 \). But, since \( C' \) is a set cover, \[ \sum_{i \in U} c(i) \leq \sum_{S_j \in C'} c(S_j). \] Therefore, \[ |C| \leq b |C'| \log n. \] A problem $L$ has a **polynomial-time approximation scheme (PTAS)** if it has a polynomial-time $(1+\varepsilon)$-approximation algorithm, for any fixed $\varepsilon > 0$ (this value can appear in the running time). 0/1 Knapsack has a PTAS, with a running time that is $O(n^3/\varepsilon)$. Backtracking and Branch-and-Bound Back to NP-Completeness Many hard problems are NP-Complete - But we still need solutions to them, even if they take a long time to generate Backtracking and Branch-and-Bound are two design techniques that have proven promising for dealing with NP-Complete problems - And often these find "good enough" solutions in a reasonable amount of time Backtracking A backtracking algorithm searches through a large set of possibilities in a systematic way: - Typically optimized to avoid symmetries in problem instances (think, e.g., TSP) - Traverses search space in manner such that "easy" solution is found if one exists. Technique takes advantage of an inherent structure possessed by many NP-Complete problems. NP-Complete Problem Structure Recall: acceptance of solution for instance $x$ of NP-complete problem can be verified in polynomial time. 
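Before moving on to backtracking in earnest, the greedy set-cover algorithm whose analysis just concluded can be sketched compactly; the instance below is made up:

```python
# Greedy set cover, as in SetCoverApprox: repeatedly pick the set
# covering the most still-uncovered elements.
sets = {
    "S1": {1, 2, 3, 4},
    "S2": {3, 4, 5},
    "S3": {5, 6, 7},
    "S4": {1, 6, 7, 8},
}
U = set().union(*sets.values())   # the universe of elements

cover, covered = [], set()
while covered != U:
    # Select the set with the maximum number of uncovered elements.
    best = max(sets, key=lambda s: len(sets[s] - covered))
    cover.append(best)
    covered |= sets[best]

print(cover)
```

On this instance the greedy choice happens to be optimal (no two of the four sets cover all eight elements); in general the analysis above only guarantees a factor of $O(\log n)$.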
Verification typically involves checking a set of choices (the output of the choice function) to see whether they satisfy a formula or demonstrate a successful configuration. - E.g., values assigned to Boolean variables, vertices of a graph to include in a special set, items to go in a knapsack. Backtracking - Systematically searches for a solution to a problem - Traverses through possible "search paths" to locate solutions or dead ends - Configuration at end of such a path consists of a pair \((x, y)\) - \(x\) is the remaining problem to be solved - \(y\) is the set of choices that have been made to get to this subproblem from the original problem instance Backtracking - Start search with the pair $(x, \phi)$, where $x$ is the original problem instance. - Any time that the backtracking algorithm discovers a configuration $(x, y)$ that cannot lead to a valid solution, it cuts off all future searches from this configuration and backtracks to another configuration. Backtracking Pseudocode Algorithm Backtrack(x): Input: A problem instance x for a hard problem Output: A solution for x or “no solution” if none exists F ← { (x, ∅) }. \{ F is the “frontier” set of subproblem configurations \} while F ≠ ∅ do select from F the most promising configuration (x, y) expand (x, y) by making a small set of additional choices let (x₁, y₁), (x₂, y₂), ... (xₖ, yₖ) be the set of new configurations. for each new configuration (xᵢ, yᵢ) do perform a simple consistency check on (xᵢ, yᵢ). if the check returns “solution found” then return the solution derived from (xᵢ, yᵢ) if the check returns “dead end” then discard the configuration (xᵢ, yᵢ) \{ Backtrack \} else F ← F ∪ { (xᵢ, yᵢ) } \{ (xᵢ, yᵢ) starts a promising search path \} return “no solution” Backtracking: Required Details - Define a way of selecting the most promising candidate configuration from the frontier set F. - If F is a stack, we get a depth-first search of the solution space. F a queue gives a breadth-first search, etc.
- Specify way of expanding a configuration \((x,y)\) into subproblem configurations - This process should, in principle, be able to generate all feasible configurations, starting from \((x,\phi)\) - Describe how to perform a simple consistency check for configuration \((x,y)\) that returns "solution found", "dead end", or "continue" Example: CNF-SAT - **Problem:** Given Boolean formula $S$ in CNF, we want to know if $S$ is satisfiable - **High level algorithm:** - Systematically make tentative assignments to variables in $S$ - Check whether assignments make $S$ evaluate immediately to 0 or 1, or yield a new formula $S'$ for which we can continue making tentative value assignments - **Configuration in our algorithm is a pair** ($S', y$) where - $S'$ is a Boolean formula in CNF - $y$ is assignment of values to variables NOT in $S'$ such that making these assignments to $S$ yields the formula $S'$ Example: CNF-SAT (details) - Given frontier $F$ of configurations, the most promising subproblem $S'$ is the one with the smallest clause. - This formula would be the most constrained of all formulas in $F$, so we expect it to hit a dead end most quickly (if it’s going to hit a dead end). - To generate new subproblem from $S'$, locate smallest clause in $S'$, pick a variable $x_i$ that appears in that clause, and create two new subproblems associated with assigning $x_i=0$ and $x_i=1$ respectively. Example: CNF-SAT (details) - Processing $S'$ for consistency check for assignment of a variable $x_i$ in $S'$ - First, reduce any clauses containing $x_i$ based on the value assigned to $x_i$ - If this reduction results in new single literal clause ($x_j$ or $\neg x_j$) we assign value to $x_j$ to make single literal clause satisfied. 
Repeat the process until there are no single-literal clauses - If we discover a contradiction (clauses $x_i$ and $\neg x_i$, or an empty clause) we return “dead end” - If we reduce $S'$ all the way to the constant 1, return “solution found” along with the assignments made to reach this - Else, derive a new subformula $S''$ such that each clause has at least two literals, along with the value assignments that lead from $S$ to $S''$ (this is the reduce operation) Example: CNF-SAT (details) - Placing this in the backtracking template results in an algorithm that has exponential worst-case run time, but usually does OK - In fact, if each clause has no more than two literals, this algorithm runs in polynomial time. Branch-and-Bound - Backtracking is not designed for optimization problems, where in addition to having some feasibility condition that must be satisfied, we also have a cost $f(x)$ to be optimized - We’ll assume it is to be minimized - We can extend backtracking to work with optimization problems (the result is Branch-and-Bound) - When a solution is found, we continue processing to find the best solution - Add a scoring mechanism to always choose the most promising configuration to explore in each iteration (best-first search) In addition to the details required for the backtracking design pattern, we add one more to handle optimization: For any configuration \((x,y)\) we assume we have a lower bound function, \(lb(x,y)\), that returns a lower bound on the cost of any solution that is derived from this configuration. - The only requirement for \(lb(x,y)\) is that it must be less than or equal to the cost of any derived solution. - Of course, a more accurate lower bound function results in a more efficient algorithm. Algorithm Branch-and-Bound($x$): **Input**: A problem instance $x$ for a hard optimization (minimization) problem **Output**: An optimal solution for $x$ or “no solution” if none exists $F \leftarrow \{(x, \emptyset)\}$. {Frontier set of subproblem configurations}. $b \leftarrow (+\infty, \emptyset)$.
{Cost and configuration of current best solution}. while $F \neq \emptyset$ do select from $F$ the most promising configuration $(x, y)$ expand $(x, y)$ yielding new configurations $(x_1, y_1), (x_2, y_2), \ldots (x_k, y_k)$ for each new configuration $(x_i, y_i)$ do perform a simple consistency check on $(x_i, y_i)$. if the check returns “solution found” then if the cost $c$ of the solution for $(x_i, y_i)$ beats $b$ then $b \leftarrow (c, (x_i, y_i))$ else discard the configuration $(x_i, y_i)$ if the check returns “dead end” then discard the configuration $(x_i, y_i)$ {Backtrack} else if $lb(x_i, y_i)$ is less than the cost of $b$ then $F \leftarrow F \cup \{(x_i, y_i)\}$ {$(x_i, y_i)$ starts a promising search path} else discard the configuration $(x_i, y_i)$ {A “bound” prune} return $b$ Branch-and-Bound Alg. for TSP Problem: optimization version of TSP - Given weighted graph G, want least cost tour - For edge e, let c(e) be the weight (cost) of edge e We compute, for each edge e = (v,w) in G, the minimum cost path that begins at v and ends at w while visiting all other vertices of G along the way. We generate the path from v to w in G – {e} by augmenting a current path by one vertex in each loop of the branch-and-bound algorithm. Branch-and-Bound Alg. for TSP - After having built a partial path P, starting, say, at v, we only consider augmenting P with vertices not in P. - We can classify a partial path P as a "dead end" if the vertices not in P are disconnected in G – {e}. - We define the lower bound function to be the total cost of the edges in P plus c(e). - Certainly a lower bound for any tour built from e and P. Branch-and-Bound Alg. for TSP - After having run the algorithm to completion for one edge of $G$, we can use the best path found so far over all tested edges, rather than restarting the current best solution $b$ at $+\infty$. - Run time can still be exponential, but we eliminate a considerable amount of redundancy. 
- There are other heuristics for approximating TSP. We won’t discuss them here.
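The CNF-SAT backtracking scheme described above — unit propagation plus branching on a variable from a smallest clause — amounts to a miniature DPLL-style solver. A sketch, with our own clause representation (a clause is a set of integer literals; a negative integer is a negated variable):

```python
def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, shrink the rest.
    Returns None on an empty clause (a contradiction / dead end)."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied
        reduced = c - {-lit}
        if not reduced:
            return None                   # dead end
        out.append(reduced)
    return out

def solve(clauses, assignment):
    # Unit propagation: single-literal clauses force a value.
    while clauses and any(len(c) == 1 for c in clauses):
        (lit,) = min(clauses, key=len)
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None
        assignment = {**assignment, abs(lit): lit > 0}
    if not clauses:
        return assignment                 # solution found
    # Branch on a variable from a smallest (most constrained) clause.
    var = abs(next(iter(min(clauses, key=len))))
    for value in (True, False):
        lit = var if value else -var
        reduced = simplify(clauses, lit)
        if reduced is not None:
            result = solve(reduced, {**assignment, var: value})
            if result is not None:
                return result
    return None                           # backtrack

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
print(solve([{1, 2}, {-1, 3}, {-2, -3}], {}))
```

As noted above, the worst case is still exponential, but unit propagation alone solves many practical instances quickly, and on 2-CNF formulas this style of procedure runs in polynomial time.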
**Abstract**

Machine learning is an important, rapidly developing research area of computer science. This is due to the growth of the database industry, advances in data analysis and research, and the resulting market need for methods capable of extracting valuable knowledge from large data stores. Using the concepts of machine learning, we have applied it to the strategic game Peg Solitaire. After analysis, we use a decision tree, backtracking, and a pagoda function. Patterns are discovered, and further moves are taken in the game. It shows that ML is a well-suited approach for a strategic game.

**Introduction**

Artificial intelligence (AI) plays a vital role in the evolution of computing. Over the past few decades, a large body of research has been done on AI and games. Automated programs now exist that perform very well in classic board games such as Chess, Scrabble, Pentago, and Othello. This research has driven remarkable progress in machine learning methods, search algorithms, and computer hardware[1]. In recent years, AI researchers have focused on complex strategy simulation games, which offer new challenges including asynchronous gameplay, partially observable environments, comparatively large decision spaces with changing levels of abstraction, and the need for resource management processes[2]. Games with smaller decision spaces require less sophisticated representations and reasoning capabilities than games with large decision spaces[5]. Most single-player and multi-player video games are widely distributed with a built-in artificial-intelligence player that lets humans play against the computer. Building such players is complicated because the AI player has to be challenging, yet the game still has to be winnable by the human player.
Creating an AI system for a game requires contributions from many different research areas, such as machine learning, to produce a realistic system, rather than just hardware resources. Games are used as a platform for human-level AI systems[3]. "The Machine Learning field evolved from the broad field of Artificial Intelligence, which aims to imitate intelligent abilities of humans by machines." The important issue in the field of Machine Learning is how to make machines able to "learn." Machine learning tasks are typically classified into three broad categories: Supervised Learning, Unsupervised Learning, and Reinforcement Learning[4]. Here we deal with reinforcement learning, in which a computer program interacts with a dynamic environment in which it must achieve a certain goal, without precise information about whether it is close to its target[5]. Game playing is a leading application area of AI research in general and of machine learning in particular. Here, we focus on the classic board game Peg Solitaire and apply the concepts of machine learning.

**Rules for the Game**

Peg Solitaire is a well-known puzzle, which can prove difficult despite its simple rules. Pegs are arranged on a board; one hole remains empty, which is the initial state of the game[6]. By making strategically wise moves, pegs are gradually removed until no further moves are possible or some goal state is reached. This article considers the English variant: a cross-shaped board with 33 holes (32 pegs)[7]. Peg Solitaire is played on a board that is an n-by-m grid of holes, each of which is either valid or invalid, as shown in Figure 1. A valid hole may contain a peg; if it does, it is filled, and otherwise it is empty. A configuration of a board is determined by the subset of its holes that are filled. Thus, a board with H holes has $2^H$ possible configurations.
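The board/configuration model above can be sketched in Python. This is a minimal sketch: the cross layout follows the English variant described in this paper, a configuration is the set of filled holes, a jump is taken to be legal when the first two of three consecutive in-line holes are filled and the third is empty, and all names are illustrative.

```python
# Valid holes of the English board: a 7x7 grid minus the four 2x2 corners.
HOLES = {
    (r, c)
    for r in range(7)
    for c in range(7)
    if r in (2, 3, 4) or c in (2, 3, 4)
}

# A configuration is the set of filled holes; central start = all but (3, 3).
START = HOLES - {(3, 3)}

DIRS = {"EAST": (0, 1), "WEST": (0, -1), "NORTH": (-1, 0), "SOUTH": (1, 0)}

def jump_holes(p, q, direction):
    """The three consecutive holes that a jump (p, q, dir) involves."""
    dr, dc = DIRS[direction]
    return [(p + i * dr, q + i * dc) for i in range(3)]

def is_legal(config, p, q, direction):
    """Legal iff all three holes exist, the first two are filled, the third empty."""
    a, b, c = jump_holes(p, q, direction)
    return (a in HOLES and b in HOLES and c in HOLES
            and a in config and b in config and c not in config)

# Geometrically possible directed jumps (all three holes on the board):
possible = [(p, q, d) for (p, q) in HOLES for d in DIRS
            if all(h in HOLES for h in jump_holes(p, q, d))]

print(len(HOLES))     # 33 holes, so 2**33 raw configurations
print(len(possible))  # 76 directed jump positions
print(is_legal(START, 1, 3, "SOUTH"))  # jump into the empty centre: True
```

Counting the directed jump positions this way independently confirms the figure of 76 possible jumps given later in the paper.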
In Peg Solitaire, a player changes the configuration of the board via jumps. A jump involves three (horizontally or vertically) consecutive holes, of which the first is called the starting hole. More specifically, for a given board, a jump can be represented as a triplet (p, q, dir), where p and q are the row and column of the jump's starting hole (indexed from 0) and dir is the direction of the jump (EAST, NORTH, SOUTH, or WEST)[6]. A jump is legal for a particular configuration if it involves three consecutive holes of which the first and second are filled and the third is empty[9].

Figure 1: The Solitaire board and first and last states of central Solitaire.

**Related Work**

Nowadays, computer games and their automation are among the most demanding fields, requiring continual advancement. Gaming is a stable application area and may thus see a deeper intellectual impact from evaluated interface specifications[7]. Several notable works exist on machine learning, and reinforcement learning (RL) in particular, in gaming. RL in games is a popular application area for AI research and has been used in a variety of settings, in both classic games and video games. One line of work describes the implementation of a simple, very specialized RL-based method for micro-managing combat, focusing on the size of the state space, its limits, and the efficiency of the learning agent[8]. Strategy games are an important and challenging subclass of video games. ML has been used significantly in the turn-based strategy game Civilization IV, in which the player chooses a leader from the given options, each with a different strategic style, and must make decisions so as to win the game.
The player builds cities, trains workers and military units, and interacts with other human or AI players. Depending on the situation, the player has to change strategy and even leaders. The aim of these games is to make the best use of limited resources to defeat the opposing players. In such strategic play, ML provides a powerful tool that allows agents to adapt their strategies and act quickly, even in critical scenarios when a player is about to lose the game. This work used Markov Decision Processes as the general framework[9]. Winning a strategic game requires strong reasoning capability, strong decision-making, and techniques for overcoming challenging states. This calls for an algorithm that learns the optimal behavior through experience. As a solution, a lattice, a kind of decision tree, has been developed for representing and interacting with abstract states in a real-time strategy game, and it can be incorporated into many board games. Case-Based Reasoning (CBR) is also a popular approach in strategic games[10]. Several decision techniques, such as Markov Decision Processes and integrated scheduling, have been implemented in the logic of many games. The CBR technique is used to select action sequences; however, this approach limits the set of actions that can be chosen for a new state. Pentago is a board game that is deterministic, strategic, and unsolved[11], and is therefore suitable for machine learning; it explores the computing power of machine learning algorithms. This game is similar to Peg Solitaire. The ability of a computer to solve such problems with different techniques, such as a brute-force approach, is remarkable. Games like Pentago often contain too many choices at each step, so calculating the best move in a reasonable amount of time becomes a challenging job for ML. Peg Solitaire is a well-known puzzle which can prove difficult despite its simple rules.
Modeling Peg Solitaire presents a considerable challenge. Jefferson et al. solve Peg Solitaire using CP and OR techniques[12]. The problem is interesting because it is highly symmetric. AI planning, and ML more generally, tackles its sequential nature more naturally, which includes solving the problem with unknown parameters. It shows the ability of ML to develop a complex system that applies different decision strategies to a large variety of cases in the game.

**Solving the Game**

In Peg Solitaire, to reach a completed game state the player has to follow many rules, and to win the game he has to adopt a strategy. The initial state is the primary stage of the game, where all the pegs are in their places and just one hole is empty. A description of the possible actions that can be taken during gameplay is given, with the associated data and rule runners. The set of moves is a function of the current situation of a search agent traversing the problem space, and this is handled by a decision tree. After pegs are removed one by one, many different possibilities arise, and each position is evaluated for further moves. Various patterns are possible for a single peg position; only a few of them lead towards a solution. We check the moves given by the player against the decision tree. We also use the concept of backtracking, which means going back to a previous state because of some undesirable condition. The collection of state-action pairs is known as the state transition model; it describes the success or failure of the game after each move. The last or final state is the goal state, in which only one peg remains on the board. To solve this problem algorithmically and find an optimal solution, we consider the Pagoda function. With the help of the Pagoda function, we can find a solution with fewer moves.
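The Pagoda function can be sketched as a small validity checker. This is a hedged sketch with illustrative weights: the defining condition is that for adjacent in-line positions x, y, z the values satisfy x + y >= z, so no legal jump can ever increase the board's total value.

```python
# English board: a 7x7 grid minus the four 2x2 corners.
HOLES = {(r, c) for r in range(7) for c in range(7)
         if r in (2, 3, 4) or c in (2, 3, 4)}
DIRS = [(0, 1), (0, -1), (-1, 0), (1, 0)]

def triples():
    """All directed lines (x, y, z) of three consecutive holes on the board."""
    for (r, c) in HOLES:
        for dr, dc in DIRS:
            line = [(r + i * dr, c + i * dc) for i in range(3)]
            if all(h in HOLES for h in line):
                yield line

def is_pagoda(weight):
    """Valid iff weight(x) + weight(y) >= weight(z) for every jump x over y to z,
    which guarantees the total pagoda value never increases during play."""
    return all(weight[x] + weight[y] >= weight[z] for x, y, z in triples())

def pagoda_value(weight, config):
    """Sum of the weights of the positions that currently hold a peg."""
    return sum(weight[h] for h in config)

# The simplest pagoda function weights every hole 1: its value is just the
# peg count, which drops by exactly one on every jump.
ones = {h: 1 for h in HOLES}
print(is_pagoda(ones))  # True

# A weighting that violates the condition is rejected:
bad = dict(ones)
bad[(3, 3)] = 5   # centre worth 5: a jump into the centre would gain value
print(is_pagoda(bad))  # False
```

Because a valid pagoda value never increases, any board position whose value is already below that of the goal position can be pruned from the search.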
We use the Pagoda function to guide the players. Each position on the board is given a value. The values are assigned in a linear way such that for any three positions x, y, z that are adjacent and in one line, the condition x + y >= z is satisfied. The pagoda value of a given state is computed by summing the values of the board positions that contain a peg. A further condition is that the middle value must be greater than the third, i.e., y > z. Recall that if x, y, z are in a line, then to make a move, z must be empty and x and y must not be; the peg at position x is removed and placed at position z. This shows that the value of the Pagoda function never increases.\textsuperscript{[15]} Infinitely many pagoda functions exist. Here, the bulk of the experimental evaluation is concerned with Solitaire reversals, that is, beginning with a particular single-hole position on the board and ending with only a single peg at the same position.\textsuperscript{[4]} Peg Solitaire is a deterministic game, so after applying the algorithm we can say whether the player will win or lose.

**Architecture and Algorithm**

Here we propose an algorithm for our system which uses the concepts of decision tree, backtracking, and pruning to solve the Peg Solitaire problem. The algorithm searches for a solution from a given start configuration by making a legal jump j from the start, after which it recursively searches for a solution from the new configuration. The peg makes a move when a legal jump is possible; otherwise the algorithm backtracks. Since the number of pegs (filled holes) on the board is reduced by one after every jump, the algorithm stops recursing when start and final have the same number of pegs. We develop a decision tree accordingly and prune it for a better search space. This tree is stored in the repository.

FindSolution(Start, Final, Path)
1. if Start.numpegs <= Final.numpegs, return (Start = Final)
2. else consider each jump j in (rows x cols x {N, E, S, W})
3. if j is a legal jump for Start, then makeMove(j) and Path.push(j); else the jump is illegal
4. makeReverseMove(j) to backtrack
5. repeat steps 3-4 until Final.pegposition = Inverse(Start.pegposition)

Here we propose a general architecture for strategic rule-based games, which we later use as a reference for our own architecture.

**Fig. 3: General gaming architecture.**

This is the most widely used architecture for games. It starts with a data repository, where all the data, such as game implementation programs, system libraries, and other information, is stored. A Trainer component runs training sessions, offering trial games to new players. A rule runner keeps track of the standards of the game: for every move the rules are considered, and if anything goes wrong, the rule runner checks the move against the rule repository, which contains all the rules of the game. Next is the set of algorithms to be executed, and a rule-management tool handles the generated rules. This is a general game architecture for any strategic play. For our Peg Solitaire game, we give the following architecture.

**Fig. 4: Architecture for Peg Solitaire.**

Our system handles a move as shown in this architecture. A sequence of actions is taken after a move is made; the user is unaware of this. This is where machine learning works: the reinforcement algorithm is carried out here. According to it, a peg is moved. The rule runner checks whether the move is valid. If it is, processing continues to the next step; otherwise, the move is backtracked, i.e., rejected. If the move is valid, it is passed to the rule runner.
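The FindSolution procedure above can be made runnable as a depth-first backtracking search. This sketch uses a tiny illustrative 1x4 board so the recursion is easy to trace; the full 33-hole search is the same recursion with pruning.

```python
DIRS = {"E": 1, "W": -1}

def legal_jumps(config, size):
    """All (p, dir) jumps: first two of three consecutive holes filled, third empty."""
    for p in range(size):
        for d, step in DIRS.items():
            a, b, c = p, p + step, p + 2 * step
            if 0 <= c < size and a in config and b in config and c not in config:
                yield p, d

def make_move(config, p, d):
    """Apply a jump: empty the first two holes, fill the third."""
    step = DIRS[d]
    return (config - {p, p + step}) | {p + 2 * step}

def find_solution(config, size, path):
    """Depth-first search until a single peg remains; backtrack on dead ends."""
    if len(config) == 1:
        return list(path)
    for p, d in legal_jumps(config, size):
        path.append((p, d))
        solution = find_solution(make_move(config, p, d), size, path)
        if solution is not None:
            return solution
        path.pop()                  # undo the move (makeReverseMove)
    return None                     # dead end: backtrack

start = {0, 1, 3}                   # board [peg, peg, hole, peg]
print(find_solution(start, 4, []))  # [(0, 'E'), (3, 'W')]
```

The first jump (0, 'E') leaves pegs at holes 2 and 3; the second jump (3, 'W') leaves a single peg at hole 1, so the search returns a two-move solution.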
The rule runner first checks the move against the decision tree and then against the Pagoda function, as discussed above. If the move satisfies both, it is an optimal move and can lead to a successful game; if not, it is not optimal, and it may or may not lead to a successful play, so we can backtrack from such a move. Each time the player makes a decision, the algorithm is adapted for that specific move, as in machine learning. Machine learning can be summed up as representation, evaluation, and optimization: the move is first represented, then evaluated with the decision tree, and then optimized with the Pagoda function.

**Possible Solutions**

We usually play this game without any particular strategy, so we cannot repeat the moves later. The game looks very easy to play, but taking all the rules into account it is a very hard strategic game.\textsuperscript{[14]} The holes are numbered as follows:

\begin{center}
\begin{tabular}{c}
01 02 03 \\
04 05 06 \\
07 08 09 10 11 12 13 \\
14 15 16 17 18 19 20 \\
21 22 23 24 25 26 27 \\
28 29 30 \\
31 32 33 \\
\end{tabular}
\end{center}

Solution 1: 15--17, 28--16, 21--23, 07--21, 16--28, 31--23, 24--22, 21--23, 26--24, 23--25, 32--24--26, 33--25, 26--24, 12--26, 27--25, 13--27, 24--26, 27--25, 10--12, 25--11, 12--10, 03--11, 10--12, 08--10, 01--09--11, 02--10, 17--05, 12--10, 05--17.

This 33-hole solitaire game starts with 32 pegs, so 31 jumps are needed to reach the solution, because 31 pegs must be removed. A double jump such as (32--24--26) counts as a single move, so a solution can have fewer than 31 moves. The board is symmetric vertically, horizontally, and diagonally; therefore more solutions are possible.

Solution 2: the shortest solution to this problem has 18 moves, which set a world record.
15--17, 28--16, 21--23, 24--22, 26--24, 33--25, 18--30, 31--33--25, 09--23, 01--09, 06--18--30--28--16--04, 07--21--23--25, 13--11, 10--12, 27--13--11, 03--01--09, 08--10--12--26--24--10, 05--17.\textsuperscript{[16]}

Peg Mobility: As shown in the figure (three-colored class classification), if a peg is at any position in the red part, jumps are possible in all directions; two jumps are possible from the blue field and one from the green field. In total, 76 jumps are possible. A second classification (four-colored class classification) is also defined, in which after any move a peg always remains in the same colored class.

**Rule Runner for Beginners**

The most frequently repeated patterns are shown in the figure. In all cases, only one peg must be left over. These problems are very useful for beginners learning this game.

Pattern Solutions:
1) Submarine: 10--24, 15--17, 24--10, 19--17, 10--24.
2) Greek Cross: 10--02, 24--10, 19--17--05, 02--10, 15--17, 10--24, 29--17.

Dead-End Patterns: Starting with 32 pegs, on the way to completing the game we may reach patterns of the following types, which lead to a dead end.

Solutions:
2 squares: 19--17, 16--18, 14--16, 05--17--15, 04--16--14, 29--17, 18--16, 06--18, 07--09, 13--11, 18--06, 16--04, 01--09, 03--11, 30--18, 28--16, 21--23, 27--25, 16--28, 18--30, 33--25, 31--23.
Avenue: 19--17, 30--18, 17--19, 20--18, 27--25, 15--17, 04--16, 17--15, 07--09, 14--16, 06--04, 18--06, 03--11, 01--03, 28--30, 16--28, 25--23, 33--25, 31--33, 23--31, 21--23, 09--01, 11--09, 13--11.
Pair: 19--17, 30--18, 17--19, 06--18, 13--11, 27--13, 26--12, 18--06, 03--11, 01--03, 04--06, 16--04, 11--09, 13--11, 08--10, 11--09, 03--11, 22--08--10, 11--09, 04--16, 24--22, 21--23, 07--21, 32--24--22, 31--23, 16--28, 21--23--31.
4 squares: 29--17, 26--24, 17--29, 32--24, 23--25, 31--23, 22--24, 25--23, 33--25, 08--22--24, 10--08, 07--09, 21--07, 02--10--08, 07--09, 12--10, 03--11, 24--26, 27--25, 19--17, 10--24, 13--27, 24--26, 27--25, 09--23, 01--09.

Worst (shortest) possible pattern: it reaches a dead end within only six moves.
Pattern: 04--16, 23--09, 14--16, 17--15, 19--17, 31--23.

**Conclusion**

Machine learning is an important, rapidly developing research area of computer science. This is due to the growth of the database industry, advances in data analysis and research, and the resulting market need for methods capable of extracting valuable knowledge from large data stores. With the concepts of ML, we have tried to solve the strategic game Peg Solitaire. After analysis, we used a decision tree, backtracking, and a pagoda function. Patterns are discovered, and further moves are taken in the game. It shows that ML is a well-suited approach for strategic play.

**References**
{"Source-Url": "http://www.ijptonline.com/wp-content/uploads/2017/01/25648-25657.pdf", "len_cl100k_base": 4384, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 34083, "total-output-tokens": 6103, "length": "2e12", "weborganizer": {"__label__adult": 0.0016469955444335938, "__label__art_design": 0.0013866424560546875, "__label__crime_law": 0.0026378631591796875, "__label__education_jobs": 0.00408935546875, "__label__entertainment": 0.00087738037109375, "__label__fashion_beauty": 0.0010213851928710938, "__label__finance_business": 0.0012521743774414062, "__label__food_dining": 0.0026454925537109375, "__label__games": 0.2081298828125, "__label__hardware": 0.003692626953125, "__label__health": 0.00286865234375, "__label__history": 0.0017518997192382812, "__label__home_hobbies": 0.0005097389221191406, "__label__industrial": 0.0024929046630859375, "__label__literature": 0.0014333724975585938, "__label__politics": 0.0011701583862304688, "__label__religion": 0.0014934539794921875, "__label__science_tech": 0.370849609375, "__label__social_life": 0.00034427642822265625, "__label__software": 0.01334381103515625, "__label__software_dev": 0.370849609375, "__label__sports_fitness": 0.0031337738037109375, "__label__transportation": 0.0015459060668945312, "__label__travel": 0.0007238388061523438}, "weborganizer_max": "__label__science_tech", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20320, 0.10377]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20320, 0.48401]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20320, 0.90715]], "google_gemma-3-12b-it_contains_pii": [[0, 2106, false], [2106, 4510, null], [4510, 7230, null], [7230, 10248, null], [10248, 11912, null], [11912, 13283, null], [13283, 15552, null], [15552, 16510, null], [16510, 18847, null], [18847, 20320, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2106, true], 
[2106, 4510, null], [4510, 7230, null], [7230, 10248, null], [10248, 11912, null], [11912, 13283, null], [13283, 15552, null], [15552, 16510, null], [16510, 18847, null], [18847, 20320, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20320, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20320, null]], "pdf_page_numbers": [[0, 2106, 1], [2106, 4510, 2], [4510, 7230, 3], [7230, 10248, 4], [10248, 11912, 5], [11912, 13283, 6], [13283, 15552, 7], [15552, 16510, 8], [16510, 18847, 9], [18847, 20320, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20320, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
a99fd578d2796a548296700a2c3752295a70e45a
A Possible Protocol Plateau

I. Introduction

We have been engaged in two activities since the network meeting of March 17, 1970 and, as promised, are reporting our results. First, we have considered the various modifications suggested from all quarters and have formed preferences about each of these. In Section II we give our preferences on each issue, together with our reasoning. Second, we have tried to formalize the protocol and algorithms for the NCP; we attempted to do this with very little specification of a particular implementation. Our attempts to date have been seriously incomplete but have led to a better understanding. We include here only a brief sketch of the structure of the NCP. Section III gives our assumptions about the environment of the NCP, and in Section IV the components of the NCP are described.

II. Issues and Preferences

In this section we try to present each of the several questions which have been raised in recent NWG/RFC's and in private conversations, and for each issue we suggest an answer or policy. In many cases, good ideas are rejected because in our estimation they should be incorporated at a different level.

A. Double Padding

As BBN report #1822 explains, the Imp side of the Host-to-Imp interface concatenates a 1 followed by zero or more 0's to fill out a message to an Imp word boundary and yet preserve the message length. Furthermore, the Host side of the Imp-to-Host interface extends a message with 0's to fill out the message to a Host word boundary. BBN's mechanism works fine if the sending Host wants to send an integral number of words, or if the sending Host's hardware is capable of sending partial words. However, in the event that the sending Host wants to send an irregular-length message and its hardware is only capable of sending word-multiple messages, some additional convention is needed. One of the simplest solutions is to modify the Imp side of the Host-to-Imp interface so that it appends only 0's.
This would mean that the Host software would have to supply the trailing 1. BBN rejected the change because of an understandably strong bias against hardware changes. It was also suggested that a five instruction patch to the Imp program would remove the interface supplied 1, but this was also rejected on the grounds that it seemed more secure to depend only upon the Host hardware to signal message end, and not to depend upon the Host software at all. Two other solutions are also available. One is to have "double padding", whereby the sending Host supplies 10* and the network also supplies 10*. Upon input, a receiving Host then strips the trailing 10*10*. The other solution is to make use of the marking. Marking is a string of the form 0*1 inserted between the leader and the text of a message. The original intent of marking was to extend the leader so that the sending Host could _begin_ its text on a word boundary. It is also possible to use the marking to expand a message so that it _ends_ on a word boundary. Notice that double padding could replace marking altogether by abutting the text beginning against the leader. For 32 bit machines, this is convenient and marking is not, while for other lengths, particularly 36 bit machines, marking is much more convenient than double padding. We have no strong preference, partially because we can send word fragments. Shoshani, et al in NWG/RFC #44 claim that adjusting the marking does not cause them any problems, and they have a 32 bit machine. Since the idea of marking has been accepted for some time, we suggest that double padding not be used and that marking be used to adjust the length of a message. We note that if BBN ever does remove the 1 from the hardware padding, only minimal change to Host software is needed on the send side. A much prettier (and more expensive) arrangement was suggested by W. Sutherland.
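The padding convention above (append a 1 and then 0's out to the word boundary; the receiver strips trailing 0's and the final 1) can be sketched as follows. Messages are modelled as bit strings and the helper names are illustrative, not part of the protocol.

```python
def network_pad(bits: str, word: int) -> str:
    """Imp-side Host-to-Imp padding: append a 1, then 0's to the next word boundary."""
    bits += "1"
    return bits + "0" * (-len(bits) % word)

def strip_pad(bits: str) -> str:
    """Receiver side: drop trailing 0's, then the final 1, recovering the message."""
    bits = bits.rstrip("0")
    assert bits.endswith("1"), "no padding marker found"
    return bits[:-1]

print(network_pad("10110", 8))             # '10110100'
print(strip_pad(network_pad("10110", 8)))  # '10110'
```

Because the 1 marks the exact end of the message, the round trip preserves the message length even when the original already fell on a word boundary, which is the property that padding with 0's alone cannot provide.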
He suggested that the Host/Imp interfaces be smart enough to strip padding or marking and might even parse the message upon input. B. Reconnection A very large population of networkers has beat upon us for including dynamic reconnection in the protocol. We felt it might be of interest to relate how it came to be included. After considering connections and their uses for a while, we wondered how the mechanism of connections compared to existing forms of intra-Host interprocess communication. Two aspects are of interest, what formalisms have been presented in the literature, and what mechanisms are in use. The formalisms are interesting because they lead to uniform implementations and parsimonious design. The existing mechanisms are interesting because they point out which problems need solving and sometimes indicate what an appropriate formalism might be. In particular, we have noticed that the mechanisms for connecting a console to the logger upon dial in, the mechanisms for creating a job, and the mechanisms for passing a console around to various processes within a job tend to be highly idiosyncratic and distinct from all other structures and mechanisms within an operating system. With respect to the literature, it appears there is only one idea with several variations, viz processes should share a portion of their address spaces and cooperatively wake up each other. Semaphores and event channels are handy extensions of wake up signals, but the intent is basically the same. (Event channels could probably function as connections, but it seems not to be within their intended use. In small systems, the efficiency and capacity of event channels are inversely related.) With respect to existing implementations, we note that several systems allow a process to appear to be a file to another process. Some systems, e.g. the SDS-940 at SRI impose a master/slave relationship between two processes so connected, but other systems provide for a coequal relationship e.g. 
the AI group's PDP-6 system at MAC. The PDP-6 system also has a feature whereby a superior process can "surround" an inferior process with a mapping from device and file names to other device and file names. Consoles have nearly the same semantics as files, so it is quite reasonable for an inferior process to believe it is communicating with the console but in fact be communicating with another process. The similarity between network connections and existing sequential interprocess connections supports our belief that network connections are probably the correct structure for using the network. Moreover, the structure is clean enough and compatible with enough machines to pass as a formalism or theory, at least to the extent of the other forms of interprocess communication presented in the literature. Any new formalism, we believe, must meet at least the following two tests: 1. What outstanding problems does it solve? 2. Is it closed under all operations? In the case of network connections, the candidates for the first are the ones given above, i.e. all operations involving connecting a console to a job or a process. Also of interest are the modelling of sequential devices such as tape drives, printers and card readers, and the modeling of their buffering (spooling, symbiont) systems. The second question mentions closure. In applying the connection formalism to the dial-in and login procedures, we felt the need to include some sort of switching or reconnection, and an extremely mild form is presented in an SJCC paper, which is also NWG/RFC #33. This mild form permits only the substitution of AEN’s, and even then only at the time of connection establishment. However, it is a common experience that if an operation has a natural definition on an extended domain, it eventually becomes necessary or at least desirable to extend its definition. Therefore, we considered the following extensions: 1. Switching to any other socket, possibly in another Host. 2. 
Switching even after data flow has started. There is even some precedent for feeling these extensions might be useful. In one view of an operating system, we see all available phone lines as belonging to a live process known as the logger. The logger answers calls, screens users, and creates jobs and processes. One of the features of most telephone answering equipment is that many phone lines may serve the same phone number by using a block of sequential numbers and a rotary answering system. In our quest for accurate models of practical systems, we wanted to be able to provide equivalent service to network users, i.e. they should be able to call a single advertised number and get connected to the logger. Thus a prima facie case for switching is established. Next we see that after the logger interrogates a prospective user, it must connect the user to a newly created job. Data flow between the user and the logger has already commenced, so flow control has to be meshed with switching if it is desired not to lose or garble data in transit. With respect to inter-Host switching, we find it easy to imagine a utility service which is distributed throughout the network and which passes connections from one socket to another without the knowledge of the user. Also, it is similar to the more sophisticated telephone systems, to standard facilities of telephone company operators, and to distributed private systems. These considerations led us to investigate the possibility of finding one type of reconnection which provided a basis for all known models. The algorithm did not come easily, probably because of inexperience with finite state automata theory, but eventually we produced the algorithm presented in NWG/RFC #36. A short time later, Bill Crowther produced an equivalent algorithm which takes an alternate approach to race conditions. Networkers seem to have one of two reactions. 
Either it was pretty and (perhaps ipso facto) useful, or it was complex and (again perhaps ipso facto) unnecessary. The latter group was far more evident to us, and we were put into the defensive position of admitting that dynamic reconnection was only 1. pretty 2. useful for login and console passing In response to persistent criticism, we have made the following change in the protocol. Instead of calling socket <0,H,0> to login, sockets of the form <U,H,0> and <U,H,1> are the input and output sockets respectively of a copy of the logger or, if a job has been started with user id U, these sockets are the console sockets. The protocol for login is thus to initiate a connection to <U,H,0> and <U,H,1>. If user U is not in use, a copy of the logger will respond and interrogate the caller. If user id U is in use, the call will be refused. This modification was suggested by Barry Wessler recently. (Others also suggested this change much earlier; but we rejected it then.) The logger may demand that the caller be from the same virtual net, i.e. the caller may have user id U in some other Host, or it may demand that the user supply a password matched to user id U, or it may demand both. Some systems may even choose to permit anybody to login to any user id. After login, AEN’s 0 and 1 remain the console AEN’s. Each system presumably has mechanisms for passing the console, and these would be extended to know about AEN’s 0 and 1 for network users. Passing the console is thus a matter of reconnecting sockets to ports, and happens within the Host and without the network. In conversations with Meyer and Skinner after NWG/RFC #46 was received, they suggested a login scheme different from both Meyer’s and ours in the section above. Their new scheme seemed a little better and we look forward to their next note. It is generally agreed that login should be "third-level", that is, above the NCP level.
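The revised login convention can be sketched as a toy model. The class and method names below are purely illustrative (they are not part of the protocol text); the point is the decision rule: a call to sockets <U,H,0>/<U,H,1> reaches a fresh copy of the logger unless user id U already has a job, in which case the call is refused.

```ruby
# Illustrative sketch of the revised login convention.
# A call to <U,H,0>/<U,H,1> gets a logger copy if U is free,
# and is refused if a job already holds user id U.
class Host
  def initialize
    @jobs_in_use = {}   # user id U => job owning sockets <U,H,0>/<U,H,1>
  end

  # Returns :logger if a copy of the logger answers, :refused otherwise.
  def call(user_id)
    @jobs_in_use.key?(user_id) ? :refused : :logger
  end

  def start_job(user_id)
    @jobs_in_use[user_id] = :job
  end
end
```

After login, AEN's 0 and 1 simply continue to name the console sockets of the job, which is what makes console passing a purely local reconnection.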
We are beginning to be indifferent about particular login schemes; all seem ok and none impress us greatly. We suggest that several be tried. It is some burden, of course, to modify the local login procedure, but we believe it imposes no extra hardship to deal with diverse login procedures. This is because the text sequences and interrupt conventions are so heterogeneous that the additional burden of following, say, our scheme on our system and Meyer’s on Multics is minimal. We are agreed that reconnection should not be required in the initial protocol, and we will offer it later as an optional and experimental tool. In addition, we would like to be on record as predicting that general reconnection facilities will become useful and will provide a unifying framework for currently ad hoc operating system structures. C. Decoupling Connections and Links Bill Crowther (BBN) and Steve Wolfe (UCLA) independently have suggested that links not be assigned to particular connections. Instead, they suggest, include the destination socket as part of the text of the message and then send messages over any unblocked link. We discussed this question a little in NWG/RFC #37, and feel there is yet an argument for either case. With the current emphasis on simplicity, speed and small core requirements, it seems more efficient to leave links and connections coupled. We, therefore, recommend this. D. Error Reporting As mentioned by J. Heafner and E. Harslem of RAND, it is important to treat errors which might occur. A good philosophy is to guard against any input which destroys the consistency of the NCP’s data base. The specific formulation of the error command given by Heafner and Harslem in NWG/RFC #40 and by Meyer in NWG/RFC #46 seems reasonable and we recommend its adoption. Some comments are in order, however. A distinction should be made between resource errors and other types of errors. Resource errors are just the detection of overload conditions.
Overload conditions are well-defined and valid, although perhaps undesirable. Other types of errors reflect errant software or hardware. We feel that resource errors should not be handled with error mechanisms, but with mechanisms specific to the problem. Thus the <CLS> command may be issued when there is no more room to save waiting <RFC>’s. Flow control protocol is designed solely to handle buffering overload. With respect to true errors, we are not certain what the value of the <ERR> command is to the recipient. Presumably his NCP is broken, and it may only aggravate the problem to bombard it with error commands. We, therefore, recommend that error generation be optional, that all errors be logged locally in a chronological file and that <ERR> commands received likewise be logged in a chronological file. No corrective action is specified at this time. In the short time the network has been up at UCLA, we have become convinced that the network itself will generate very few errors. We have watched the BBN staff debug and test the IMP program, and it seemed that most of the errors affected timing and throughput rather than validity. Hence most errors will probably arise from broken Hosts and/or buggy NCP’s. E. Status Testing and Reporting A valuable debugging aid is to be able to get information about what a foreign NCP thinks is happening. A convenient way to do this is to permit NCP’s to send status whenever they wish, but to always have them do it whenever they receive a request. Since we view this feature as primarily a debugging tool, we suggest that a distinct link, like 255, be used. The intent is that processing of status requests and generating of status messages should use as little of the normal machinery as possible. Thus we suggest that link 255 be used to send "request status" and "status is" commands. The form follows the suggestion on page 2 of NWG/RFC #40.
Meyer’s <ECO> command is easily implemented and serves the more basic function of testing whether a foreign NCP is alive. We suggest that the length of the <ECO> command be variable, as there seems to be no significance in this context to 48 bits. Also, the value of a (presumably) 8-bit binary switch is unclear, so we recommend a pair of commands: ``` <ECO> <length> <text> ``` and ``` <ERP> <length> <text> ``` where ``` <length> is 8 bits. ``` Upon receipt of an <ECO> command the NCP would echo with the <ERP> command. F. Expansion and Experimentation As Meyer correctly points out in NWG/RFC #46, network protocol is a layered affair. Three levels are apparent so far. 1. IMP Network Protocol 2. Network Control Program Protocol 3. Special user level or Subsystem Level Protocol This last level should remain idiosyncratic to each Host (or even each user). The first level is well-specified by BBN, and our focus here is on level 2. We would like to keep level 2 as neutral and simple as possible, and in particular we agree that login protocol should be as much on level 3 as possible. Simplicity and foresight notwithstanding, there will arise occasions when the level 2 protocol should change or be experimented with. In order to provide for experimentation and change, we recommend that only link numbers 2 through 31 be assigned to regular connections, with the remaining link numbers, 32 to 255, used experimentally. We have already suggested that link 255 be used for status requests and replies, and this is in consonance with our view of the experimental aspects of that feature. We also recommend that control command prefixes from 255 downward be used for experimentation. These two conventions are sufficient, we feel, to permit convenient experimentation with new protocol among any subset of the sites. We thus do not favor inclusion of Ancona’s suggestion in NWG/RFC #42 for a message data type code as the first eight bits of the text of a message. G.
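One possible concrete encoding of the echo pair above can be sketched as follows. The memo fixes only that <length> is 8 bits; the opcode values and the choice to count the text in bytes are assumptions made here for illustration.

```ruby
# Sketch of the <ECO>/<ERP> pair: 1-byte opcode, 1-byte length, then text.
# Opcode values (0x01/0x02) and byte-counted length are assumptions.
ECO = 0x01
ERP = 0x02

def encode(op, text)
  raise "text too long for 8-bit length" if text.bytesize > 255
  [op, text.bytesize].pack("CC") + text
end

# An NCP receiving <ECO> echoes the same text back in an <ERP>;
# anything else is ignored here.
def handle(message)
  op, len = message.unpack("CC")
  return nil unless op == ECO
  encode(ERP, message.byteslice(2, len))
end
```

The variable length is what makes the command useful as a liveness probe of arbitrary size, which is the point being argued for over the fixed 48 bits.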
Multiplexing Ports to Sockets Wolfe in NWG/RFC #38 and Shoshani et al in NWG/RFC #44 suggest that it should be possible to attach more than one port to a socket. While all of our diagrams and prototypical system calls have shown a one-to-one correspondence between sockets and ports, it is strictly a matter of local implementation. We note that sockets form a network-wide name space whose sole purpose is to interface between the idiosyncratic structures peculiar to each operating system. Our references to ports are intended to be suggestive only, and should be ignored if no internal structure corresponds to them. Most systems do have such structures, however, so we shall continue to use them for illustration. H. Echoing, Interrupts and Code Conversion 1. Interrupts We had been under the impression that all operating systems scanned for a reserved character from the keyboard to interpret it as an interrupt signal. Tom Skinner and Ed Meyer of MIT inform us that Model 37 TTYs and IBM 2741s generate a "long space" of 200-500 milliseconds which is detected by the I/O channel hardware and passed to the operating system as an interrupt. The "long space" is not a character -- it has no ASCII code and cannot be program generated. Well over a year ago, we considered the problem of simulating console interrupts and rejected the <INT> type command because it didn’t correctly model any system we knew. We now reverse our position and recommend the implementation of an INTERRUPT system call and an <INT> control command as suggested by Meyer in NWG/RFC #46. Two restrictions of the interrupt facility should be observed. First, when communicating with systems which scan for interrupt characters, this feature should not be used. Second, non-console-like connections probably should not have interrupts. We recommend that systems follow their own conventions, and if an <INT> arrives for a connection on which it shouldn’t, the <INT> should be discarded and optionally returned as an error.
2. Echoing and Code Conversion We believe that each site should continue its current echoing policy and that code conversion should be done by the using process. Standardization in this area should await further development. Ancona’s suggestion of a table-driven front-end transducer seems like the right thing, but we believe that such techniques are part of a larger discussion involving higher-level languages for the network. I. Broadcast Facilities Heafner and Harslem suggest in NWG/RFC #39 a broadcast facility, i.e. <TER> and <BDC>. We do not fully understand the value of this facility and are thus disposed against it. We suspect that we would understand its value better if we had more experience with OS/360. It is probably true in general that sites running OS/360 or similar systems will find less relevance in our suggestions for network protocol than sites running time-sharing systems. We would appreciate any cogent statement on the relationship between OS/360 and the concepts and assumptions underlying the network protocol. J. Instance Numbers Meyer in NWG/RFC #46 suggests extending a socket to include an _instance_ code which identifies the process attached to the socket. We carefully arranged matters so that processes would be indistinguishable. We did this with the belief that both as a formal and as a practical matter it is of concern only within a Host whether a computation is performed by one or many processes. Thus we believe that all processes within a job should cooperate in allocating AEN’s. If an operating system has facilities for passing a console from process to process within a job, these facilities mesh nicely with the current network protocol, even within reconnection protocol; but instance numbers interfere with such a procedure. We suggest this matter be discussed fully because it relates to the basic philosophy of sockets and connections. Presently we recommend 40 bit socket numbers without instance codes. K. 
AEN’s Nobody, including us, is particularly happy with our name AEN for the low order 8 bits of the socket. We rejected _socket_number_, and are similarly unhappy with Meyer’s _socket_code_. The word socket should not be used as part of the field name, and we solicit suggestions. III. Environment We assume that the typical host will have a time-sharing operating system in which the cpu is shared by processes. Processes We envision that each process is tagged with a _user_number_. There may be more than one process with the same user number, and if so, they should all be cooperating with respect to using the network. We envision that each process contains a set of _ports_ which are unique to the process. These ports are used for input to or output from the process, from or to files, devices or other processes. We also envision that each process has an event channel over which it can receive very short messages (several bits). We will use this mechanism to notify a process that some action external to the process has occurred. To engage in network activity, a process _attaches_ a _local_socket_ to one of its ports. Sockets are identified by user number, host and AEN, and a socket is local to a process if their user numbers match and they are in the same host. A process need only specify an AEN when it is referring to a local socket. Each port has a status which is modified by system calls and by concurrent events outside the process. Whenever the status of a port is changed, the process is sent an event over its event channel which specifies which port’s status has changed. The process may then look at a port’s status. These assumptions are used in the descriptive material which follows. However, these assumptions are not imposed by the network protocol and the implementation suggested by section IV is in no way binding.
We wish to make very clear that this material is offered only to provide clues as to what the implementation difficulties might be and not to impose any particular discipline. For example, we treat <RFC>’s which arrive for unattached local sockets as valid and queue them. If desired, an NCP may reject them, as Meyer suggests, or it might hold them for a while and reject them if they’re not soon satisfied. The offered protocol supports all these options. Another local option is the one mentioned before of attaching multiple ports to a socket. We have shown one-one correspondence but this may be ignored. Similarly, the system calls are merely suggestive. System Calls These are typical system calls which a user process might execute. We show these only for completeness; each site will undoubtedly implement whatever equivalent set is convenient. We use the notation ``` Syscall ( arg , arg ...; val ... ) ``` where Syscall is the system call, arg etc. are the parameters supplied with the call, and val etc. are any values returned by the system call. ``` Init (P,AEN,FS,Bsiz;C) ``` P Specifies a port of the process. AEN Specifies a local socket. The user number of this process and host number of this host are implicit. FS Specifies a socket with any user number in any host, with any AEN. Bsiz Specifies the amount of storage in bits the user wants to devote to buffering messages. C The condition code returned. Init attempts to attach the local socket specified by AEN to the port P and to initiate a connection with socket FS. Possible returned values of C are C = ok The Init was legal and the socket FS is being contacted. When the connection is established or when FS refuses, the process will receive an event. C = busy The local socket was in use by a port on this or some other process with the same user number. No action was taken. C = homosex The AEN and FS were either both send or both receive sockets. C = nohost The host designated within FS isn’t known.
C = bufbig Bsiz is too large. Listen (P,AEN,Bsize;C) P Specifies a port of the process. AEN Specifies a local socket. Bsize Specifies a buffer size. C The returned legality code. Codes for C are C = ok C = busy C = bufbig The local socket specified by AEN is attached to P. If there is a waiting call, it is processed; otherwise no action is taken. When a call comes in, a connection will be established and the process notified via an event. Close (P) P Specifies a port of the process. Any activity is stopped, and the port becomes free for other use. Transmit (P,M,L1;L2,C) P Specifies a port with an open connection. M The text to be transmitted. L1 Specifies the length of the text. L2 The length actually transmitted. C The error code. Transmission between the processes on either side of the port takes place. Codes for C are C = ok or C = not open if no connection is currently open and otherwise uninhibited. Status (P;C) The status of port P is returned as C. IV. The NCP We view the NCP as having five component programs, three associative tables, some queues and buffers, and a link assignment table. Each site will, of course, vary this design to meet its needs, so our design is only illustrative. The Component Programs 1. The Input Handler This is an interrupt driven input routine. It initiates Imp-to-Host transmission into a resident buffer and wakes up the Input Interpreter when transmission is complete. 2. The Output Handler This is an interrupt driven output routine. It initiates Host-to-Imp transmission out of a resident buffer and wakes up the Output Scheduler when transmission is complete. 3. The Input Interpreter This program decides whether the input is a regular message intended for a user, a control message, an Imp-to-Host message, or an error. For each class of message, this program takes the appropriate action. 4.
The Output Scheduler Three classes of message are sent to the Imp: (a) Host-to-Imp messages (b) Control messages (c) Regular messages We believe that a priority should be imposed among these classes. The priority we suggest is the ordering above. The Output Scheduler selects the highest priority message and gives it to the Output Handler. 5. The System Call Interpreter This program interprets requests from the user. The two interesting components are the Input Interpreter and the System Call Interpreter. These are similar in that the Input Interpreter services foreign requests and the System Call Interpreter services local requests. Associative Tables We envision that the bulk of the NCP’s data base is in three associative tables. By "associative", we mean that there is some lookup routine which is presented with a key and either returns successfully with a pointer to the corresponding entry, or fails if no entry corresponds to the key. 1. The Rendezvous Table "Requests-for-connection" and other attributes of a connection are held in this table. This table is accessed by local socket, but other tables have pointers to existing entries. The components of an entry are: (a) local socket (key) (b) foreign socket (c) link (d) queue of callers (e) text queue (f) connection state (g) flow state (h) pointer to attached port An entry is created when a user executes either an Init or a Listen system call or when a <RFC> is received. Some fields are unused until the connection is established, e.g. the foreign socket is not known until a <RFC> arrives if the user did a Listen. 2. The Input Link Table The Input Interpreter uses the foreign host and link as a key to get a pointer to the entry in the rendezvous table for the connection using the incoming link. 3. The Output Link Table In order to interpret RFNM’s, the Input Interpreter needs a table in the same form as the Input Link Table but using outgoing links.
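The table structure above can be sketched with ordinary hashes standing in for the associative lookup routine. Field names follow the entry list in the text; the concrete socket and link numbers are made up for illustration.

```ruby
# Sketch of the associative tables: hashes play the lookup routine.
rendezvous   = {}    # local socket => connection entry
input_links  = {}    # [foreign host, incoming link] => entry
output_links = {}    # same form, but keyed by outgoing links (for RFNMs)

# Entry created by a Listen: foreign socket and link unknown until an
# <RFC> arrives, exactly as the text notes.
entry = { local_socket: 42, foreign_socket: nil, link: nil,
          callers: [], text_queue: [], state: :listening }
rendezvous[42] = entry

# An <RFC> from socket 99 on host 3, assigned link 5, fills in the
# missing fields and indexes the connection by its incoming link.
entry[:foreign_socket] = [3, 99]
entry[:link] = 5
entry[:state] = :open
input_links[[3, 5]] = entry
```

Note that the link tables hold pointers to the rendezvous entries rather than copies, matching the "other tables have pointers to existing entries" remark.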
Link Assignment Table This is a very simple structure which keeps track of which links are in use for each host. One word per host probably suffices. The following diagram is our conception of the Network Control Program. Boxes represent tables and Buffers, boxes with angled corners and a double bottom represent Queues, and jagged boxes represent component programs; the arrows represent data paths. The abbreviated names have the following meanings. ILT - Input Link Table OLT - Output Link Table LAT - Link Assignment Table RT - Rendezvous Table HIQ - Host to Imp Queue OCCQ - Output Control Command Queue ORMQ - Output Regular Message Queue IHBuf - Buffer filled by the Input Handler from the IMP and emptied by the Input Interpreter OHBuf - Buffer of outgoing messages filled from the Queues by the Output Scheduler and emptied by the Output Handler. [ Editor's note: The original hand-drawn diagram represented Queues by cylinders and component programs by "squishy ameoba like things". ]
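A minimal sketch of such a one-word-per-host table follows, assuming the regular link pool of 2 through 31 recommended earlier. Using one bit per link inside each word is our assumption; the memo only says one word per host probably suffices.

```ruby
# One-word-per-host link assignment table: bit l set means link l is
# in use for that host. Links 2-31 form the regular pool.
class LinkTable
  REGULAR = (2..31)

  def initialize
    @word = Hash.new(0)   # host number => word of busy-link bits
  end

  # Returns the lowest free regular link for this host, or nil if none.
  def assign(host)
    link = REGULAR.find { |l| @word[host][l] == 0 }
    @word[host] |= (1 << link) if link
    link
  end

  def release(host, link)
    @word[host] &= ~(1 << link)
  end
end
```

Since each host's state fits in a single integer, the whole table is as compact as the memo suggests.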
Introduction Ruby is an object-oriented, imperative scripting language - "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language." - "I believe people want to express themselves when they program. They don't want to fight with the language. Programming languages must feel natural to programmers. I tried to make people enjoy programming and concentrate on the fun and creative part of programming when they use Ruby." — Yukihiro Matsumoto ("Matz") Applications of Scripting Languages - Scripting languages have many uses - Automating system administration - Automating user tasks - Quick-and-dirty development - Major application - Text processing Books on Ruby - Earlier version of Thomas book available on web - See course web page Output from Command-Line Tool ```bash % wc * 271 674 5312 AST.c 100 392 3219 AST.h 117 1459 238788 AST.o 1874 5428 47461 AST_defs.c 1375 6307 53667 AST_defs.h 371 884 9483 AST_parent.c 810 2328 24589 AST_print.c 640 3070 33530 AST_types.h 285 846 7081 AST_utils.c 59 274 2154 AST_utils.h 50 400 28756 AST_utils.o 866 2757 25873 Makefile 270 725 5578 Makefile.am 866 2743 27320 Makefile.in 38 175 1154 alloca.c 2035 4516 47721 aloctypes.c 86 350 3286 aloctypes.h ``` Climate Data for IAD in August, 2005 | AVG | MX | 2MIN | DY | MAX | MIN | AVG | DEP | HDD | CDD | WTR | SNW | DPTH | SPD | SPD | DIR | MIN | PSBL | S-S | WX | SPD | DR | |-----|----|------|----|-----|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----| | 87 | 66 | 77 | 1 | 0 | 12 | 0.00| 0.0 | 0 | 2.5 | 9 | 200 | M | M | 7 | 18 | 12 | 210 | | 92 | 67 | 80 | 4 | 0 | 15 | 0.00| 0.0 | 0 | 3.5 | 10 | 10 | M | M | 3 | 18 | 17 | 320 | | 93 | 69 | 81 | 5 | 0 | 16 | 0.00| 0.0 | 0 | 4.1 | 13 | 360 | M | M | 2 | 18 | 17 | 360 | | 95 | 69 | 82 | 6 | 0 | 17 | 0.00| 0.0 | 0 | 3.6 | 9 | 310 | M | M | 3 | 18 | 12 | 290 | | 94 | 73 | 84 | 8 | 0 | 19 | 0.00| 0.0 | 0 | 
5.9 | 18 | 10 | M | M | 3 | 18 | 25 | 360 | | 89 | 70 | 80 | 4 | 0 | 15 | 0.02| 0.0 | 0 | 5.3 | 20 | 200 | M | M | 6 | 138 | 23 | 210 | | 89 | 69 | 79 | 3 | 0 | 14 | 0.00| 0.0 | 0 | 3.6 | 14 | 200 | M | M | 7 | 1 | 16 | 210 | | 86 | 70 | 78 | 3 | 0 | 13 | 0.74| 0.0 | 0 | 4.4 | 17 | 150 | M | M | 10 | 18 | 23 | 150 | | 76 | 70 | 73 | -2 | 0 | 8 | 0.19| 0.0 | 0 | 4.1 | 9 | 90 | M | M | 9 | 18 | 13 | 90 | | 87 | 71 | 79 | 4 | 0 | 14 | 0.00| 0.0 | 0 | 2.3 | 20 | 200 | M | M | 5 | 1 | 10 | 210 | A Simple Example Let's start with a simple Ruby program

```ruby
# This is a ruby program
x = 37
y = x + 5
print(y)
print("\n")
```

Run Ruby, Run There are three ways to run a Ruby program - `ruby -w filename` – execute script in `filename` - tip: the `-w` will cause Ruby to print a bit more if something bad happens - `irb` – launch interactive Ruby shell - can type in Ruby programs one line at a time, and watch as each line is executed - `ruby -e 'program'` – execute the one-line program given directly on the command line Run Ruby, Run (cont.) Suppose you want to run a Ruby script as if it were an executable

```ruby
#!/usr/local/bin/ruby -w
print("Hello, world!\n")
```

Language Basics - comments begin with #, go to end of line - variables need not be declared - no special main() function or method - line break separates expressions (can also use ";", to be safe) Raw Census 2000 Data for DC ``` 2000 Census Summary File 1 1000 - UNINCORPORATED ``` Run Ruby, Run (cont.) - `ruby -w filename` – execute script in `filename` - The first line ("shebang") tells the system where to find the program to interpret this text file - Must chmod u+x `filename` first - `chmod a+x filename` so everyone has exec permission - Warning: Not very portable - Depends on location /usr/local/bin/ruby

```ruby
#!/usr/local/bin/ruby -w
print("Hello, world!\n")
```

Explicit vs.
Implicit Declarations - Java and C/C++ use **explicit variable declarations** - Variables are named and typed before they are used - `int x, y; x = 37; y = x + 5;` - In Ruby, variables are **implicitly declared** - First use of a variable declares it and determines type - `x = 37; y = x + 5;` - `x, y` exist, will be integers Tradeoffs? <table> <thead> <tr> <th>Explicit Declarations</th> <th>Implicit Declarations</th> </tr> </thead> <tbody> <tr> <td>Higher overhead</td> <td>Lower overhead</td> </tr> <tr> <td>Helps prevent typos</td> <td>Easy to mistype variable name</td> </tr> <tr> <td>Forces programmer to document types</td> <td>Figures out types of variables automatically</td> </tr> </tbody> </table> Methods in Ruby - Methods are declared with `def...end` - List parameters at definition - May omit parens on call - Invoke method

```ruby
def sayHi(message, n)
  i = 0
  while i < n
    puts message
    i = i + 1
  end
  return i
end

x = sayHi("hello", 3)
puts(x)
```

(Methods should begin with lowercase letter and be defined before they are called) Method (and Function) Terminology - **Formal parameters** - Parameters used in the body of the method - `message, n` in our example - **Actual parameters** - Arguments passed in to the method at a call - "hello", 3 in our example More Control Statements in Ruby - A **control statement** is one that affects which instruction is executed next - We've seen two so far in Ruby - `while` and function call - Ruby also has conditionals

```ruby
if grade >= 90 then
  puts "You got an A"
elsif grade >= 80 then
  puts "You got a B"
elsif grade >= 70 then
  puts "You got a C"
else
  puts "You're not doing so well"
end
```

What is True? - The **guard** of a conditional is the expression that determines which branch is taken

```ruby
if grade >= 90 then   # grade >= 90 is the guard
  ...
end
```

- The **true** branch is taken if the guard evaluates to anything except - `false` - `nil` - Warning to C programmers: 0 is not **false**!
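The truthiness rule above can be checked directly; unlike C, 0 (and even the empty string) counts as true in a guard:

```ruby
# Only false and nil fail a guard in Ruby; 0 and "" do not.
def truthy?(v)
  v ? true : false
end

puts "0 is true!" if 0     # this line prints, unlike in C
puts "nil is true!" if nil # this line does not print
```

This is why C-style idioms like `if count` behave differently in Ruby: a zero count still takes the true branch.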
Yet More Control Statements in Ruby - **unless** `cond` then `stmt-f` else `stmt-t` end - Same as "if not cond then stmt-f else stmt-t end" - **until** `cond` body end - Same as "while not cond body end" Using If and Unless as Modifiers - Can write if and unless after an expression - puts "You got an A" if grade >= 90 - puts "You got an A" unless grade < 90 - Why so many control statements? - Is this a good idea? - Advantages? Disadvantages? Other Useful Control Statements

```ruby
def even(x)
  for i in (1..x)
    if i % 2 == 0
      puts i
    end
  end
end
```

Objects and Classes - Objects are data - Classes are types (the kind of data which things are) - But in Ruby, classes themselves are objects! <table> <thead> <tr> <th>Object</th> <th>Class</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>Fixnum</td> </tr> <tr> <td>-3.30</td> <td>Float</td> </tr> <tr> <td>&quot;CMSC 330&quot;</td> <td>String</td> </tr> <tr> <td>String.new</td> <td>String</td> </tr> <tr> <td>Fixnum</td> <td>Class</td> </tr> <tr> <td>String</td> <td>Class</td> </tr> </tbody> </table> - Fixnum, Float, String, etc., (including Class), are objects of type Class Two Cool Things to Do with Classes - Since classes are objects, you can manipulate them however you like - If p then x = String else x = Time end # Time is another class - y = x.new # creates a String or a Time, depending upon p - You can get names of all the methods of a class - Object.methods => ["send", "name", "class_eval", "object_id", "new", "autoload?", "singleton_methods", ...] The nil Object - Ruby uses a special object nil - All uninitialized fields set to nil (@ refers to an instance field) - Like NULL or 0 in C/C++ and null in Java - nil is an object of class NilClass - It’s a singleton object — there is only one instance of it > NilClass does not have a new method - nil has methods like to_s, but not other methods that don’t make sense > NoMethodError for nil:NilClass What is a Program? - In C/C++, a program is...
- A collection of declarations and definitions
- With a distinguished function definition
  > int main(int argc, char *argv[]) { ... }
- When you run a C/C++ program, it's like the OS calls main(...)
- In Java, a program is...
  - A collection of class definitions
  - With a class MyClass that contains a method
    > public static void main(String[] args)
  - When you run java MyClass, the main method of class MyClass is invoked

A Ruby Program is...
- The class Object
- When the class is loaded, any expressions not in method bodies are executed

```ruby
def sayN(message, n)
  i = 0
  while i < n
    puts message
    i = i + 1
  end
  return i
end

x = sayN("hello", 3)
puts(x)
```

- The `def` defines a method of Object
- The call `sayN("hello", 3)` invokes self.sayN

Ruby is Dynamically Typed
- Recall we don't declare types of variables
- But Ruby does keep track of types at run time
  > x = 3; x.foo
  > NoMethodError: undefined method 'foo' for 3:Fixnum
- We say that Ruby is dynamically typed
  - Types are determined and checked at run time
- Compare to C, which is statically typed

```c
/* C */
int x = 3;
x = "foo"; /* not allowed */
```

```ruby
# Ruby
x = 3
x = "foo" # gives x a new type
```

Types in Java and C++
- Are Java and C++ statically or dynamically typed?
  - A little of both
- Many things are checked statically

```java
Object x = new Object();
x.println("hello"); // No such method error at compile time
```

- But other things are checked dynamically

```java
Object o = new Object();
String s = (String) o; // No compiler warning, fails at run time
```

(Some Java compilers may be smart enough to warn about the above cast)

Tradeoffs?

<table> <thead> <tr> <th>Static types</th> <th>Dynamic types</th> </tr> </thead> <tbody> <tr> <td>More work to do when writing code</td> <td>Less work when writing code</td> </tr> <tr> <td>Helps prevent some subtle errors</td> <td>Can use objects incorrectly and not realize until execution</td> </tr> <tr> <td>Fewer programs type check</td> <td>More programs type check</td> </tr> </tbody> </table>
Classes and Objects in Ruby

```ruby
class Point
  def initialize(x, y)
    @x = x
    @y = y
  end
  def addX(x)
    @x += x
  end
  def to_s
    return "(#{@x}, #{@y})"
  end
end
```

```ruby
p = Point.new(3, 4)
p.addX(4)
puts(p.to_s)
```

- Keywords:
  - class contains method/constructor definitions
  - `initialize` is the constructor definition
  - `to_s` is a method with no arguments
  - instance variables are prefixed with "@"
  - `Point.new(3, 4)` is instantiation
  - `p.to_s` invokes a no-arg method

Classes and Objects in Ruby (cont'd)
- Recall classes begin with an uppercase letter
- `inspect` converts any instance to a string

```ruby
irb(main):033:0> p.inspect
=> "#<Point:0x54574 @y=4, @x=7>"
```

- Instance variables are prefixed with @
  - Compare to local variables with no prefix
  - Cannot be accessed outside of class
- The `to_s` method can be invoked implicitly
  - Could have written `puts(p)`
  - Like Java's `toString()` methods

Notes For Java Programmers
- Ruby does not support method overloading
  - A typical Java class might have two or more constructors
  - Since Ruby does not support method overloading there can only be one initialize method
  - Ruby does not issue an exception or warning if a class defines more than one initialize method
    - But the last initialize method defined is the valid one

Inheritance
- Recall that every class inherits from Object

```ruby
class B < A
  def add(y)
    return (super(y) + 1)
  end
end

b = B.new
puts(b.add(3))
```

- Keywords:
  - `< A` extends superclass A
  - `super(y)` invokes the add method of the parent

super( ) in Ruby
- Within the body of a method
  - Call to super( ) acts just like a call to that original method
  - Except that the search for the method body starts in the
superclass of the object that was found to contain the original method

Global Variables in Ruby
- Ruby has two kinds of global variables
  - Class variables beginning with @@ (like static in Java)
  - Global variables across classes beginning with $

```ruby
class Global
  @@x = 0
  def Global.inc   # defines a class ("singleton") method
    @@x = @@x + 1; $x = $x + 1
  end
  def Global.get
    return @@x
  end
end
```

```ruby
$x = 0
Global.inc
$x = $x + 1
Global.inc
puts(Global.get)
puts($x)
```

Special Global Variables
- Ruby has a bunch of global variables that are implicitly set by methods
- The most insidious one: $_
  - Default method return, argument in many cases
- Example

```ruby
gets  # implicitly reads input into $_
print # implicitly writes $_
```

- Using $_ leads to shorter programs
  - And confusion
- It's suggested you avoid using it

Substitution in Ruby Strings
- Writing `elt` as `#{elt}` makes it clear that it is a variable to be evaluated, not a literal word to be printed. This is a cleaner way to express output; it builds a single string and presents it as a single argument to puts.

```ruby
ruby> for elt in [100,-9.6,"pickle"]
    |   puts "#{elt} (#{elt.class})"
    | end
100 (Fixnum)
-9.6 (Float)
pickle (String)
```

Creating Strings in Ruby (cont.)
- Ruby also has printf and sprintf
  - `printf("Hello, %s\n", name)`
  - `sprintf("%d: %s", count, Time.now)`
    - Returns a string
- The to_s method returns a String representation of a class object

Standard Library: String
- The String class has many useful methods:
  - `s.length` # length of string
  - `s1 == s2` # structural equality (string contents)
  - `s = "A line\n"; s.chomp` # returns "A line"
    - Returns a new string with s's contents, except the newline at the end of the line is removed
  - `s = "A line\n"; s.chomp!` # destructively removes newline from s
  - **Convention:** methods ending in `!` modify the object
  - **Another convention:** methods ending in `?` observe the object
  - `"r1\tr2\tr3".each("\t") { |rec| puts rec }` # Apply code block to each tab-separated substring

Object Copy vs. Reference Copy
- Consider the following code:
  - `x = "groundhog" ; y = x`
  - Assume an object/reference model like Java or Ruby
    - Or even two pointers pointing to the same structure
- Which of these occur?
  - Object copy: x and y refer to two different "groundhog" objects
  - Reference copy: x and y are two references to the same "groundhog" object

Physical vs. Structural Equality
- Consider these two cases again:
  - Object copy: two separate "groundhog" objects; reference copy: one shared "groundhog" object
- If we compare `x` and `y`, what is compared?
  - The references, or the contents of the objects they point to?
- If references are compared (physical equality) the first would return false but the second true
- If objects are compared both would return true

String Equality
- In Java, `x == y` is physical equality, always
  - Compares references, not string contents
- In Ruby, `x == y` for strings uses structural equality
  - Compares contents, not references
- `==` is a method that can be overridden in Ruby!
- To check physical equality, use the `equal?` method
  - Inherited from the Object class
- It's always important to know whether you're doing a reference or object copy
  - And physical or structural comparison

Consider these three examples again:
- All involve searching in a string for a certain pattern
- What if we want to find more complicated patterns?
  - Find first occurrence of "a" or "b"
  - Split string at tabs, spaces, and newlines

Regular Expressions!

### Comparing Equality

<table> <thead> <tr> <th>Language</th> <th>Physical equality</th> <th>Structural equality</th> </tr> </thead> <tbody> <tr> <td>Java</td> <td><code>a == b</code></td> <td><code>a.equals(b)</code></td> </tr> <tr> <td>C</td> <td><code>a == b</code></td> <td><code>*a == *b</code></td> </tr> <tr> <td>Ruby</td> <td><code>a.equal?(b)</code></td> <td><code>a == b</code></td> </tr> <tr> <td>Ocaml</td> <td><code>a == b</code></td> <td><code>a = b</code></td> </tr> <tr> <td>Python</td> <td><code>a is b</code></td> <td><code>a == b</code></td> </tr> <tr> <td>Scheme</td> <td><code>(eq? a b)</code></td> <td><code>(equal? a b)</code></td> </tr> <tr> <td>Visual Basic .NET</td> <td><code>a Is b</code></td> <td><code>a = b</code></td> </tr> </tbody> </table>

### Summary
- **Scripting languages**
- **Ruby language**
  - Implicit variable declarations
  - Many control statements
  - Classes & objects
  - Strings
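As a supplement to the equality discussion, Ruby's conventions can be checked directly. This small sketch uses `dup` to guarantee two distinct string objects (so the result does not depend on whether string literals are frozen):

```ruby
a = "groundhog"
b = a.dup   # a new object with the same contents
c = a       # a second reference to the same object

# Structural equality compares contents
puts a == b        # => true
# Physical equality compares references
puts a.equal?(b)   # => false
puts a.equal?(c)   # => true
```

Note that this is exactly backwards from Java, where `==` on strings is the physical comparison and `equals` the structural one.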
Generating high-performance arithmetic operators for FPGAs

Florent de Dinechin, Cristian Klein, Bogdan Pasca

To cite this version: Florent de Dinechin, Cristian Klein, Bogdan Pasca. Generating high-performance arithmetic operators for FPGAs. 2008. ensl-00321209

HAL Id: ensl-00321209
https://hal-ens-lyon.archives-ouvertes.fr/ensl-00321209
Preprint submitted on 12 Sep 2008

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Generating high-performance arithmetic operators for FPGAs
LIP research report 2008-28

Florent de Dinechin, Cristian Klein and Bogdan Pasca *
LIP (CNRS/INRIA/ENS-Lyon/UCBL), Université de Lyon
École Normale Supérieure de Lyon
46 allée d'Italie, 69364 Lyon cedex

Abstract

This article addresses the development of complex, heavily parameterized and flexible operators to be used in FPGA-based floating-point accelerators. Languages such as VHDL or Verilog are not ideally suited for this task. The main problem is automating tasks such as parameter-directed or target-directed architectural optimization, pipeline optimization, and generation of relevant test benches. This article introduces FloPoCo, an open object-oriented software framework designed to address these issues. Written in C++, it inputs operator specifications, a target FPGA and an objective frequency, and outputs synthesizable VHDL fine-tuned for this FPGA at this frequency. Its design choices are discussed and validated on various operators.
1 Arithmetic operator design

1.1 Floating-point and FPGAs

FPGA-based coprocessors are available from a variety of vendors, and it is natural to try and use them for accelerating floating-point (FP) applications. On floating-point matrix multiplication, their floating-point performance slightly surpasses that of a contemporary processor [6], using tens of operators on the FPGA to compensate for their much slower frequency (almost one order of magnitude). However, FPGAs are no match for GPUs here. For other FP operations that are performed in software in a processor (for instance all the elementary functions such as exp, log, trigonometric...) there is much more speedup potential: one may design a dedicated pipelined architecture on an FPGA that outperforms the corresponding processor code by one order of magnitude while consuming a fraction of the FPGA resources [4]. Implementing the same architecture in a processor would be wasted silicon, since even the logarithm is a relatively rare function in typical processor workloads. For the same reason, GPUs have hardware acceleration for a limited set of functions, and in single precision only. In an FPGA, you pay the price of this architecture only if your application needs it. Besides, operators can also be specialized in FPGAs. For example, a squarer theoretically requires half the logic of a multiplier: a floating-point multiplication by the constant 2.0 boils down to adding one to the exponent (a 12-bit addition in double precision), and shouldn't use a full-blown FP multiplier as it does in a processor. Actually, it is possible to build an optimized architecture for any multiplication by a constant [2]. Finally, operators can be fused on an FPGA, for example the Euclidean norm $\sqrt{x^2 + y^2}$ can be implemented more efficiently than by linking

---

*This work was partly supported by the XtremeData university programme, the ANR EVAFlo project and the Egide Brâncuși programme 14914RL.
two squarers, one adder and one square root operator. There are many more opportunities for floating-point on FPGAs [3]. The object of the FloPoCo project\(^1\) is to study and develop such FPGA-specific Floating-Point Cores.

1.2 From libraries to generators

FloPoCo is not a library but a generator of operators. Indeed, it is the successor to FPLibrary, a library of operators written in VHDL. Many parts of FPLibrary were actually generated by as many ad-hoc programs, and FloPoCo started as an attempt to bring all these programs into a unified framework. A first reason is that it is not possible, for instance, to write by hand, directly in VHDL or Verilog, an optimized multiplier by a constant for each of an infinite number of constants. However, this task is easy to automate in a program that inputs the constant. Another reason is the need for flexibility. Whether the best operator is a slow and small one or a faster but larger one depends on the context. FPGAs also allow flexibility in precision: arithmetic cores are parameterized by the bit-widths of their inputs and outputs. Flexibility also makes it possible to optimize for different hardware targets, with different LUT structure, memory and DSP features, etc. Thus, the more flexible an operator, the more future-proof. Finally, for complex operators such as elementary function evaluators, the optimal design is the result of a costly design-space exploration, which is best performed by a computer. VHDL and Verilog are good for describing a library of operators optimized for a given context, but the more flexibility and the more design-space exploration one wants, the more difficult it gets. It is natural to write operator generators instead. A generator inputs user specifications, performs any relevant architectural exploration and construction (sometimes down to pre-placement), and outputs the architecture in a synthesizable format.
To our knowledge, this approach was pioneered by Xilinx with their core generator tool\(^2\). An architecture generator needs a back-end to actually implement the resulting circuit. The most elegant solution is to write an operator generator as an overlay on a software-based HDL such as SystemC, JBits, HandelC or JHDL (among many others). The advantages are a preexisting abstraction of a circuit, and simple integration with a one-step compilation process. The drawback is that most of these languages are still relatively little-known and restricted in the FPGAs they support. Basing FloPoCo on a vendor generator would be an option, but would mean restricting it to one FPGA family. FloPoCo therefore took a less elegant, but more universal route. The generator is written in a mainstream programming language (we chose C++), and it outputs operators in a mainstream HDL (we chose standard synthesizable VHDL). Thus, the FloPoCo generator is portable, and the generated operators can be integrated into most projects, simulated using mainstream simulators, and synthesized for any FPGA using the vendor back-end tools. Section 2.2 will show how they can nevertheless be optimized for a given FPGA target. The drawback of this approach is that we had to develop a framework, instead of reusing one. Section 2 describes this framework and the way it evolved in a very practical and bottom-up way.

1.3 The arithmetic context

It is important to understand that this framework was developed only with arithmetic operators in view. An arithmetic operator is the implementation of a mathematical function, and this underlying mathematical nature is exploited pervasively in FloPoCo. For instance, an operator may be combinational or pipelined, but will usually involve no feedback loop or state machine (the only current exception is an accumulator).
With this restriction, we are able to implement a simple, efficient and automatic approach to pipelining (see section 3) and testbench generation (see section 4). As another example, when generating test benches, relevant test patterns may be defined by function analysis, and the expected output is defined as a mathematical function of the input, composed with a well-defined rounding function [1]. These are only a few examples. The design-space exploration for complex operators is based on automated error analysis [4], which is also specific to the arithmetic context.

FloPoCo is not only a generator framework, it is also a generator of arithmetic cores using this framework. It currently offers about 20 operators, from simple ones such as shifters or integer adders to very complex ones such as floating-point exp and log. This article is not about these operators, but will be illustrated by actual examples of already implemented operators. FloPoCo is distributed under the LGPL, and interested readers are welcome to try it, use it and improve it.

2 The FloPoCo framework

The FloPoCo generator inputs (currently on the command line) a list of operator specifications, internally builds a list of Operator objects (some of which may be sub-components of the specified operators), then outputs the corresponding VHDL.

2.1 Operators

The core class of FloPoCo is Operator. From the circuit point of view, an Operator corresponds to a VHDL entity, but again, with restrictions and extensions specific to the arithmetic context. All the operators of FloPoCo extend this class. The main method of Operator is outputVHDL(), which outputs the VHDL code of an operator. To implement this virtual method for an operator, one may simply embed some existing VHDL code in the C++ code.

---

\(^{1}\)www.ens-lyon.fr/LIP/Arenaire/Ware/FloPoCo/
\(^{2}\)We would welcome any feedback on early architecture generators.
However, with many parameters, GENERATE constructs in VHDL are best replaced with loops and tests in the C++ code, which makes the VHDL code simpler and easier to debug. In addition, the Operator class provides many helper methods which relieve the designer of repetitive or error-prone work, for example entity and component declaration, signal declaration, signal registering, etc. In short, Operator provides black-box functionality for known VHDL recipes, but otherwise requires manual output of VHDL code. This approach allowed us to quickly backport existing generators. More importantly, we may tinker with the framework without having to rework existing cores.

Design-space exploration, if any, is done in the operator constructor. The input specification (input and output widths, etc.) and the deployment and performance constraints (e.g. VirtexIV, 300MHz) are analysed, and operator attributes are set to be used later by outputVHDL(). For instance, the constructor of an integer constant multiplier internally builds and pipelines a directed acyclic graph (see Figure 1) with several labels on each node [2]. Operator also defines other virtual methods for the purpose of pipelining and testbench generation. These will be considered in due course.

2.2 Targets

The Target class abstracts the features of actual FPGA chips. Classes representing real FPGA chips extend this class (we currently have classes for two very different FPGAs, Xilinx VirtexIV and Altera StratixII). The idea is to declare abstract methods in Target, which are implemented in its subclasses, so that the same generator code fits all the targets. To this purpose, a Target is given as an argument to the constructor of an operator; it also receives an objective frequency, as will be detailed in section 3.
The methods provided by the Target class can be semantically split into two categories:

• Architecture-related methods provide information about the architecture of the FPGA and are used in architectural exploration. For instance, lutInputs() returns the number of inputs of the FPGA's LUTs.

• Delay-related methods provide approximate information about the delays of signals traveling through the FPGA. For example, `adderDelay(int n)` returns the delay of an n-bit addition. These methods will be used for automatic pipelining, see section 3. Some of these methods have an architecture-related dual, for example `suggestAdderSize(double delay)`, which returns the size of an adder that will have the required delay.

The difficulty here is to find the right abstraction level for Target. On one hand, we do not hope to provide an exhaustive and detailed description of all the existing (and future) FPGAs. On the other hand, we do not need to: vendor tools are very good at fitting a design to a given target, and we should rely on them. The complexity of exploiting the details of the target should be left to the back-end tools. To understand how we approach this issue in FloPoCo, consider the example of integer multiplication support in FPGAs. Early FPGAs were logic-only, then came hard multipliers embedded in the FPGA fabric, then these multipliers were promoted to DSP blocks with the addition of accumulators. Current DSP blocks are very complex and differ widely from FPGA to FPGA. Some contain 18x18 multipliers, some 18x24, some 36x36 which can be split into several 18x18 or 9x9 multipliers, with subtle restrictions.

---

\(^{3}\)There is a Signal class, but it currently has no real signal semantics (it doesn't check for pending signals, short circuits, etc.); it just makes writing outputVHDL() easier, and we will not detail it any further.
\(^{4}\)Of course, it is also possible to have a big if that runs completely different code depending on the target.
All contain internal register levels, and some an accumulator. What is the best way to abstract this complexity and variety, in a way both concise and generally useful? The current (probably not definitive) answer is a method called `suggestSubmultSize()` which takes the objective frequency as an argument and returns the input sizes of the largest (possibly rectangular) sub-multiplier that, when written as a * in VHDL, runs at the objective frequency on this target. This abstract method will be implemented very differently in subclasses of Target that describe actual FPGAs, depending on the availability or not of DSP blocks and on the capabilities of these blocks. With this simple interface, we are able to generate large floating-point multipliers (built by assembling several DSP blocks) whose performance and resource consumption almost match (and sometimes even surpass) the vendor-generated ones, while being more flexible. For the design-space exploration of future operators, we will need other methods, for example a method that inputs a multiplier size and returns an estimation of its area. And of course, the question of target abstraction will remain an open one forever, as new FPGA features keep appearing and new operators present new problems.

3 Automatic pipelining

Pipelining a given arithmetic operator for a given context is relatively easy, if tedious. What is important is to evaluate the number of pipeline levels and their approximate location, but the details are best left to the back-end tools, which will actually place the registers after logic optimization as part of the technology mapping step. Recent tools apply, to various extents, retiming techniques (moving registers around to improve the critical path without changing the functionality of the circuit) [5]. This is also best done after technology mapping.
3.1 Frequency-directed pipelining

Currently, FloPoCo implements frequency-directed pipelining using variations on the following generic algorithm, which is simple (greedy) and linear in the size of the operator. The constructor, working from the output to the input (or the other way round, depending on what is expected to give the best results), accumulates estimations of the critical path delay (provided by the Target class), and inserts register levels when the current critical path exceeds the reciprocal of the objective frequency. When this process is finished, the constructor sets the pipelineDepth attribute of the Operator class. When an operator instantiates a sub-component (e.g. FPMultiplier instantiates IntMultiplier), it may obtain its pipelineDepth, which allows it to delay signals accordingly; this works recursively.

FloPoCo does not provide any special framework for the computation of critical paths or the distribution of registers: an ad-hoc program needs to be written by hand for each operator to implement the previous generic algorithm. This is about as tedious as pipelining VHDL code, but needs to be done only once. In addition, FloPoCo has several facilities for inserting multiple delays on signals which help prevent bugs due to lack of synchronization. The test-bench generation framework also takes into account the pipeline depth of the unit under test automatically.

### 3.2 Discussion

This part of the framework is still being improved. It is not clear yet whether we want it to eventually end up as a generic retiming engine using a generic abstraction of an arithmetic circuit. Firstly, it would be a tremendous amount of work to get it right, and we choose to focus our efforts on operator development. Secondly, it is not obvious that it would be useful. The ad-hoc, per-operator approach is more flexible and more powerful in the design exploration phase, for instance.
Critical path delay estimations are necessarily inaccurate: actual values can only be obtained after placement and routing of the complete application in which the operator takes place, something which is out of the scope of FloPoCo. No guarantee is therefore given that the operator will actually function at the objective frequency. What is easy to ensure, however, is that when the objective frequency is raised, the number of pipeline stages increases and the critical path delay does not increase.

The real question is, do we need to actually place registers, which is the bulk of the work? If all the back-ends implemented retiming [5] efficiently (which is not the case yet), we would just have to evaluate the pipeline depth as a function of the frequency (a comparatively simple task) and place the corresponding number of register levels at the output of the operator, leaving to the tools the task of pushing them in. Still, retiming is a costly iterative process and, considering its local nature, it should be sped up by a good initial placement. We have little practical experience on this subject and would appreciate feedback.

### 3.3 A detailed example

The command

```bash
flopoco -frequency=200 -target=VirtexIV IntConstMult 24 26353589
```

builds a multiplier of a 24-bit integer with the constant 26353589 (the first 24 bits of π), with objective frequency 200MHz, for a VirtexIV. Figure 1 shows the obtained architecture. Note that some adders are much larger than the others. With `-frequency=100`, only the last two levels of registers (marked with a *) are generated: the lower levels are grouped in a single pipeline stage. Table 1 provides some synthesis results for Xilinx VirtexIV using ISE 9.1, for three values of the objective frequency. In this example, the reported frequency is much better than the specified one.
This is because the automatic pipeline program for this operator works at the adder level, not at the bit level (splitting the adders into sub-adders). The current framework would allow for bit-level pipelining for these operators, but it would be much more complex and we currently feel this effort is not justified. We do, however, pipeline adders when they are too large to reach the objective frequency, because we can use for that the existing pipelined `IntAdder` operator.

**Table 1: Synthesis results for pipelined operators**

<table> <thead> <tr> <th><code>-frequency=</code></th> <th>latency</th> <th>frequency</th> <th>area</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>2</td> <td>146 MHz</td> <td>176 sl</td> </tr> <tr> <td>200</td> <td>4</td> <td>264 MHz</td> <td>199 sl</td> </tr> <tr> <td>300</td> <td>5</td> <td>373 MHz</td> <td>218 sl</td> </tr> </tbody> </table>

### 4 Test case generation

In order to make sure that no bugs leak into the VHDL code generated for each arithmetic operator, FloPoCo was also designed to automate test case generation. Due to the large number of parameters that can be customised, writing test benches by hand for each operator is not an option. Test benches are pieces of VHDL code (possibly with a few associated data files) which run in a VHDL simulator, give certain inputs (test cases) to the arithmetic operators and test the correctness of the outputs. Test cases are operator-specific and are generated by doing the equivalent computation in software. We deliberately avoid duplicating the hardware algorithms in software in order to avoid introducing the same bugs. Instead, we rely on well-tested libraries such as GMP and MPFR. Small operators can be exhaustively tested, but it becomes impossible to exhaustively test larger ones, in particular double-precision ones. Instead, our strategy consists in maximising the number of data paths and signal combinations that are tested.
A FloPoCo test bench basically generates two types of test cases. The first is random test cases. The \texttt{Operator} class is able to generate uniformly random inputs, but in most cases this should be overridden in an operator-dependent way that focuses the tests on the interesting domains of input. For instance, a double-precision exponential returns $+\infty$ for all inputs larger than $1024$ and returns $0$ for all inputs smaller than $-1024$. In other terms, the most interesting test domain for this function is when the input exponent is between $-10$ and $10$, a tiny fraction of the full double-precision exponent domain ($-1024$ to $1023$). Generating random 64-bit integers and using them as floating-point inputs would mean testing mostly the overflow/underflow logic, which is a tiny part of the operator. Similarly, in a floating-point adder, if the difference between the exponents of the two operands is large, the adder will simply return the larger of the two, and again this is the most probable situation when taking two random operands. Here it is better to generate random cases where the two operands have close exponents.

In addition to random test cases, there are also mandatory test cases which cover specific situations that a random test would have little chance to hit. Again, these test cases are operator-specific.

### 5 Conclusion

This article introduced FloPoCo, an open-source software framework for generating high-quality, highly parameterized, pipelined and flexible operators for FPGAs. This framework evolved (and still evolves) in a deliberately bottom-up and practical way, focused on the needs of arithmetic cores. It uses a low-level, highly flexible \texttt{printf}-based approach to VHDL generation. In its current state, FloPoCo is already an invaluable booster for the core focus of our research, which is to develop new arithmetic operators.
In the future, a wider range of design objectives may be added: optimizing for power, for a given pipeline depth, for memory, etc. In addition, we will refine and extend the FPGA model as new needs appear, for instance to model the internal memory resources.

FloPoCo's automatic pipeline framework can in principle be used to build larger computation pipelines composed of many operators, in a way that automatically delays signals to match the pipelines of the various operators, and does so even when the designer, to optimize resource usage, changes the precision of some operators. We also intend to explore this possibility.

### References
Formalizing UML Models and OCL Constraints in PVS\(^1\)

Marcel Kyas and Harald Fecher\(^2\)
*Institute for Computer Science and Applied Mathematics, Christian-Albrechts-Universität zu Kiel, Germany*

Frank S. de Boer and Joost Jacob\(^3\)
*CWI Amsterdam, The Netherlands*

Jozef Hooman and Mark van der Zwaag\(^4\)
*Computer Science Department, University of Nijmegen, The Netherlands*

Tamarah Arons and Hillel Kugler\(^5\)
*Weizmann Institute of Science, Rehovot, Israel*

**Abstract**

The Object Constraint Language (OCL) is the established language for the specification of properties of objects and object structures in UML models. One reason that it is not yet widely adopted in industry is the lack of proper and integrated tool support for OCL. Therefore, we present a prototype tool, which analyzes the syntax and semantics of OCL constraints together with a UML model and translates them into the language of the theorem prover PVS. This defines a formal semantics for both UML and OCL, and enables the formal verification of systems modeled in UML. We handle the problematic fact that OCL is based on a three-valued logic, whereas PVS is based only on a two-valued one.

**Key words:** OCL, PVS, Formal Verification, Formal Semantics, UML

\(^1\) Part of this work has been financially supported by IST project Omega (IST-2001-33522), see [http://www-omega.imag.fr](http://www-omega.imag.fr), and NWO/DFG project Mobi-J (RO 1122/9-1, RO 1122/9-2).
\(^2\) mailto:{mky,hf}@informatik.uni-kiel.de
\(^3\) mailto:{frb,jacob}@cwi.nl
\(^4\) mailto:{hooman,mbz}@cs.kun.nl
\(^5\) mailto:{tamarah,kugler}@wisdom.weizmann.ac.il

*This is a preliminary version. The final version will be published in Electronic Notes in Theoretical Computer Science. URL: [www.elsevier.nl/locate/entcs](http://www.elsevier.nl/locate/entcs)*

1 Introduction

Today, UML [7,8] and its textual specification language OCL [11,1] are widely used as specification and modeling languages for object-oriented systems.
A wide range of tools is available to support the development of systems using UML's notations, from simple syntactic analyzers to simulators, compilers enabling run-time checking of specifications, model checkers, and theorem provers. However, no existing program integrates verification and validation of UML class diagrams, state machines, and OCL specifications.

We focus on deductive verification in higher-order logic, which allows the verification of possibly infinite-state systems. We present a translation of the notations of UML (class diagrams, state machines, and OCL constraints) into the input language of PVS [9]. The specification, originally given in OCL, can then be verified using PVS. The compiler we describe implements a translation of a well-defined subset of UML diagrams which is sufficient for many applications. This subset consists of class diagrams which only have associations with a multiplicity of 0 or 1 and no generic classes, flat state machines (a state machine can always be represented as a flat state machine with the same behavior, as described in [13]), and OCL constraints.

OCL, which is based on a three-valued logic, then has to be encoded in PVS, which is based on a two-valued logic and only allows total functions. The reason for OCL's three-valuedness is that it uses partial functions and that relations are interpreted as strict functions into the three truth values. Furthermore, the additional truth value only arises indirectly, by applying a function to a value outside its domain. The way partial functions of OCL are translated to PVS determines how the three-valued logic is handled. Our transformation restricts a partial function to its domain, yielding a total function.

The work reported in [12] defines a formalization of UML state machines in PVS\(^6\), which does not include a translation of OCL to PVS. In [2] a formal semantics of OCL has been proposed in the theorem prover Isabelle/HOL [6].
Contrary to our approach, there partial functions have been extended to total functions by introducing an undefined value. While this approach allows the verification of meta-theorems, the verification of an actual UML model w.r.t. its OCL specification still requires the additional proof that all values are defined.

\(^6\) It has been implemented in the PrUDE program, see \url{http://www.isot.ece.uvic.ca/download.html}.

2 Running Example

We use the Sieve of Eratosthenes as a running example. It is modeled using the two classes Generator and Sieve (see Figure 1). Exactly one instance of the class Generator, the root object, is present in the model. The generator creates an instance of the Sieve class and then sends the new instance natural numbers in increasing order, see Figure 2. The association from Generator to Sieve is called itsSieve.

Upon creation, each instance of Sieve receives a prime number and stores it in its attribute \( p \). Then it creates a successor object, called itsSieve\(^7\), and starts receiving a sequence of integers \( i \). If \( p \) divides \( i \), this instance does nothing; otherwise it sends \( i \) to itsSieve. This behavior is shown in Figure 3.\(^8\)

The safety property we would like to prove is that \( p \) is a prime number for each instance of Sieve; this can be formalized in OCL by:

\[ \text{context Sieve inv: Integer}\{2..(p-1)\} \rightarrow \text{forAll}(i \mid p.\text{mod}(i) \neq 0) \]

This constraint states that the value of the attribute \( p \) is not divisible by any number \( i \) between 2 and \( p - 1 \). To prove this, we need to establish that the sequence of integers received by each instance of Sieve is monotonically increasing.

We have chosen this example because it is short, but still challenging to verify. It involves object creation and asynchronous communication, and therefore does not have a finite state space. Furthermore, the behavior of the
Furthermore, the behavior of the \(^7\) The association from the Sieve class to itself in Figure 1 is called itsSieve. \(^8\) The trigger \( tm(20) \) in Figure 2 postpones its reaction by 20 time units. model depends on the data sent between objects and note also that the property we want to prove on the model is a number-theoretic property, namely, that the numbers generated are primes. This makes it impossible to show the considered property using automatic techniques like model checking. 3 Representation in PVS The UML diagrams we consider are restricted forms of class diagrams and state machines, which both may contain OCL expressions. 3.1 Class Diagrams To formalize class diagrams in PVS we define a type \( \text{Class} \) which enumerates the names of all classes appearing in the model and a predicate on classes which states whether the class is \texttt{active}. The constant \texttt{rootClass} denotes the active class which provides the first object in the model. Furthermore, the attribute, operation, signal, and reference names of each class are enumerated as a type. The association of a name to its defining class is done by prefixing the name with the corresponding class name. The translation of the class diagram shown in Figure 1 is presented in Figure 4. 
```
Class: TYPE+ = {Generator, Sieve}
active: pred[Class] = LAMBDA (c: Class): c = Generator OR c = Sieve;
rootClass: (active) = Generator

Attribute: TYPE+ = {Generator_x, Sieve_x, Sieve_p, unusedAttribute}
Reference: TYPE+ = {Sieve_itsSieve, Generator_itsSieve,
                    Sieve_itsGenerator, unusedReference}
Location:  TYPE+ = {Generator_61, Generator_64, Generator_66,
                    Sieve_25, Sieve_28, Sieve_32, Sieve_33}
```

Fig. 4. Translation of the Sieve Class Diagram

```
t28: Transition = (# source := Sieve_28,
  trigger := signalEvent(Sieve_e, Sieve_z),
  guard := (LAMBDA (val: Valuation):
              (NOT divides((val'aval(Sieve_p)), (val'aval(Sieve_z))))),
  actions := (cons((emitSignal(Sieve_itsSieve, Sieve_e,
                (LAMBDA (val: Valuation): (val'aval(Sieve_p))))), null)),
  target := Sieve_28,
  class := Sieve #)

transitions: setof[Transition] = {t: Transition |
  t = t9 OR t = t11 OR t = t13 OR t = t25 OR
  t = t28 OR t = t32 OR t = t34 OR t = t37}
```

Fig. 5. Translation of the Generator State Machine

Objects and object structures are always obtained from these definitions using interpretation functions. We assume a PVS type \texttt{Object}, whose elements represent all objects.
The function \texttt{class: [Object -> Class]} assigns each object its class, and the function \texttt{state: [Object -> [Attribute -> Value]]} assigns each object its state, which is a valuation of all attributes. Type checking asserts that objects only use attributes defined for their class, so we assign an arbitrary value to the attributes not defined in an object's class.

3.2 State Machines

State machines are represented as graphs, and OCL expressions occurring in a state machine are translated to expressions representing their semantics. The semantics of the translated state machines is given in terms of sets of computations, using a function like the one described in [5]. For example, the transition from state_1 to state_1 sending an integer to the next object, as shown in Figure 2, is translated to the PVS fragment shown in Figure 5.

3.3 OCL

OCL is a three-valued logic that assigns to each formula a value true, false, or undefined ($\perp$). The reasons for this are that each partial function $f$ is extended in the semantics of OCL to a strict total function

\[ f_\perp \equiv \lambda x.\ \text{if } x \in \text{dom}(f) \text{ then } f(x) \text{ else } \perp\ \text{fi} \]

and that relations and predicates are interpreted as strict functions into the three truth values [11]. The OCL standard, however, states that a model is well-formed only if all constraints are true. Since we are interested in verifying the correctness of a model w.r.t. its OCL specification, we have to prove that all OCL constraints are true, i.e., neither false nor undefined. To achieve this, each OCL formula is translated directly to PVS, such that we equate the truth values false and $\perp$. The advantages of this approach are:

(1) We do not need to require that each function is strict in its arguments. PVS provides its own method to handle this, through the automatic generation of type consistency constraints (TCCs).
(2) We do not need to redefine the core of the logic, e.g., the \texttt{and} and \texttt{implies} functions, in PVS. Instead, we make use of PVS's strategies.

In order to reduce OCL's semantics to a two-valued one, we have to consider three situations carefully: partial primitive functions, the operational flavor of OCL's semantics, and undefined values in the state.

Some primitive functions used in OCL are partial functions which may return an undefined value, e.g., division by zero. In [2] Brucker et al. have extended the partial functions to total functions by explicitly introducing the undefined value and formalizing the underlying three-valued logic, which is closer to the semantics of OCL. The disadvantage of this approach is that reasoning in a three-valued logic loses PVS's automation, mostly because the law of the excluded middle does not hold in OCL, and it requires redefining all predefined functions of PVS. Our approach is to restrict each partial function to its domain, which makes it a total function. This requires a formalization of the domain of each primitive function. Many functions used in OCL already have a corresponding equivalent in PVS, e.g., the arithmetic functions. We have defined the missing functions as total functions, with an appropriate semantics, in a library.

OCL allows user-defined functions in expressions, which are introduced either using a let-construct or by using an operation that has been declared side-effect free. Provided that an implementation of the user-defined function is given, we compute a signature that is based on type definitions computed from the class hierarchy.

All instances in a model are identified by a value of the type OclAny\textsuperscript{9}. It contains the special literal nil\textsuperscript{10}, which represents the non-existing object. In PVS we define the type ObjectNotNil containing all existing objects, excluding nil.
For each class Class defined in the model we introduce a type ObjectClass and generate a subtype ObjectClassNotNil of ObjectClass. If a class Class2 is a direct subclass of Class, then the supertype of ObjectClass2 in PVS is ObjectClass. This encodes the usual interpretation of the input's class hierarchy in PVS and satisfies the subsumption property: if a class C is a subclass of D, then each instance of C is also an instance of D. The signature of a user-defined operation in PVS is obtained directly from the original signature, except that the type of the first argument, which is always self, becomes ObjectClassNotNil, where Class refers to the name of the class in which the operation is defined. PVS then generates TCCs to assert that all arguments are in the domain of a function. A failure to prove them in most cases indicates that the original OCL expression is either false or undefined.

The formal semantics of OCL is concerned with executing OCL constraints. As a consequence, the value of a recursive function is undefined if its evaluation diverges. This semantics may be suitable for run-time checking, but it is not implementable, because the termination problem is in general undecidable. In PVS, however, the termination of every recursive function has to be guaranteed by a ranking function and a termination proof. We therefore translate recursive functions of OCL directly to recursive functions in PVS. The user has to define the ranking function in the PVS output himself, because OCL does not provide any means to define such ranking functions.

The semantics of OCL defines the meaning of universal and existential quantification as a possibly infinite expansion of conjunctions, resp. disjunctions.
Together with the allInstances() operation, whose intuitive meaning is the set of all existing instances of a class, this allows counter-intuitive specifications: the evaluation of the expression Integer.allInstances() results in the set of all integers, which is an infinite set. Hence, the expression Integer.allInstances()->forAll(n | true) will not terminate under OCL's semantics and therefore has the value undefined. Our translation ignores this complication and translates the quantified expression directly to the PVS expression \texttt{forall (n: int): TRUE}. The main advantage of this approach is that specifications are much easier to prove. We lose soundness of the translation by this choice, because the translated constraint is provable in PVS while its original OCL constraint is undefined; nevertheless we favor our semantics, as it usually better reflects the user's intention. We seriously doubt, as do many others, that the designers of OCL have chosen the right interpretation for the \texttt{allInstances()} function and quantification.

\textsuperscript{9} The class OclAny is the superclass of all classes in a model, similar to Java's Object class.
\textsuperscript{10} Note that nil is well-defined and represents any empty collection.

Finally, the third way to write specifications which are undefined in the semantics of OCL is by accessing undefined values in the object diagram (or state) of a model, e.g., by accessing an array member outside the bounds of the array, accessing an attribute not defined (but declared) in a class, or retyping (casting) an object to a subtype of its real type. In a real programming language this usually leads to a run-time exception. We assume that the underlying behavioral semantics guarantees that every attribute is defined, and generate suitable assumptions for the other cases, because we are mainly interested in partial correctness.
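To close this section, the two treatments of partiality contrasted above (OCL's strict lifting to $\perp$ versus the restriction to the domain used in this translation) can be sketched in a few lines. This is an illustrative sketch only: `None` plays the role of $\perp$, and the conjunction shown is a strict simplification of OCL's actual truth tables (which, e.g., let false absorb $\perp$).

```python
UNDEF = None  # stands in for OCL's third truth value (bottom)

def strict_lift(f, dom):
    """OCL-style extension of a partial function f to a strict total one:
    f_bot(x) = f(x) if x is in dom(f), else bottom."""
    return lambda x: f(x) if dom(x) else UNDEF

def strict_and(a, b):
    """Strict three-valued conjunction: bottom propagates.
    (A simplification of OCL's actual, slightly lazier tables.)"""
    return UNDEF if a is UNDEF or b is UNDEF else (a and b)

# A constraint in the style of 'p.mod(i) <> 0' is undefined when i = 0:
not_divides = strict_lift(lambda i: 10 % i != 0, dom=lambda i: i != 0)
print(strict_and(not_divides(3), not_divides(4)))  # True
print(strict_and(not_divides(3), not_divides(0)))  # None (bottom)
```

The translation described in this section instead restricts each function to its domain; the PVS type system then obliges the user, via TCCs, to prove that the argument lies in the domain (here, i != 0), so bottom never arises in the two-valued setting.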
4 Initial Experience

The formalism and the methods described in this paper have been implemented in a prototype. We have tested the compiler by, e.g., verifying the object-oriented version of the "Sieve of Eratosthenes" described in Sec. 2. We have proved the safety property that only prime numbers are generated. The proof uses TL-PVS [10]. The complexity of the transition relation generated by our compiler proved to be challenging; it appears that this complexity is inherent to the UML semantics. The most difficult part is reasoning about messages in queues. The concept of messages preceding one another, crucial for the sieve, is difficult to work with for purely technical reasons. The proof of the sieve depends on the facts that no signals are ever discarded and that signals are taken from the queue in first-in-first-out order. These two properties have to be specified in PVS as invariants and proved separately.\footnote{This is not a limitation of PVS but a limitation of interactive theorem proving in general.} Note that, if one of the two properties did not hold, the sieve would not satisfy its specification.

The run-time of our compiler is usually less than a minute, and it is in any case dominated by the time required to prove the model correct in PVS. The coarse level of specification in OCL is not sufficient to automate the whole verification process. For the proofs, annotations of the states of a state machine expressing invariants might be highly useful, as these have to be formulated as intermediate steps in the current proof. This extension entails some changes to the semantic representation, as the resulting proof method is more similar to Floyd's inductive assertion networks (see [4] for references) as implemented in [3].

References
TRSkit: A Simple Digital Library Toolkit

Michael L. Nelson <m.l.nelson@larc.nasa.gov>
Sandra L. Esler <s.l.esler@larc.nasa.gov>

NASA Langley Research Center
MS 157A
Hampton, VA 23681-0001

January 31, 1997

Abstract

This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is the person that must continuously and synchronously distribute anywhere from 100 - 100,000's of information units and does not have extensive resources to devote to the problem.

1.0 Introduction

This paper introduces TRSkit, a simple toolkit and methodology for quickly implementing a digital library using the World Wide Web as the transmission medium.
This design is based on the successful architecture implemented in the Langley Technical Report Server (LTRS) (Nelson & Gottlich, 1994; Nelson, Gottlich, & Bianco, 1994) and the NASA Technical Report Server (NTRS) (Nelson, et al., 1995), but may be shaped to fit most distribution paradigms. TRSkit is designed to be simpler to install and operate than similar systems, such as the Dienst protocol (Davis, Krafft, & Lagoze, 1995). The intended customer is not necessarily the information science researcher, but rather the individual that is responsible for the dissemination of a body of information units and does not have a lot of time or money to devote to the task. Therefore, this scheme is specifically aimed at government research laboratories, universities and other organizations that desire to distribute their information and do not have distribution limitations or chargeback considerations. This model could be grown to incorporate payment and security, but these are not addressed at this time.

The software introduced in this paper is available at: http://techreports.larc.nasa.gov/ntrs/xtrs.html

2.0 Guidelines

Some basic presuppositions guided development. Understanding these provides insight into design decisions made during the development of TRSkit.

2.1 Simple things should be done simply

The requirement addressed is to set up a digital library that distributes documents with minimum difficulty. Keep in mind the customer is there to access and retrieve an information unit (report, data set, image, software package, etc.) in a convenient and timely manner.

2.2 Focus on distribution only

Organizations often tackle too large a problem and attempt to address editing, routing, approval, scanning, etc. in their digital library projects. While these areas are interrelated, there is considerable utility in initially isolating them and resisting broad-brush, "one-size fits all" solutions.
What is good internally for information production or collection management may not be how the intended customers wish to access data.

2.3 Deliver the final product, not just meta-data

Unless distribution is to be controlled for reasons of security or chargeback, avoid wasting time distributing abstracts or other meta-data. Concentrate only on delivering the final product (full publication, software package, etc.) to the customer.

2.4 Use the World Wide Web for distribution

Using the web seems obvious now, but it used to be a more controversial point. In the past, many database and digital library projects failed because they spent most of their resources defining specialized software for access, and none of their resources actually populating their database. Specialized client development is justified only when the application requirements are so specialized that they cannot be satisfied through a general web browser. The increasing use of Java (Arnold & Gosling, 1996) and Tcl/Tk (Ousterhout, 1993) for extending browser functionality should make specialized client development even more rare.

2.5 Set up your digital library in a proper environment

This includes both the development and production environment. Historically, this has meant using Unix; however, Windows NT has now duplicated most of the useful Unix functionality and could possibly be used as well (note that the tools presented in this paper have only been tested on a Unix platform). Be sure that the machine chosen is correctly configured to withstand the increased load of an active digital library.

3.0 Background

TRSkit is stable and mature enough to form the foundation of popular production systems. The components described here were assembled beginning in 1992, starting with the creation of the Langley Technical Report Server. The follow-on project of the NASA Technical Report Server produced even more components.
LTRS has been in operation since January 1993, and NTRS has been in operation since June 1994. Usage has continually increased over time, with NTRS averaging over 30,000 searches a month from sources worldwide (Figure 1).

Figure 1. Average Usage of the NASA Technical Report Server. (Note: NTRS was not available during the US Government Furloughs of November and December 1995.)

It is important to note that the toolkit is not only useful for publications servers (Table 1). It has been demonstrated for photo, fact sheet, and software servers.

4.0 Architectures

The architecture is composed of an underlying framework wrapped with interfaces to hide the details of the system from the user. Hence, there are two distinct points of view: the user’s and the maintainer’s.

4.1 User’s Point of View

Figure 2 illustrates how the user perceives the system. Options for browsing and keyword searching are implemented. Browsing provides a method of perusing meta-data, such as abstracts and citations, with links to the appropriate final products (publications, software packages, etc.). Keyword searching provides similar access. A user enters the keywords, a search is performed against the indexed meta-data, and a list of titles is returned. The titles then lead to a meta-data page, with the final product linked from there. The intent is to offer the user a lightweight mechanism to refine their search or browsing by offering increasing levels of description about the final product, which is assumed to be heavier weight to download, whether in retrieval time or even chargeback.

4.2 Maintainer’s Point of View

The maintainer is responsible for tracking two items: the meta-data and the actual data item. A simplifying assumption is made that the actual data item cannot be searched or otherwise have its information extracted. For instance, the data item may be an image, executable software, or an arbitrary document type.
Experience from LTRS has indicated that users do not mind this restriction. The logical organization that the maintainer keeps does not have to be the same one that the user sees. Thus, the data items are free to be files in a file system, or objects in a database. The meta-data could be either as well, but in practice, it is easier to keep meta-data as a collection of files. Note that the meta-data is not archived in HTML. There is great utility in archiving the meta-data in a structured format that is presentation independent. It is trivial to convert the meta-data to HTML either on the fly or in advance.

5.0 Components

TRSkit is composed of the following set of freely available components designed to create an efficient digital library system (Figure 3):

- Meta-Data Processing Tool
- Report Object Management
- Search Engine
- Administration Scripts
- UNIX and Perl

5.1 Meta-data Processing Tool

In digital libraries, the actual data items do not get manipulated by the maintainer. Often they cannot be indexed or searched because they are images, executable software, or other difficult-to-abstract data formats. Thus only a tool for manipulating meta-data is employed. The tool used is a modified public domain Perl script, which converts citations and abstracts in bib/refer (Lesk, 1978), bibtex (Lamport, 1986), RFC-1807 (Lasher & Cohen, 1995), and other formats into HTML. Figure 4 contains an example of meta-data used in TRSkit, which is predominantly in refer format:

A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines.
It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.

The key responsibility of the meta-data is to store and link the appropriate data items. Related to this is the decision of choosing a structured format for the meta-data. There is no "right" format, but it is usually advantageous to choose a common format because tools for conversion are likely to be written. It is important to note that refer supports multiple data formats (i.e., PostScript, PDF, HTML, GIF, etc.). The "%U" tag is used by default to denote the URL of the file, but other fields are defined to handle specific data types:

- %U-text/html
- %U-application/pdf
- %U-application/postscript

Using MIME types (Borenstein & Freed, 1993) allows a clear path for future data object formats. If these fields are defined in the refer citation, then an extra line labeled "Format(s):" will appear with the options that are available.

5.2 Report Object Management

TRSkit can support several models of meta-data and report (or other data object) relations. Originally, LTRS began with one meta-data object referencing one report object, generally a compressed PostScript file. With the increased popularity of Adobe Acrobat (PDF), many meta-data objects now refer to multiple format types. This led to the MIME type extensions introduced above. Currently, LTRS and other services maintain both their meta-data and their report objects as files in a standard Unix filesystem.
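To illustrate how the %U MIME-type convention might be consumed, the sketch below parses a refer-style record into a URL-per-format map. This is only an illustration in Python — TRSkit's actual tool is a modified Perl script — and the record contents (titles, names, URLs) are hypothetical placeholders.

```python
# Minimal, illustrative parser for refer-style meta-data records,
# including the %U-<mime> alternate-format fields described above.
# Not TRSkit's actual implementation.
def parse_refer_record(text):
    record = {"formats": {}}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("%"):
            continue
        tag, _, value = line.partition(" ")
        if tag.startswith("%U-"):
            # e.g. "%U-application/pdf http://host/report.pdf"
            record["formats"][tag[3:]] = value
        elif tag == "%U":
            record["url"] = value
        elif tag == "%T":
            record["title"] = value
        elif tag == "%A":
            record.setdefault("authors", []).append(value)
    return record

# Hypothetical record; title, author, and URLs are placeholders.
sample = """\
%T An Example Technical Report
%A Doe, J.
%U http://example.gov/trs/TR-1997-123
%U-application/pdf http://example.gov/trs/TR-1997-123.pdf
%U-application/postscript http://example.gov/trs/TR-1997-123.ps
"""
rec = parse_refer_record(sample)
```

A "Format(s):" line in the generated HTML can then be rendered directly from the `formats` map, one link per MIME type.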
Migrating the report objects to reside in a true database would not greatly impact TRSkit, since the %U field can be used to “point” to objects within a database. Storing the meta-data itself in a database is possible, but would require more modifications to the meta-data processing tools.

5.3 Search Engine

A search engine extracts or filters information from the data collection, transforms it into an index format, interprets the user’s request, and produces a summary of matches relevant to the search term discovered within the index. As determined by the NASA Indexing Benchmarks, search engines can be grouped into two levels according to their performance characteristics (Esler & Nelson, 1996). Level one engines are efficient on small to medium sized data collections. Level two search engines are ideal for data collections up to and beyond 100MB. Table 2 contains recommended freely available (non-commercial) search engines.

Table 2. Recommended freely available search engines.

<table>
<thead>
<tr>
<th>Engine</th>
<th>Level</th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>freeWAIS-sf</td>
<td>2</td>
<td><a href="http://ls6-www.informatik.uni-dortmund.de/ir/projects/freeWAIS-sf/freeWAIS-sf.html">http://ls6-www.informatik.uni-dortmund.de/ir/projects/freeWAIS-sf/freeWAIS-sf.html</a></td>
</tr>
<tr>
<td>FFW</td>
<td>1</td>
<td><a href="http://eminfo.emc.maricopa.edu/ffw/ffwhome.html">http://eminfo.emc.maricopa.edu/ffw/ffwhome.html</a></td>
</tr>
<tr>
<td>Harvest/GLIMPSE</td>
<td>2</td>
<td><a href="http://harvest.cs.colorado.edu">http://harvest.cs.colorado.edu</a></td>
</tr>
<tr>
<td>Isite</td>
<td>1</td>
<td><a href="http://www.cnidr.org/Software/software.html">http://www.cnidr.org/Software/software.html</a></td>
</tr>
</tbody>
</table>

Each of the above search engines is simple to implement and requires minimal customization.
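The 100MB rule of thumb above can be applied mechanically when sizing a collection. The helper below is hypothetical (Python, not part of TRSkit): it totals the meta-data file sizes under a directory and recommends an engine level.

```python
import os

# ~100MB threshold, per the NASA Indexing Benchmarks guidance above.
LEVEL_TWO_THRESHOLD = 100 * 1024 * 1024

def recommend_engine_level(meta_data_dir):
    """Hypothetical helper: sum the sizes of all files under
    meta_data_dir and recommend a search-engine level (1 or 2)."""
    total = 0
    for root, _dirs, files in os.walk(meta_data_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return 2 if total >= LEVEL_TWO_THRESHOLD else 1
```

As the text advises, when growth is expected it is safer to choose a level two engine even for a collection that is small today.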
Although the level one search engines are superb for small libraries and personal file systems, they are not suggested for large-scale distribution. Unless the domain of the information within your digital library is restricted, plan for success and choose a level two search engine to ensure a framework that is expandable. Thus, the implementation of freeWAIS-sf or Harvest/GLIMPSE is suggested for most digital libraries.

5.4 Administration Scripts

Administration scripts provide a backbone for efficient operation within digital libraries. Scripts included in TRSkit are capable of the following automated tasks:

- Database and usage graphs generated from logfile interpretations.
- Updating HTML interfaces according to objects in the data collection.
- Forms to add, edit, and remove objects by members of the administrative team.
- Forms to set search defaults and other options to provide guidance for the novice user.

These scripts play an essential role in the maintenance performance of a digital library. As the data collection grows, it becomes necessary to implement more automated scripts to ensure the timely upkeep of the system. Scripts included in this toolkit can be run interactively or out of cron.

5.5 Unix and Perl

Although it is possible to set up a digital library on a PC or Macintosh, TRSkit services to date have been set up on Unix workstations. TRSkit has not been tested on non-Unix machines. Perl is an interpreted language, similar to C and C++, that was created to parse arbitrary files, extract information from them, and manipulate the information into reports. Thus, it works efficiently with digital libraries and system management in general. Perl can be obtained free of charge at: ftp://ftp.netlabs.com/pub/outgoing/perl5.0

6. Implementation

Implementation of TRSkit is a simple process. First, it is necessary to have a WAIS URL that points to the abstract database and a URL that points to the query interface.
Second, a version of WAIS must be installed (preferably freeWAIS-sf). Other search engines can be used in its place, but may require more customization. Third, the citations and abstracts must be put in the chosen structured meta-data format. Using refer is favored, but other formats can be used after some customization of the administrative scripts. Detailed information discussing implementation and source code is available at:

7. Future improvements

After establishing the core functionality, new services can be added to the digital library. These could include adding caching optimizations, different front ends or options for beginning and advanced users, or storage-side enhancements. One improvement under development is adding the hooks necessary to support fielded searches where available. Most importantly, these improvements can be added incrementally as resources allow or requirements demand, while the system continues to serve. The modular nature of these components allows additions and upgrades without disturbing the larger system.

8. Lessons learned

When designing a digital library, there are a number of issues to consider. Most importantly, although this may start out as an expected low-volume server, prepare for success. Choose an architecture that will easily expand to accommodate a popular resource. Additional lessons learned with the design and implementation of TRSkit include:

- Scaleable naming scheme
- Never throw away source to previous formats
- Use rapid prototyping

8.1 Scaleable Naming Scheme

Choose a scaleable naming scheme on the implementation side. For instance, the turn of the century poses a significant challenge to the computing industry because many software designers neglected to prepare for the year 2000 when they chose to use a two-digit labeling scheme for years (i.e., 96 versus 1996). Secondly, it is important to include the institution in the naming of the data.
For example, using TR-1997-123.pdf may be sufficient from a maintainer’s point of view, but using MIT-EE-TR-1997-123.pdf is better. Additionally, if the report objects are going to be stored on a filesystem, choose a properly structured hierarchy that can accommodate thousands of report objects. A file path similar to:

```
.../report-server/institution/department/year/file.format
```

is preferable to the more often used:

```
.../report-server/reports/file.format
```

8.2 Retain Original Source

Never throw away the original format or source for the data items. Present formats may fail or prove less productive than previous versions, and storage is much cheaper than regenerating the data item. The original format (Word, WordPerfect, TeX, etc.) does not have to be served to the public, but keeping it increases flexibility for future conversion efforts.

8.3 Rapid Prototyping

Use rapid prototyping: start small and get customer feedback early. Many digital library and data management projects never get off the ground because of the sheer size of their legacy information problems. Do not let that dissuade you from providing meaningful, albeit small, services today (Figure 5).

Figure 5. Past and Future Issues in publishing and distribution.

The size of the archiving and library issues should not interfere with the data management and publishing issues; they are parallel activities. Coordination between the activities is required; however, in a limited-resource environment, an adequate digital library can be built using TRSkit to sample the existing publishing process and populate the collection.

9. Conclusion

Large-volume, customer-pleasing digital libraries are not strictly the realm of complex research prototypes or expensive commercial systems. The presence of the universal information client, the WWW browser, greatly simplifies providing access to multimedia digital libraries.
Organizations that have limited resources yet need to distribute their information collection in a scaleable manner can use TRSkit to build an extensible digital library. TRSkit has been in use in the many servers that make up the NASA Technical Report Server and other resources since 1994. The collection of tools that make up TRSkit consists of freely available components that provide for easy distribution on the World Wide Web. Further information and source code is available at: http://techreports.larc.nasa.gov/ntrs/xtrs.html

References
CHANGING TIMES

Quality for Humans in a Digital Age

Rich Rogers

HEDDON PUBLISHING

Chapter Three: Things to do

“We’re all busy people. We all have things to do, but it’s important to be here for the start of the meeting. In our work, we can’t afford to be late because, as I think I’ve mentioned before, news doesn’t sleep.”

Jonny Wilson’s catchphrase prompted a groan from some of the gathered writers who made up the Sydney Times’ Sports desk. Kim noticed that Andy, the junior who had arrived a few minutes late and prompted Jonny’s comments, was looking down at his feet, no doubt feeling a little self-conscious but also perhaps suppressing a smile at the reaction. Jonny was the Sports Editor and, in part due to his close relationship with the paper’s editor, he was a prominent person at the *Times*; part of a small group who had inside knowledge of important management decisions and the strategy of the owners. His fondness for making jokes, coupled with a self-confidence which he didn’t always remember was not shared by all those around him, could sometimes lead to insensitivity in the way he dealt with others. One evening over drinks with the team, Kim had spoken to him about this, and he had promised to try to rein in the wisecracks, and to avoid giving people a hard time in front of their colleagues. Clearly, he needed to try a little harder.

Kim got on well with Jonny, and generally enjoyed working with him. They were around the same age and had been working together for a couple of years, after he had come to the paper from an Auckland-based daily. He had a quite different style from Norman White, the previous Sports Editor. Where Norman was interested in details, sometimes drawing criticism from the journalists for micro-managing their work, Jonny liked to keep his involvement to a minimum. In his own words, he liked to “steer things” or “point people in the right direction”.
He had a knack for reducing complex subjects to short statements or comments. Sometimes this worked well and other times people longed for more information. Kim liked the freedom which came with Jonny’s approach, but sometimes wished for a little more clarity about what he actually wanted.

As usual, his weekly update covered a few of the previous week’s highlights: mentions for stories which had been favourably received or had generated a reaction in social media – along with some ideas for stories which might take the spotlight in the coming days. This week there were also two items of general interest: a vague update on the technical problems that had affected the system they used for submitting articles ("they’ll get it sorted this week"), and a brief summary of the circulation figures which had been reported the previous day ("pretty crap").

The weekly meeting was different to the daily editorial meetings where they talked about which stories would run in the paper and online, and how prominent they would be. During that meeting, the writers often pitched ideas for feature stories which could run alongside breaking news and sports results. Those discussions were usually interesting, but the more mundane weekly ones often caused Kim’s attention to drift elsewhere. On this occasion, her gaze was drawn towards the view of Darling Harbour and a fond but patchy memory of a drunken evening spent on a harbour cruise. She was brought abruptly back to the here and now by Jonny’s voice.

“... so, Kim and Ian, I’d like you to come along, please. Can you both make it?”

“No problem for me,” replied Ian.

Unaware of when and where her presence was requested, but keen to avoid admitting to daydreaming, Kim bluffed an answer. “Yes, that should be fine. I’ll just double-check my calendar after this.”

Jonny gave her a strange look, perhaps realising that she hadn’t been listening, but he managed to resist the temptation to make a joke at her expense.
Once the meeting was over, she asked Ian what Jonny had been talking about.

“It’s a meeting this afternoon with a company called AI-Bel who have some software which he wants us to look into,” he explained. “I don’t know too much more about it. Where had you drifted off to, anyway?”

“Oh, I don’t know,” replied Kim. “I was just thinking about something. You know how it is in Jonny’s meetings.”

“I do, only too well. Don’t worry about it. I’m not sure Jonny really noticed, and we’ve got some time to look into it. I’ll find out what I can and we can talk it through. Coffee and a chat later?”

“Sure,” Kim replied, smiling at the way Ian slipped into his supportive, mentoring role. “We can catch up before the meeting.”

Kim found a workspace in ‘The Stadium’; an area of the office with a few banks of hot-desks for sports writers to use. When the office space had been rearranged, someone had come up with the idea of naming the different areas according to the kind of stories which the people who would be sitting there worked on. The Finance writers worked in ‘The Pit’, named after the old trading pits of Stock Exchanges, the political writers were in ‘Parliament’, and so on. It was intended to be fun but some of the old-school journalists found it puerile. They also resented losing their allocated desks.

Kim connected her laptop to the screen on the desk. Mercifully, there were no operating system updates to install so she was soon ready to start work. Unfortunately, her mind was racing, with different thoughts swirling around and competing for attention; not all relating to work. If she was to get into the right frame of mind, she needed to eliminate some of the distractions. She formed a mental to-do list. There was some preparation to be done for the meeting with AI-Bel, but she could leave that until she caught up with Ian just before they went in. Better for them to go through it together.
Then there was the confusion over the electronic ticket on this morning’s bus journey to sort out, and those emails which she had been looking at over breakfast. The one from Chloe Clark had been important, so that seemed a good place to start.

Chloe was Editor of an independent magazine in the UK called Two Halves. It was a well-established publication which had been running for over twenty years and had a growing audience subscribing to an online edition. The magazine focused on football and, where it had once been devoted to the clubs and leagues in the UK, it now featured many stories relating to the global game.

Kim’s friend Paul worked for the magazine and it was while she was staying with him during her visit to London the previous year that he had floated the idea of Kim taking up a role with the magazine. He had spoken to Chloe, who had seemed keen to bring Kim in. At first, Kim had dismissed the idea, being reluctant to consider leaving a large daily newspaper for something which she considered less prestigious. She didn’t tell Paul this was the reason, instead saying that she wanted to continue her involvement in a broader range of sports.

Over the last couple of months, however, she had started to give it serious consideration. Some of the changes at the Times had led to a strained atmosphere, with growing resentment of the new generation from the older hands. The circulation figures were a constant source of discussion and there had been rumours that the owners may even consider becoming an online-only publication. Not that this in itself troubled Kim - she was familiar with modern digital editing and publishing methods, and generally felt positive about the new ways of working. But the uncertainty at the paper was making people feel unsettled and morale was low.
So far, Kim and Chloe had only been in touch via email but Kim had agreed to have a video call with Chloe, to see what she had to say about the magazine and what kind of work might be available. Kim’s working theory was that they might need someone to report on the leagues in Australia, and also to write feature articles on the growth of the game in this region. Chloe seemed interested in the work Kim had done with the Times website team, and in her activity on social media sites, as much as in her writing. Maybe they wanted to grow their digital presence and expand coverage in Australia and Asia.

Chloe was very keen to have the video call and was pushing for this week. As Kim looked at her calendar she saw that, as usual, most of her evenings would be busy with interviews or attendance at sporting fixtures, but Thursday night was an exception. Kim replied to Chloe, suggesting that they talk at nine o’clock Sydney-time on Thursday evening, which would be mid-morning in the UK. She also sent a short email to Paul, asking if he knew anything that might be useful for Kim in advance of the call.

With these two messages sent, the next email caught Kim’s attention – the reminder from OzTelia, the phone company. The bill was paid but there was still the Direct Debit to set up. She clicked on the link in the email and a web browser window opened. It seemed as though setting up a Direct Debit should be a relatively simple process but the website was apparently created with a secondary purpose as a kind of psychological test.

She remembered a lecturer at university who had shown them something called the ‘Five Room Puzzle’; a challenge where you were shown a plan of a house with five rooms and asked to draw a single continuous line which crossed each of the walls just once. It was an ingenious, devious device, intended to frustrate, and ultimately to force the students to accept that there was no legitimate solution.
For a group of smart people who liked to solve problems, this was not easy. It was, however, perfect preparation for the experience of using OzTelia’s website.

Much like the puzzle, it had looked deceptively easy at first. The section of the website for Direct Debits gave a choice of options: Setup, Amend, or Cancel. Selecting the first of these led to a login page with some instructions about what to do once she had logged in. Of course, once she had logged in, the instructions disappeared, which made things a little complicated. Kim opened another tab in the browser and found the instructions again. Unfortunately, the words and pictures which described the account page and the steps to follow were very different to what she saw on her own screen. There was no sign of a link to create a Direct Debit anywhere but there was a little picture of the company’s mascot; a kind of cartoon mobile phone called Ozzie, which she remembered from their TV ads. Ozzie had a little speech bubble saying: ‘Need help?’ She most certainly did.

When she clicked on the picture of Ozzie, a ‘Support’ page appeared, containing a long list of possible topics to read. There was no time to read through them all but the page also had a place to search for a term so she typed ‘Direct Debit’ and hit the button. A picture appeared of Ozzie riding a bike, with an animated spinning circle around him and the words ‘Working on it...’ underneath. After a few seconds, a familiar sight. The search had taken her back to the page with options to Setup, Amend or Cancel; the same place where the process had started. This was no help at all.

Surely it was possible to talk to someone instead. After a bit of digging around, Kim found a page called ‘Getting in touch with us’. It looked like there were different paths to follow, depending on what you wanted to talk to them about. One of the options was Billing, and this led to another list of choices.
About halfway down that list she spotted ‘Setting up your Direct Debit’. She selected it. After a short interlude during which Ozzie once more appeared on his bike, the screen showed those same three options: Setup, Amend, and Cancel. It didn’t seem to matter what she did. It was her destiny to return to this page. Suddenly, a box popped up on the screen:

>> Hi, this is Jim. Can I help you with anything today?

She’d seen this sort of thing before; a kind of chat support where they tried to help you without actually talking to you. Ian had told her that they were not real people she was chatting with but some kind of automated response system. It was worth a try:

>> Hi Jim. I’m trying to set up a Direct Debit but I can’t work out how to do it

A short delay, then:

>> I can help you with this. Who am I chatting with?
>> My name is Kim
>> Hi Kim. And you need help with a Direct Debit?
>> Yes
>> OK. There are three options for Direct Debits. You can Setup, Amend or Cancel. Which of these would you like to do?
>> Setup
>> Great. Can you please tell me what sort of service you have with us. Mobile, home phone or broadband.
>> Home phone and broadband
>> Is this an ADSL or Cable connection?
>> I’m not sure... Cable I think
>> Unfortunately there is no facility for setting up Direct Debit online for Cable customers. Please call us and we can set this up for you over the phone.

Kim closed the chat window in frustration. What a waste of time. Why send out an email telling her to set up Direct Debit online if she couldn’t do it, and why didn’t the website tell her which customers could or couldn’t do it? Now she would have to try to find a phone number to call them and would probably have to sit on hold for twenty minutes. Maybe another day. She felt a little guilty about shutting down the chat with Jim mid-conversation, but if Ian had been right and Jim really was just a robot then she probably hadn’t hurt his feelings too much.
Empathy: the essential ingredient

If you use technology, there is a good chance that at some time or another you will have had a similar experience to Kim. Carrying out a seemingly simple task - in this case, setting up a direct debit for a phone bill - is thwarted by a website which has been designed and built in such a way that the task becomes prohibitively difficult.

The frustration Kim feels is largely down to the content of the site, and the poorly defined links which repeatedly send her back to the same page. But when she finds herself in difficulty, she also feels unsupported. The site’s support pages are no help whatsoever and the chat system, whilst finally providing some kind of explanation, does little to improve Kim’s mood (or her impression of the website and the company). She notices some of the visual elements which have been included, perhaps in an attempt to enliven what might be rather mundane activities, but animated pictures of cartoon mobile phones do not compensate for being unable to do the thing you set out to do.

During her efforts, Kim is reminded of an unsolvable puzzle and sometimes it can indeed feel as though these sorts of difficulties are the result of some cruel trick; a deliberate attempt to frustrate us through the use of deception and misinformation. It would be an unusual (and probably unemployed) designer who set out to create such a troubling product. Poor experiences are less likely to be by design than the result of errors, bad decisions, poor management, or a combination of these.

Perhaps Kim has little option but to continue using OzTelia’s products and services, regardless of how ineffective they are. It might be that they operate in a market with few competitors and have little incentive to change or to think about how people will feel when they use their products. Kim’s experience suggests that, in the design of their website at least, the company has not shown the care and consideration their customers would hope for.
In this section, we will explore how empathy – the ability to see things from another person’s perspective – can help people involved in product development to consider the people who use those products, and help companies like OzTelia create better experiences for their customers. We will look at some of the techniques used in software development which have contributed to an empathy gap, and some methods which have been adopted in an attempt to close that gap.

**A lesson from history**

During my teenage years, I was given a lesson in the power of empathy. My family had moved to a new city and I had moved to a new school. It was an important time for me in many ways, not least of which was my education. I was about to enter the last two years of schooling, the period when my marks contributed to the final grades I would receive. Whilst some of the subjects I would study were mandatory, I had the opportunity to elect others. The selection mechanism meant that I had to decide between subjects from various groups: two from the Sciences, one from Humanities, and so forth. My criteria for deciding were not entirely logical and dispassionate. A bad experience with dissection of frogs meant that Physics got the nod over Biology, and a particularly tedious lesson in rock formations probably edged out Geography in favour of History.

History was not a subject which I had shown any great interest in, or aptitude for, up to that point. This changed during our first lesson with my new teacher, when she explained her teaching philosophy to the class. In particular, she emphasised the importance of empathy, which she said was central to the way we would learn. She even wrote the word on the blackboard in big letters to help us remember. It certainly stuck in my mind. Our teacher went on to describe how we could better understand the events we would learn about by placing ourselves in the position of the people who witnessed them, or in some cases by hearing from those people.
The subject matter was the Second World War, and over the following weeks we watched several episodes of the documentary series *The World at War*, produced in the 1970s by Jeremy Isaacs. This was a landmark documentary series, and perhaps its greatest strength was its ability to provide an insight into the experiences of the people who lived through the events leading up to the Second World War, how the conflict affected their day-to-day lives, and the consequences for them in the years that followed. Interviews were used as a technique to capture their recollections but also, crucially, their emotional responses. We could see the pain in their eyes and hear their voices breaking. We could begin to comprehend the sadness and despair of victims, or the guilt or defiance of perpetrators of violent acts and unspeakable atrocities.

As a student of this period of human history, I was immediately captivated by the idea of putting myself in the shoes of these witnesses. Hearing their voices and stories, and seeing how they were affected, was a world away from my previous experience of History lessons: endless lists of events, dates and numbers, and little more than a memory exercise. Which year did Charles II become King of England? When was the Magna Carta signed? How many troops were involved in the D-Day landings? These are things which I was taught but which I could now only answer after a quick Google search or a visit to Wikipedia. But ask me how the Allied soldiers felt as they arrived in landing craft on the beaches of Normandy in June 1944 and I can tell you that they were terrified of what lay in wait, but elated to be leaving the small vessels which had been tossed around on the English Channel during the crossing. Some were simply happy to get back on dry land, despite the dangers ahead. I remember this because I heard it from those who were affected. Facts and figures might not convey real experiences, but hearing people’s stories connects us to them.
Their feelings and emotions foster empathy, and help us to understand their responses.

**Quantifying Quality**

If stories and feelings tell us more about human responses than cold facts, dates and numbers, this might provide a clue as to why software development teams, and the organisations they work with, sometimes struggle with the question of quality. In this field, refuge is often sought in numbers. As discussed earlier in the book, it can be challenging to reach an understanding of what quality means. Perhaps in an attempt to make this task simpler, to reach a consensus among those involved, there is a tendency to attempt to quantify quality. By setting numeric quality targets - sometimes called exit criteria - a desired state can be agreed upon; a point at which the product is deemed good enough for whatever follows, including whether it is ready to be released to customers. The numbers act as a comforting security blanket, creating a sense of control. Even if the picture they paint is troubling, at least they provide a means of seeing that picture.

Unfortunately, whether the picture is troubling or not, it may well be a misrepresentation. Common metrics such as counting test cases passed and failed, or counting defects, are unreliable, given that they rely on equivalence (each item counted being of the same value or significance) and ignore subjectivity (what you consider acceptable, I might consider unacceptable). They are also very limited representations, excluding crucial information which is not, or cannot be, counted. For example, a count of test cases does not represent all the testing which occurs, as it excludes exploratory or investigative work beyond the narrow confines of those test cases, and a count of defects excludes problems found and fixed without ever being recorded. Not to mention that the numbers tell us nothing about aspects of the product which have not been tested, and problems which may yet be found.
Quality is not measurable in the way cost or time might be measured. If we accept that quality is subjective and variable, how can it be a fixed attribute which can be counted? Cost can be measured in units of currency: dollars and cents, pounds and pence. Time can be measured in units of hours, days or weeks. There is no such unit of measurement for quality.

This is not to dismiss metrics entirely. Whilst they cannot be treated as absolute measures of quality, some can be useful, and a range taken together can, if treated with care, be a helpful tool in identifying patterns and potential problems. I have myself spent time carefully preparing reports containing metrics. As difficult as it might sometimes be, though, the less time and effort spent trying to quantify quality, the better. Numbers might tell us something about products, but not everything, and metrics, even combinations of metrics with clear patterns, offer no guarantees about the impression customers will form of technology. As I discovered in my History lessons, and the conversations triggered by my work reports, storytelling can be far more effective in understanding human responses and how others see things.

**Mind the gap**

The connection which stories can create helps us to address the empathy gap which can occur when one party is unable to understand how another party might feel in a given set of circumstances. Stories can help bridge this gap by providing insight into the perspectives and feelings of others. Such a gap can easily occur in the development of products such as OzTelia’s website; internally, between those working together, and externally, between those people and their customers. When there is pressure to deliver a new product, or to make important changes, we may lose sight of the pressure that colleagues are under, perhaps becoming critical of their decisions or efforts. This can be exacerbated when there is miscommunication and misunderstanding around the objectives of the work, the business benefits, and the judgement of what matters to customers. That judgement may be flawed if customers have not been consulted, or techniques applied which help capture their point of view.

Understanding who customers might be and how they could get value from a product is crucial in the quest for quality. It is worth emphasising that this does not mean that customers always know best when it comes to designing products, or that radical and innovative thinking on the part of product designers cannot lead to wonderful, useful and valuable products.
The importance of innovation in providing people with something they come to value can be seen in products such as the Ford Model T and, a century later, the iPod. Yet these famous inventions were not born of a conscious effort to bypass customers or a feeling that designers knew best. Quite the opposite, in fact - they resulted from a deep empathy with real people, the problems those people faced, and how their lives could be improved. Henry Ford recognised that people wanted a fast, affordable, and maintainable method of transportation. He knew that families and individuals alike would relish the opportunity to own a car like the Model T, and he also understood the freedom which it represented. Apple, where Sir Jonathan Ive designed the iPod, knew that people wanted a portable means of storing and listening to music; something which had capacity for their music collections but which was lightweight and simple to use.

At Apple, the freedom to experiment and innovate came from the top of the organisation, and an understanding that great products can result from thinking about customers first. At a presentation to a large group at Apple’s Worldwide Developer Conference (WWDC) in 1997, Steve Jobs, the company’s CEO, explained his philosophy: “One of the things I’ve always found is you’ve got to start with the customer experience and work backwards to the technology... What incredible benefits can we give to the customer? Where can we take the customer? Not start with ‘let’s sit down with the engineers and figure out what awesome technology we have and how we are going to market it.’” This attitude probably helps explain, in part at least, some of the achievements of that company over subsequent years. It would be wrong to suggest that Apple always get things right but, when they do, they design and build spectacularly successful and popular products which many of their customers love.
**Degrees of separation**

It isn’t always easy for those who work on the development of technology to empathise with their customers, particularly if they have little direct interaction with them. Whether by accident or design, opportunities for development teams to interact directly with customers can be scarce. In 1929, the Hungarian writer Frigyes Karinthy described the theory of ‘Six Degrees of Separation’; the idea that any two people can be connected to each other through a chain of six or fewer steps. In software development, it can sometimes feel as though those degrees of separation are actively maintained between customers and the technical people working on products for them. Instead, techniques and methods have been used which are intended to bridge the gap; to capture what those customers want, to distil and refine these desires, and to relay them to technical people.

For many years, the dominant method used in software development was written requirements, captured in documents which were often extensive and weighty. This method is underpinned by the premise that the documents accurately capture the needs and desires of customers. The theory is that, through requirement document reviews and rework, it is possible to reach an accord on what a product should be like; how it should behave and operate. The documents can be passed on to those tasked with building, testing, supporting and maintaining the product, and a shared understanding can be reached. Following this approach means that those people need not concern themselves with why this work is required - technical people can instead focus their energies on what they are building and how to do it. This way, customer empathy is lost. Concern for adherence to process overtakes concern for what people might actually want. If the product is built in such a way that it does what the requirements state, and if it can be demonstrated that it does so, then (so the theory goes) it will meet the needs of customers.
Unfortunately, the reality is often quite different. It is not difficult to imagine that this kind of approach might have been adopted in the development of the OzTelia website which caused Kim so much frustration. There could have been an extensive exercise in gathering, documenting, reviewing and reworking requirements. These requirements may have been translated into technical specifications and passed on to development teams, who were asked to build the different components of the website according to those specifications. Testers were very likely asked to do little more than confirm that what had been built conformed to the specifications. Little consideration would be given to customers like Kim, or their experience whilst using the website (although there would no doubt be some metrics which would reassure everybody involved about the quality of the product).

The consequences of relying on written requirements can go further than the effect on customers. Requirements can be ambiguous and inconsistent, and can be interpreted differently by different people. Those people may not even feel that they have the time (or inclination) to properly read and absorb them. As work on the product continues and new information or ideas come to light, the requirements can change. Misunderstandings occur and cause inefficiency as people attempt to reach a consensus. This in turn causes delays and inflated costs, which can result in projects being abandoned. If a product is ultimately delivered, often later than planned, it may be a product which:

- Fails to capture what was intended in the first place.
- Has taken so long to produce that it is no longer wanted or needed.
- Has been beaten to market by a rival product.

For public sector organisations, such an outcome can lead to poor services and great costs for taxpayers, embarrassing political failures, inquiries and investigations.
In the commercial sector, a poorly received product can be the difference between a company’s success or failure, perhaps affecting its value, share price, reputation, the jobs of people working there (whether they were directly involved in the product development or not) and, in some cases, the continuing existence of the organisation. There are numerous examples of failed technology projects which were either cancelled or subject to lengthy delays and huge increases in costs. In a climate where schedules are likely to be missed and costs to spiral out of control, a ‘blame culture’ can emerge. Consideration for how a product will be received can slip down the list of priorities. People will be more interested in putting themselves out of the firing line than into the shoes of customers.

Naturally, those whose ideas and business strategies instigate work on such projects, and those who control the budgets associated with these endeavours, have asked whether there is a better way to deliver products and change. Frustrated and unfulfilled technology workers have asked the same thing. After all, there is little pleasure in working in an environment of confusion and blame, where goals are not achieved and customers and clients are dissatisfied. From these frustrations and questions, and from a recognition that large complex releases have not always been in the interests of many organisations, their employees or their customers, the methods associated with Agile and Lean development have emerged and proliferated.

**Closing the gap**

As we covered earlier, Agile and Lean practices are intended to help deliver value to customers. In order to do so, there is a recognition that more effective methods are needed to address the separation of customers from the people working to develop products for them. An example is the Scrum Framework; an approach which involves development teams working in short, focused sprints on features prioritised by a Product Owner.
The Product Owner is the link between customers and the development team, defining the list of product features, and signing off on products when they are satisfied that the required work has been done. The approach replaces detailed requirements with more succinct User Stories, intended to explain who might want to use a product and describe what they might want to use it for. The role of Product Owner is one which requires deep customer empathy, an intimate understanding of their needs and desires, and an ability to express these effectively. Because this model maintains a degree of separation between customers and the development team, the Product Owner’s ability to communicate with both is crucial. If they are not effective in this regard, if they have not really understood customers’ desires, then they simply become the mechanism by which those desires are miscommunicated.

A crucial element of the Lean approach, meanwhile, is the idea of a Minimum Viable Product (MVP), to be provided to customers, or prospective customers, in order to obtain feedback and refine the product (or perhaps abandon it if feedback indicates that it will not ultimately be something which people want). Feedback can be elicited in many ways. Examples include:

- Observation of customers using a product, the way it is used, and the responses and reactions generated.
- Monitoring of visitors to web pages where customers can sign up for a product.
- A/B testing, which compares reactions and responses to two different versions of the product.
- Responses provided by customers to support teams and other customer-facing staff.
- Customer interviews where people speak about their experiences, expressing what they liked and didn’t like about a product.

This last technique demonstrates how customer empathy can be applied in a direct and meaningful way.
There may still be hurdles to overcome in reaching a shared understanding with the customer, and in relaying findings to different people working on the product, but the methods make it easier for everybody involved to hear the customer’s voice and to ask them about their responses.

Other simple and practical steps can be taken to foster empathy. For example, encouraging people involved in the design and development of applications and websites to try using their product with a phone’s accessibility features enabled, or on a computer with only a keyboard attached, can help them to better understand the experience for customers with visual impairments or an inability to use a mouse. Meanwhile, asking the people who develop an organisation’s applications and systems to spend time sitting alongside colleagues who operate those systems allows them to observe how they are actually used, the features which are popular (as well as those which are ignored), and the problems caused by some aspects of the design. Simple exercises such as these can provide designers and developers with real insight into what their work means for other people. Changes to systems, websites and applications can be made with those people, their needs, and the way they act in mind.

A final thought on the empathy gap. Sometimes overlooked is the simple fact that the people working on products are consumers of technology themselves. In some cases, they may use the very product they are working on in their day-to-day lives. If not, they are likely to find similarities between that product and other things they use. These similarities could be in the way products look, how they respond to certain actions, how easy or difficult it is to carry out tasks, or how those tasks are supported. Knowing what they like or don’t like, and what works or doesn’t work well for them, can be a great benefit.
It doesn’t mean that their personal preferences always represent the best path to follow but if something feels wrong, or their emotional response suggests a possible problem for customers, this is worthy of attention. It might just be the simplest way to bridge the gap.
Polygames: Improved Zero Learning Tristan Cazenave\textsuperscript{a}, Yen-Chi Chen\textsuperscript{b}, Guan-Wei Chen\textsuperscript{c}, Shi-Yu Chen\textsuperscript{c}, Xian-Dong Chiu\textsuperscript{c}, Julien Dehos\textsuperscript{d}, Maria Elsa\textsuperscript{c}, Qucheng Gong\textsuperscript{e}, Hengyuan Hu\textsuperscript{e}, Vasil Khalidov\textsuperscript{e}, Cheng-Ling Li\textsuperscript{e}, Hsin-I Lin\textsuperscript{c}, Yu-Jin Lin\textsuperscript{e}, Xavier Martinet\textsuperscript{e}, Vegard Mella\textsuperscript{e}, Jeremy Rapin\textsuperscript{e}, Baptiste Roziere\textsuperscript{e}, Gabriel Synnaeve\textsuperscript{e}, Fabien Teytaud\textsuperscript{d}, Olivier Teytaud\textsuperscript{e}, Shi-Cheng Ye\textsuperscript{c}, Yi-Jun Ye\textsuperscript{e}, Shi-Jim Yen\textsuperscript{c} and Sergey Zagoruyko\textsuperscript{e} \textsuperscript{a}LAMSADE, University Paris-Dauphine, PSL, France E-mail: tristan.cazenave@lamsade.dauphine.fr \textsuperscript{b}National Taiwan Normal University, Taiwan E-mail: zxkyjimmy@gmail.com \textsuperscript{c}AILAB, Dong Hwa University, Taiwan E-mail: sjyen@mail.ndhu.edu.tw \textsuperscript{d}University Littoral Côte d’Opale, France E-mail: dehos@univ-littoral.fr \textsuperscript{e}Facebook AI Research, France and United States E-mail: oteytaud@fb.com Abstract. Since DeepMind’s AlphaZero, Zero learning quickly became the state-of-the-art method for many board games. It can be improved using a fully convolutional structure (no fully connected layer). Using such an architecture plus global pooling, we can create bots independent of the board size. The training can be made more robust by keeping track of the best checkpoints during the training and by training against them. Using these features, we release Polygames, our framework for Zero learning, with its library of games and its checkpoints. 
We won against strong humans at the game of Hex in 19x19, including the human player with the best ELO rank on LittleGolem; we incidentally also won against another Zero implementation, which was weaker than humans: in a discussion on LittleGolem, Hex19 was said to be intractable for zero learning. We also won in Havannah with size 8: a win against the strongest player, namely Eobllor, with excellent opening moves. We also won several first places at the TAAI 2019 competitions and had positive results against strong bots in various games.

Keywords: Zero learning, Board games

1. INTRODUCTION

In spite of AlphaGo (Silver et al., 2016), some games still resist computers, and for many games the computational requirement is still huge. We present Polygames, our open-source zero learning framework available at https://github.com/facebookincubator/polygames. It is based on Zero learning, combining Monte Carlo Tree Search and Deep Learning. It features a new architecture for accelerating the training and for making it size-invariant (Section 2.1). It allows neuroplasticity, i.e. adding neutral layers, adding channels, increasing kernel size (Section 2.2), and warm start. Polygames also features a new tournament mode in order to make training more robust (Section 2.3). The framework provides a single-file API, that is generic enough for implementing many games, and comes with a library of games and many checkpoints. Basically, for adding a game, one must implement the transition function next (gameplay and reward), the mapping $s \mapsto \text{tensor}(s)$ representing a state as a 3D tensor, and a mapping identifying each action with a 3-dimensional location in the 3D output tensor. One can for example duplicate the code in connectfour.h and modify it. Polygames achieved the first ever wins against top-level humans at the game of Hex 19x19 (Section 3.1) and Havannah 8x8 (Section 3.3).

1.1.
Zero learning

AlphaGo and AlphaZero (Silver et al., 2016, 2017) proposed a combination between Monte Carlo Tree Search (Coulom, 2007) and Deep Learning. The version in (Silver et al., 2017) is a simplified and elegant version, learnt end-to-end.

1.1.1. Monte Carlo Tree Search

Monte Carlo consists in approximating values (i.e. expected rewards) by averaging random simulations. MCTS (Monte Carlo Tree Search) consists in biasing these random simulations: using the statistics from previous simulations, we increase the frequency of moves which look good. The most well-known variant of MCTS is Upper Confidence Trees (UCT) (Kocsis and Szepesvári, 2006), which uses, for biasing the simulations, a formula inspired by the bandit literature: a move is chosen if it maximises a score as follows:

\[ \text{score}_{uct}(s,a) = \text{avg reward}(\text{next}(s,a)) + k \sqrt{\frac{\log(\text{num sims}(s))}{\text{num sims}(\text{next}(s,a))}} \tag{1} \]

where \(s\) stands for a state, \(a\) for an action, \(\text{num sims}(s)\) for the number of simulations in \(s\), and \(\text{next}\) represents the dynamics of the game. An MCTS rollout consists in doing a simulation from the current board \(s_0\) and playing using the above formula until a final state is reached. We keep from Monte Carlo a random component in MCTS rollouts: in states without any statistics, the selection is simply random move selection. When \(M\) MCTS rollouts have been performed starting in the current state \(s_0\), we choose an action \(a\) maximizing e.g. \(\text{num sims}(\text{next}(s_0,a))\).

1.1.2. Neural policies

Let us assume that we have a function \(\text{tensor}\) which maps a state to a tensor \(\text{tensor}(\text{state})\) (to be used in Eq. 2 below). A neural network (NN) can then be applied.
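As a concrete illustration, the UCT rule of Eq. 1 can be sketched in a few lines of Python; the data structures and names below are ours, not Polygames code.

```python
import math

def uct_score(avg_reward, parent_sims, child_sims, k=1.4):
    """UCT score for one action: exploitation term plus exploration bonus."""
    if child_sims == 0:
        return float("inf")  # unvisited actions are tried first
    return avg_reward + k * math.sqrt(math.log(parent_sims) / child_sims)

def select_action(stats, parent_sims, k=1.4):
    """Pick the action maximising the UCT score.

    `stats` maps action -> (avg_reward, num_sims) for the child state."""
    return max(stats, key=lambda a: uct_score(stats[a][0], parent_sims, stats[a][1], k))
```

With a positive exploration constant `k`, a rarely visited move can outscore a move with a higher average reward, which is exactly the bias towards under-explored actions described above.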
We consider a NN with two outputs: \[ \pi_{NN}(\text{state}) \text{ is a tensor,} \\ V_{NN}(\text{state}) \text{ is a real number.} \] The NN typically uses convolutional layers, and then two heads for those two outputs; in the original AlphaZero, it contains fully connected layers in both heads. If we assume that each possible action has an index in the output tensor, then \(\pi_{NN}(s)\) can be converted into probabilities of actions by (i) multiplying by some temperature \(T\) and applying the \(\exp\) function to each entry, (ii) setting illegal actions in state \(s\) to zero, and (iii) dividing by the sum: \[ \text{logit}(a) = \pi_{NN}(s)(a) \\ \text{Proba}_{NN}(s,a) = \frac{\exp(T\,\text{logit}(a))}{\sum_{a'}\exp(T\,\text{logit}(a'))} \] We denote by \(\text{Proba}_{NN}(s,a)\) the probability of action \(a\) in state \(s\) for the neural network \(NN\). 1.1.3. Zero model of play: how the MCTS is modified by adding the NN The zero model is then as follows. First, the UCT formula (Eq. 1) is adapted into PUCT (Silver et al., 2017), as follows: \[ \text{score}_{\text{PUCT}}(s,a) = Q(s,a) + k \times \text{Proba}_{\text{NN}}(\text{tensor}(s))(a) \times \frac{\sqrt{n(s)}}{n(\text{next}(s,a))} \] (2) In Eq. 2, \( n(s) \) stands for \( \text{num\,sims}(s) \), \( Q(s,a) = \text{avg\,reward}(\text{next}(s,a)) \), \( \text{tensor}(s) \) is a tensor representation of the state, and \( \text{Proba}_{\text{NN}}(\text{tensor}(s)) \), the output of the action head of the neural network, is also a tensor shaped like the output space - e.g. in Go or Hex the output tensor shape is the board size\(^1\). Each action \( a \) is identified with a location in the output tensor of the action head, so that \( \text{Proba}_{\text{NN}}(\text{tensor}(s))(a) \) is a real number, namely the probability of choosing action \( a \) according to the neural network in state \( s \). Please note that, compared to UCT, the log has been removed, and the exploration constant \( k \) is multiplied by the probability provided by the NN.
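The PUCT selection rule of Eq. 2 can be sketched in a few lines (our own naming, not the Polygames API); following a common convention, we add 1 to the child visit count in the denominator so that unvisited children do not cause a division by zero.

```python
import math

# Minimal sketch of PUCT selection (Eq. 2). Names are hypothetical:
# q[a] is avg reward(next(s,a)), prior[a] is Proba_NN(tensor(s))(a),
# n_parent is num sims(s), n_child[a] is num sims(next(s,a)).
def puct_select(actions, q, prior, n_parent, n_child, k=1.5):
    """Pick the action maximizing Q(s,a) + k * P(a) * sqrt(n(s)) / (1 + n(next(s,a)))."""
    def score(a):
        return q[a] + k * prior[a] * math.sqrt(n_parent) / (1 + n_child[a])
    return max(actions, key=score)
```

Early in the search the prior term dominates, so the network guides exploration; as visit counts grow, the averaged rewards \(Q\) take over.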
Second, when a simulation reaches a state for which no statistics are available (because no simulation has been performed there), instead of returning the reward of a random rollout played until the end of the game, we return the reward estimated by \( V_{\text{NN}} \). The NN therefore has the double impact of (i) biasing the tree search and (ii) replacing the random part of the rollouts by neural estimates of the value. 1.1.4. Zero training: how the NN is modified by the MCTS Let us generate games with a zero model as above, using e.g. \( M = 600 \) simulations per move. Each time a game is over, we obtain a family of 3-tuples \((s,p,r)\), one per visited state in this game: - \( r \) is the final reward of the game. - \( p \) is the tensor of frequencies in the MCTS simulations at \( s \). More precisely, with \( a \) an action identified with a location in the output tensor as above, \( p(a) = n_{\text{mcts},s,a}/M \), where \( n_{\text{mcts},s,a} \) is the number of simulations which have chosen action \( a \) among the \( M \) simulations starting at \( s \). These 3-tuples are then used for training the network so that \( \pi_{\text{NN}} \) imitates \( p \) and \( V_{\text{NN}} \) approximates the reward. We also use weight decay as a regularization, so that the overall loss is \[ \text{Loss}(\text{NN}) = \sum_{(s,p,r)} \text{CrossEntropy}(p, \text{Proba}_{\text{NN}}(s, \cdot)) + (r - V_{\text{NN}}(s))^2 + c_L ||\theta||^2 \] 1.1.5. Overall architecture There is a server (typically, for us, 8 GPUs) and many clients (tested with up to 500 GPUs and 5 000 cores in our experiments): - The server receives 3-tuples \((s,p,r)\) from the clients. It stores them in a replay buffer (with default size 1 000 000 in our implementation), in a cyclic manner. It trains the NN as in Section 1.1.4, also cycling over the replay buffer. - The clients send data (3-tuples) to the server.
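The per-example part of the loss of Section 1.1.4 can be sketched in pure Python as follows (our own naming; the weight-decay term \(c_L||\theta||^2\) is handled by the optimizer and omitted here, and real training would of course use a deep-learning framework):

```python
import math

# Sketch of the per-example zero-training loss: cross-entropy between the
# MCTS visit frequencies p and the network policy, plus the squared error
# between the final reward r and the value-head output v_nn. The weight
# decay term c_L * ||theta||^2 is omitted (it belongs to the optimizer).
def zero_loss(p, proba_nn, r, v_nn, eps=1e-12):
    """p, proba_nn: dicts mapping actions to probabilities; r: final reward."""
    cross_entropy = -sum(p[a] * math.log(proba_nn[a] + eps) for a in p)
    value_error = (r - v_nn) ** 2
    return cross_entropy + value_error
```

The cross-entropy term pulls \(\pi_{\text{NN}}\) toward the MCTS visit frequencies \(p\), and the squared term pulls \(V_{\text{NN}}(s)\) toward the final reward \(r\).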
\(^1\)The output of the neural network is actually corrupted by a Dirichlet noise: rather than \( a = \text{Proba}_{\text{NN}}(\text{tensor}(s)) \), we use \( 0.75a + 0.25\,\text{DirichletNoise}(1/\text{numMoves}) \), where \( \text{numMoves} \) is the number of legal moves in \( s \). This is close to what was already done in (Silver et al., 2017). The number of clients should be tuned so that the cycles performed by the trainer are just a bit faster than the speed at which data are provided; Section 2.4 provides a robust solution for ensuring this, in particular for low computational power. 1.2. Other open-source frameworks Many teams have replicated, and sometimes improved, the AlphaZero approach for different games. Elf/OpenGo (Tian et al., 2019) is an open-source implementation of AlphaGo Zero for the game of Go. After two weeks of training on 2 000 GPUs, it reached superhuman level and beat professional Go players. Compared to Elf/OpenGo, Polygames features a wide library of games, boardsize invariance, and growing architectures. Leela Zero (Pascutto, 2017) is an open-source program that relies on a community of contributors who donate GPU time to replicate the AlphaZero approach. It has been applied with success to Go and Chess. Compared to Polygames, its library of games is smaller, it is not boardsize-invariant, and it does not support growing architectures. Crazy Zero by Rémi Coulom is a zero-learning framework that has been applied to the game of Go as well as Chess, Shogi, Gomoku, Renju, Othello and Ataxx. With limited hardware, it was able to reach superhuman level at Go using large batches in self-play and improvements of the targets to learn, such as learning territory in Go. While the predicted final territory information (the attribution of each location on the board) is not used by the Zero player, learning territory as side information in Go considerably increases the speed of learning. Polygames does not have this clever use of side information.
This is in progress. Polygames, on the other hand, features a large library of games, boardsize invariance, and growing architectures. KataGo (Wu, 2019) is an open-source implementation of AlphaGo Zero that improves learning in many ways. It converges to superhuman level much faster than alternative approaches such as Elf/OpenGo or Leela Zero. It makes use of different optimizations, such as using a low number of playouts for most of the moves in a game so as to gather more data about the value in a shorter time. It also uses additional training targets, as side information for training the neural network, so as to regularize the networks. Compared to KataGo, Polygames has a wide library of games, boardsize invariance, diversity of opponents for the training, and growing architectures. Galvanise Zero (Emslie, 2019) is an open-source program linked to General Game Playing (GGP) (Pitrat, 1968). It uses rules of different games represented in the Game Description Language (GDL) (Love et al., 2006), which makes it a truly general zero-learning program able to be applied as is to many different games. The games currently supported by Galvanise Zero are Chess, Connect6, Hex11, Hex13, Hex19, Reversi8, Reversi10, Amazons, Breakthrough, and International Draughts. Contrary to Galvanise Zero, Polygames is based on compiled C++ game descriptions, and uses growing architectures and boardsize-invariant neural networks. To the best of our knowledge, besides the differences pointed out above, Polygames is the only framework featuring a population of models (the past checkpoints), ranked on an ELO scale and used for sampling sparring partners for the training. It is also the framework in which growing architectures are most systematized. Fig. 1. Traditional deep convolutional network. The convolutional layers preserve the spatial coordinates of the shape - possibly not the depth, i.e. the number of feature maps.
The number of parameters of those layers therefore does not depend on the board size, and the same neural network can work in 9x9 Go and 19x19 Go (or Hex, etc). On the other hand, each neuron in the fully connected layers has one weight per location in its input tensor (plus the bias weight), so that a neural network with fully connected layers is not boardsize-independent. This can be solved in two different manners: (1) using only convolutional layers, which is possible when the output tensor has the same spatial shape as the input tensor - which is the case, for many games, for the action head of the neural network in zero learning; (2) applying global pooling before the fully connected layers, reducing the spatial dimensions to 1x1. In Polygames, we apply (1) for the actor head of the neural network and (2) for the critic head, so that our neural networks are boardsize-invariant (Section 2.1.2). The resulting architecture is presented in Fig. 2. Fig. 2. Structure of a boardsize-invariant neural network in Polygames. $H \times W$ is the board size (spatial coordinates). $F_i$ is the number of input channels. $F_1$ and $F_2$ are the numbers of channels in the first and second convolutional layers (in real experiments there are many more layers). The input (more precisely, the tensor representation of the input state) is on the left. The output has two heads: the policy (also known as the actor, at the top) and the critic, which approximates the value. The only fully connected layer is for predicting the value: importantly, it is based on the output of a global pooling layer, so that it does not depend on the board size. Using global pooling before the fully connected layer and convolutional layers everywhere else, we get a boardsize-invariant neural network. $F_1$, $F_2$, $F_3$, $F_4$ are fixed parameters, independent of the board size. 2.
INNOVATIONS Some innovations in Polygames consist of using ideas from computer vision inside zero learning (Section 2.1): global pooling and fully convolutional neural networks. We also use a diverse set of opponents for filling the replay buffer (i.e. the current model plays against previous models sampled according to their ELO rank, as described in Section 2.3), and growing architectures (Section 2.2). These elements are detailed in the present section. 2.1. Structured zero learning 2.1.1. Fully convolutional models: taking into account the natural mapping between inputs and outputs Many zero-learning methods are based on traditional convolutions, followed by fully connected layers. However, policy learning in board games is often closer to image segmentation (i.e. several output values per input pixel) than to classical image classification (fixed output size, independent of the number of pixels), as actions are naturally mapped on boards (a vector of logits for each location on the board). More precisely, for many games: - The input has various channels, and two dimensions matching the board size (spatial dimensions; see Fig. 1). - Similarly, the output of the network has various channels, corresponding to various possible moves, and two dimensions matching the board size as well. Therefore, we can apply fully convolutional models: not a single fully connected layer is necessary in the policy part. This contributes to scale invariance (Section 2.1.2) but also takes into account the partial translation invariance of policy learning. Polygames features fully convolutional networks (Shelhamer et al., 2017), residual fully convolutional networks (He et al., 2015), including with global pooling (Wu, 2019) as detailed below for scale invariance, and U-networks (Ronneberger et al., 2015). 2.1.2. Scale-invariant models The widely cited (Marcus, 2018) pointed out that zero-learning frameworks are not boardsize-invariant.
In the present section, we point out that Polygames learns in a boardsize-invariant manner. As usual in zero learning, our neural network has two heads: one for the policy and one for the value. The one for the policy is fully convolutional (Section 2.1.1), and therefore it works independently of the input size, i.e. independently of the board size. The value part, however, does not have this property if it is fully connected. We therefore use global pooling, as in (Wu, 2019). Global pooling replaces each channel $c$, of shape $\text{boardsize} \times \text{boardsize} \times 1$, by several channels, such as the maximum and the average of $c$. This maximum and this average are computed over the $\text{boardsize} \times \text{boardsize}$ entries. We therefore get a boardsize-independent representation, of shape $1 \times 1 \times \text{number of channels}$. Our Hex19 model was trained on $13 \times 13$ and was immediately strong on $19 \times 19$ - though we needed a bit of fine-tuning for the success story presented in Section 3.1. 2.2. Neuroplasticity Several modifications are almost neutral when initialized close to zero: - addition of residual blocks (i.e. switching from 12 blocks of 3 convolutional layers to 13 or 14 blocks of 3 convolutional layers); - addition of new channels; - extension of the kernel size (from $3 \times 3$ to $5 \times 5$, or $5 \times 5$ to $7 \times 7$, etc). Polygames provides a script “convert” that makes such a growth of the neural network easy. Training can be resumed after such extensions of the neural architecture; we can train, then grow (preserving the quality of the model, which remains almost equal to the previous one since the new weights are close to 0), then resume the training with more degrees of freedom. 2.3. Tournament mode In order to fight catastrophic forgetting (McCloskey and Cohen, 1989) and the red queen effect (oscillations of performance (Johansson, 2011)), we add a tournament mode, as follows.
Each completed game is used for evaluating the ELO rank of the players. Each client, when it must start a new simulated game, randomly draws a checkpoint from the archive. With dev the most recent checkpoint and $ELO_{dev}$ its ELO rank, the probability that a model with ELO rank $ELO$ is drawn is proportional to $\exp(-\frac{ELO_{dev} - ELO}{400})$. Then the client simulates games between the MCTS using PUCT (Eq. 2) with “dev” and an MCTS using PUCT with that model. We experimentally notice that sometimes old models come back in the race. 2.4. Other features 2.4.1. Checkpoints We provide checkpoints\(^2\) for many games: Einstein Würfelt Nicht, Breakthrough, Havannah8, Havannah10, MiniShogi, Othello8, Othello10, Hex, and others. 2.4.2. Against overfitting Heuristically, we consider that an example in the replay buffer should never be seen more than 8 times. When the clients are not fast enough for filling the replay buffer, for example because of preemption of clients or a slow game, we artificially add delays in the learning when an example would be seen more than 8 times. 2.4.3. Easy addition of games Adding a new game can be done by writing a new class that inherits from State and overrides a few methods (see the implementation of Connect Four\(^3\) as an example). 2.4.4. Stochastic games and some partially observable games Polygames can handle stochastic games (see e.g. our bot “randototoro” on LittleGolem, playing the game of Einstein Würfelt Nicht), which is not that common in existing frameworks but not conceptually hard: we just need random nodes, instead of only minimization and maximization nodes. The adaptation of MCTS to partial observation is trickier: we cannot simulate the underlying dynamics from a given state, because players have only a partial observation.
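Returning briefly to the tournament mode of Section 2.3, the checkpoint-sampling rule can be sketched as follows (a minimal illustration with hypothetical names, not the actual Polygames code):

```python
import math
import random

# Sketch of tournament-mode opponent sampling (Section 2.3): a checkpoint
# with rating elo is drawn with probability proportional to
# exp(-(elo_dev - elo) / 400), so recent, stronger checkpoints are favored
# while weaker ones keep a nonzero chance of being picked.
def sample_opponent(checkpoint_elos, elo_dev, rng=random):
    weights = [math.exp(-(elo_dev - elo) / 400.0) for elo in checkpoint_elos]
    total = sum(weights)
    x = rng.random() * total
    for elo, w in zip(checkpoint_elos, weights):
        x -= w
        if x <= 0:
            return elo
    return checkpoint_elos[-1]  # guard against floating-point rounding
```

Because every past checkpoint retains positive probability, the current model keeps being tested against old opponents, which is what counters catastrophic forgetting and performance oscillations.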
Actually, (Buffet et al., 2012) presented a class of partially observable games that can be converted to an equivalent stochastic game, and can therefore be handled by MCTS (and, equivalently, by zero learning, as shown by Polygames): all games in which the visible information is the same for all players. This class includes a few two-player games, for example Chinese Dark Chess: there is hidden information, but the observable part is the same for both players. It also includes all one-player games, in particular Minesweeper and Mastermind. In those games, undecidability results such as (Auger and Teytaud, 2012) do not apply. For self-containedness, let us explain the reasoning in (Buffet et al., 2012). We do not know the state, so instead of a state, $s$ is the history of observations: we must make a decision given such an $s$. This concept of a history of observations is well defined because, by assumption, the history of observations is the same for all players. Given a history of observations $s$ (including actions) and an action $a$, we can solve the transition by simulating next($s, a$) by rejection sampling as follows: 1. Consider $r_1, \ldots, r_n$, all sources of randomness up to “state” $s$. 2. Randomly draw $r_1, \ldots, r_n$ and simulate a game with actions as in $s$ and random outcomes as in $r_1, \ldots, r_n$. 3. If the results are not consistent with the observations in $s$, go back to step 2. \(^2\)http://dl.fbaipublicfiles.com/polygames/checkpoints/list.txt \(^3\)https://github.com/facebookincubator/polygames/blob/master/games/connectfour.h This is fully developed in (Buffet et al., 2012) in the case of MCTS and implemented inside Polygames for Minesweeper and Mastermind. In the case of Minesweeper, we implemented the more sophisticated method from (Buffet et al., 2012), which is faster than rejection sampling. 3.
SUCCESS STORIES The results here are obtained using (i) trainings stabilized by the tournament mode (Section 2.3), (ii) architectures growing in order to double the number of parameters each time the ELO rank stagnates (Section 2.2), and (iii) boardsize-invariant architectures (Section 2.1), with top performance typically obtained after 9 days with 500 GPUs (usually playing at a very strong level after 3 days). 3.1. Beating humans at Hex19 According to (Bonnet et al., 2016), “Since its independent inventions in 1942 and 1948 by the poet and mathematician Piet Hein and the economist and mathematician John Nash, the game of Hex has acquired a special spot in the heart of abstract game aficionados. Its purity and depth has led Jack van Rijswijck to conclude his PhD thesis with the following hyperbole (van Rijswijck, 2006): « Hex has a Platonic existence, independent of human thought. If ever we find an extraterrestrial civilization at all, they will know Hex, without any doubt. » ” The rules are simple. Black and White fill an empty cell in turn (Fig. 3). Black wins by connecting North and South; White wins by connecting West and East. The game is made fairer by a pie rule: at the second move, the second player can decide to swap colors. The game is hard because, as a connection game, its reward is based on a global criterion (there is no local criterion to sum). Fig. 3 shows the first win against a top-level human. 3.2. TAAI competition At TAAI 2019, Polygames was ranked first in Othello10 (3 programs), Breakthrough (4 programs) and Connect6 (2 programs). For more statistical significance, it was also successfully tested against - WZebra and Ltbel (Othello8), - the winner of TAAI 2018 at Connect6, namely Kavalan, and won all games (4 games against Ltbel, 4 games against WZebra, and 15 games against Kavalan). We show in Fig. 5 one of the games won against Kavalan at Connect6. 3.3. Havannah Havannah was invented specifically to be hard for computers (Fig. 6).
It follows the game play of Hex, also with hexagonal cells, but now on a hexagonal board, and the winning conditions are more diverse: - Connecting two of the six corners wins (15 possibilities); \[^4\text{https://www.tcga.tw/taai2019/en/}\] Fig. 3. Game of Hex 19x19 with pie rule played by Polygames against Arek Kulczycki, winner of the last LG tournament and ranked first in the ELO ranking (ELO 2248) on the LittleGolem server. First: opening. Second: at that stage, the human (White) seems to be winning - two solid groups are connected to East and West respectively, and look close to connecting to each other. However (third), the situation in the center is quite complicated, and later it turns out that Black can win by one of two possible paths (last: White cannot block both H7 and an attack on the left side). Source: LittleGolem. Our bot has played multiple games since this win and never lost one, including games against Galvanise-Zero (Fig. 4). We use a single Quadro GP100, 5 minutes per move. Fig. 4. Results of our bot Mootwo since its game in Fig. 3. Fig. 5. Game won by Polygames against Kavalan: move 26 (left), which made the situation excellent for Polygames, and final position (right). Polygames won 15 games out of 15. - Connecting three of the six sides wins (20 possibilities); - Realizing a loop (even if it does not contain empty cells) wins. According to (Lorentz, 2015), “The state of Havannah programming is still in its early stages. Though the top programs do play at a reasonable level, about the level of somebody who has played the game for 6 months or a year, they still play with a very unnatural style, and often win their games by virtue of tactical shots missed by the human opponent. Our feeling is that Havannah programs cannot be expected to play at an elite level until they learn to play a more natural, human-like game.” Draws are theoretically possible but very rare.
The game is also played with a pie rule. Some decent players have been defeated by computers, but never the best humans, until Polygames. On LittleGolem, an early version won 3 games out of 4 against Mirko Rahn (ELO rank 2133) and 2 games out of 2 against tony (ELO rank 2167), who belong to the top four players on this website (see Fig. 6, middle and right, for examples of games). Then, a new version was trained and won two games out of two against Eobllor (ELO rank 2415): both games were quite short, with excellent opening moves by Polygames (Fig. 7). This bot, our strongest version, was using a single Quadro GP100, 40 minutes per move on a shared GPU (used simultaneously at 50% for other applications), whereas the human could use a lot of time, as is the default on LittleGolem (there were several days per move, using vacation days and the 10-day main time + 24 hours per move; our bot was slowed down as requested by Eobllor, just to reduce the overall speed, but we did not use the additional thinking time). 4. CONCLUSIONS We propose a state-of-the-art framework, called Polygames, that can play various games. It is based on zero learning, with innovations detailed in Section 2, and had success stories detailed in Section 3. Polygames contains new architectures, allows architecture search thanks to the addition of neutral components, and is stabilized by a tournament mode and a method for fighting overfitting. It was widely tested in the TAAI competition and on LittleGolem (www.littlegolem.net). The source code is publicly available under an open-source license. Our plans include ablation studies and experimenting with some of the innovations - e.g. the one-player case, including partial observability as in Section 2.4.4. Fig. 6. Left: the game of Havannah and the three different ways of winning. Middle: a win by Polygames against Tony, ELO rank 2167. Right: a win against Mirko Rahn, ELO rank 2133, winner of many Havannah tournaments.
Sources: Wikipedia (left) and LittleGolem (middle and right). Polygames plays Black. Last move (white cross): an excellent move according to human analysis. http://www.trmph.com/diagram/1,havannah,8,o6m8l4h12n3o7k9j5l7l6m6m5o4,-1 Polygames blocked a bridge at the top, and has a double threat: a cycle (close to the center, left) plus a strong fork structure. http://www.trmph.com/havannah/game/lg-2153398 Fig. 7. The two games won against Eobllor (ELO rank 2415 before starting to play against Polygames), who dominates most competitions in Havannah. Eobllor won most games against the previous version of Polygames (Fig. 6), so we retrained it and got the present results. We note that the 3 fastest games lost by Eobllor, who rarely loses a game, are these 2 games and the game he lost against the previous version. 5. ACKNOWLEDGEMENTS We are grateful to Wikipedia and LittleGolem as image sources and platforms for running experiments. We are grateful to the human players who agreed to play against our bot, in particular Eobllor, who beat successive preliminary versions of our bot. This work was supported in part by MOST under contracts 109-2634-F-259-001-through Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan. Tristan Cazenave is supported by the PRAIRIE institute. REFERENCES
Asynchronous Distributed Genetic Algorithms with Javascript and JSON Juan Julián Merelo-Guervós, Pedro A. Castillo, JLJ Laredo, A. Mora García, A. Prieto Abstract—In a connected world, spare CPU cycles are up for grabs, if only you make obtaining them easy enough. In this paper we present a distributed evolutionary computation system that uses the computational capabilities of the ubiquitous web browser. Asynchronous Javascript and JSON (Javascript Object Notation, a serialization protocol) allow anybody with a web browser (that is, almost everybody connected to the Internet) to participate in a genetic algorithm experiment with little effort, or none at all. Since, in this case, computing becomes a social activity and is inherently unpredictable, in this paper we explore the performance of this kind of virtual computer by solving simple problems such as the Royal Road function and analyzing how many machines and evaluations it yields. We also examine possible performance bottlenecks and how to solve them, and, finally, give some advice on how to set up this kind of experiment to maximize turnout and, thus, performance. The experiments show that we can obtain high and, to a certain point, reliable performance from volunteer computing based on AJAJ, with speedups equivalent to up to several (averaged) machines. I. INTRODUCTION Application-level networks (ALNs) are configured as a set of clients/servers (servents) that can provide their spare CPU cycles by means of a downloadable application, establishing a distributed computation network which can provide ad hoc computational power. Some ALNs, like SETI@Home, have been quite successful [1], creating a virtual computer that has processed a high number of teraflops, while other experiments, such as Popular Power (and most others, in fact), have not [2].
The key feature of these application-level networks is simplicity of use: we believe that the best way to obtain the participation of as many users as possible is to make participating as simple as possible. In particular, it is easier if they do not need to download a special application (such as a screen-saver) to participate, as is needed in BOINC, the core of SETI@Home. For this reason, we are exploring the use of applications that are commonly installed on the user’s computer, such as the web browser, which is available even in PDAs and some cellular phones1. Moreover, most browsers natively include a JavaScript interpreter [3], [4], [5] or virtual machine. JavaScript is an interpreted language2, initially proposed by Netscape and later adopted as an ECMA standard [6], [7], [8], [9]. In this way, most browsers are compatible, at least at the language level (not always at the level of browser objects, where there is reasonable compatibility anyway). Most browsers also include elements such as a Java virtual machine and a Flash plugin, which, with ActionScript, has more or less the same capabilities. However, these have several disadvantages: they might or might not be present (they are not native); they are noisy, in the sense that, since they act as plugins, their execution is always noticed by the user; their programs are more heavyweight than simple text code; and, finally, their integration with the browser is more awkward than the seamless integration that JavaScript offers. By itself, an interpreted language is not enough for creating a metacomputer if there is no way to convey information back from the client to the server in a seamless way. The ability to use the virtual machine included in browsers for distributed computing appeared with the XMLHttpRequest object, which allows asynchronous requests to the server, in what has been called AJAX, Asynchronous JavaScript and XML [10].
AJAX is just one of the possible ways to perform asynchronous client-server communication, the others being AJAJ (Asynchronous Javascript and JSON) and remoting using applets or embedded objects. However, it is quite popular, and a wide user base and documentation are available for it, using any of these asynchronous client/server communication protocols. The traditional client/server model then becomes more egalitarian, or closer to a peer-to-peer model, since a bidirectional communication line appears: the browser can make calls to the server, do some computation, and later send the results to the server. AJAX (and AJAJ, which differs only in the way data is serialized) works as follows: the XMLHttpRequest object is provided with a request to the server and a pointer to a callback function. The request generates an event, which is asynchronously activated when a reply is received, making use of the callback function. Following this approach, the browser is not locked, providing a way to program applications that are similar to the ones used on the desktop, in the sense that they do not have to wait for the application response to be loaded and rendered on the screen every time a request is made. It also means that a user clicking on the Submit button is no longer needed to initiate communication with the server; any JavaScript thread can do so, with the constraint that the only server it can communicate with is the one that hosts the page in which the script is included. On the other side, this provides a way to use the browser for application-level networks that create distributed computing systems, since the request-response loop does not need the user’s participation, in a fashion very similar to any other distributed computing application; these ALNs can be controlled from the server with any programming language. 1Whose computing power is similar to four-year-old desktop machines 2Which has nothing to do with Java, other than the name and its syntactic similarity
Of course, it can also be combined with other distributed programming frameworks based on OpenGrid [11] or other distributed computing paradigms. We previously used Ruby on Rails to build this kind of distributed AJAX application [12]; however, performance and scaling behavior were not satisfactory, mainly for two reasons: the nature of the Ruby on Rails server, which required the setup of a load-balancing server, and the (probably decisive) fact that part of the genetic algorithm ran on the server, with a low degree of parallelism and thus a high impact on performance. The latest experiments (after publication of that paper) yielded a maximum of 40 chromosome evaluations per second². In this paper, following the same concept of distributed evolutionary computation in the browser via AJAX, we have redesigned the application using Perl and PostgreSQL on the server and, as before, JavaScript on the client, and renamed it AGAJAJ (pronounced A-gah-yai), which stands for Asynchronous Genetic Algorithm with JavaScript and JSON. In this version, the genetic algorithm (solving the well-known Royal Road problem) runs only on the clients, with the server used just for the interchange of information among them. We perform several experiments in which clients donate computing power simply by loading a web page, to find out what kind of performance can be expected from this kind of setup, from the number of machines made available by their users to the number of evaluations each one can perform. In these experiments we have improved the performance achieved in the previous Ruby on Rails experiments by two orders of magnitude, as well as the number of concurrent machines available, showing that this kind of setup is ready to take on more computing-intensive experiments without the need for an expensive server setup.
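As a concrete illustration of the client-side fitness evaluation, here is a minimal Royal Road function in JavaScript. The 8-bit block size follows the classic R1 definition and is an assumption; the paper only specifies a 256-bit chromosome.

```javascript
// Minimal Royal Road (R1-style) fitness: each block that is all ones
// contributes its length to the fitness. The 8-bit block size is an
// assumption; the paper states only a 256-bit chromosome.
function royalRoad(bits, blockSize) {
  let fitness = 0;
  for (let i = 0; i < bits.length; i += blockSize) {
    const block = bits.slice(i, i + blockSize);
    if (block.every((b) => b === 1)) fitness += blockSize;
  }
  return fitness;
}

// Two 16-bit examples: one complete block out of two, then both complete.
console.log(royalRoad([1,1,1,1,1,1,1,1, 0,1,1,1,1,1,1,1], 8)); // 8
console.log(royalRoad(new Array(16).fill(1), 8));              // 16
```

A function like this is the only problem-specific code a client needs, which is what makes swapping in a different experiment cheap.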
This paper follows our group's line of work on distributing evolutionary computation applications, which has already been adapted to several parallel and distributed computing paradigms (for example, Jini [13], JavaSpaces [14], Java with applets [15], service-oriented architectures [16] and P2P systems [17], [18]). Evolutionary computation is quite well suited to this kind of distributed environment for several reasons: it is a population-based method, so computation can be distributed among nodes (via distribution of the population) in many different ways; besides, some works suggest that there are synergies between evolutionary algorithms and parallelization: isolated populations that are only occasionally connected avoid the loss of diversity and produce better solutions in less time, obtaining, in some cases, superlinear accelerations [19]. Of course, with a suitable work-division method, many other algorithms could be adapted to browser-based distributed computation; however, in this paper we deal only with genetic algorithms, and concentrate on raw performance rather than algorithmic behavior. The rest of the paper is organized as follows: the next section concentrates on the application of volunteer/involuntary computing to evolutionary computation; the setup is described in Section III; experiments and results are shown in Section IV and discussed in Section V, along with future lines of work.

II. STATE OF THE ART

So-called volunteer computing [20], [21] systems are application-level networks set up so that different people can donate CPU cycles to a joint computing effort. The best known project is SETI@home⁴, which, from the user's point of view, is a screen-saver that has to be downloaded and installed; when the user's CPU is not busy it performs several signal-analysis operations.
Some companies related to volunteer computing, such as Popular Power (and others; they are referenced, for example, in [22]), did some experimentation with Java-based clients, but none had commercial success; on the other hand, the SETI@home program has been open-sourced and extended as the BOINC (Berkeley Open Infrastructure for Network Computing) framework [23]. This kind of volunteer computing has been adapted to evolutionary computation on several occasions, using frameworks such as DREAM [24], which includes a Java-based virtual machine, GOLEM@Home, Electric Sheep [25] and G2-P2P [26]. These approaches acknowledge that, to achieve massive scalability, a peer-to-peer (P2P) approach is advisable, since it eliminates bottlenecks and single points of failure. There are mainly two problems in this kind of volunteer network: first, it is important not to abuse the CPU resources of volunteers; second, a sufficient number of users is needed to be able to do the required computation, which can become a problem of its own if there are too many of them, bringing the network, or at least the solution-collecting node, to its knees. A third problem is that performance prediction is difficult when neither the number of participants nor their individual node performances are known in advance. Finally, fault tolerance [27] and cheating [28] are also important issues; if the environment is competitive, or any single computation is important, they will have to be taken into account. In any case, we believe that the best way to obtain a good number of users is to make it easy for them to participate, using technologies already available on their computers, such as the browser. In fact, some suggestions along these lines have been published (for example, by Jim Culbert in his weblog [29], and in some mailing lists), and, besides our own [12], there have been some recent papers and reports on similar setups. For instance, W.
Langdon has been running for some time an interactive evolution experiment using JavaScript in the browser [30], which was intended more for achieving high diversity in a fractal snowflake design than for high performance. Even more recently, Klein and Spector [31] presented a system based on the Push3 language, which is compiled to JavaScript.

² The published figure was even lower.
⁴ See http://setiathome.berkeley.edu/ for downloading the software and some reports.

Fig. 1. Experiment running on two different browsers (Firefox and Epiphany) on the same machine. The user interface is written in Spanish, since this initial experiment was addressed to an audience speaking that language. The colored (or gray-scale) horizontal bar is a graphical representation of the chromosome, with green (darker shade) standing for 1 and red for 0. The inset window, which shows the experiment running on the Epiphany web browser, started in second place, and thus the number of generations run so far (Generacion) is lower, although the state of the best solution is almost the same.

The proposed approach could also be considered parasitic computing since, as stated in Section I, the only participation required from the user is to load a web page and click on a button; in fact, any AJAX-based application could use these resources without the user's consent (and, in any case, it would be desirable to run without causing much trouble). The concept was introduced by Barabási in [32] and followed by others (for instance, Kohring in [33]). In that work they proposed to use Internet routers to compute a checksum by means of a set of specially crafted packets, whose aggregated result would be used to solve the SAT problem. Although the concept is interesting, there seems to be no continuation of this work (at least openly), probably due to its inherent dangers (as analyzed in a paper by Lam et al. [34]).
The virtual machine embedded in the browser provides an easy way to do this kind of sneaky/parasitic computing, but JavaScript faces the handicap of being an interpreted language, which means that the efficiency of different implementations varies wildly. Moreover, it is optimized not for numerical computation but for object-tree management (the so-called DOM, Document Object Model) and strings. Nevertheless, its wide availability makes it worth considering.

III. METHODOLOGY AND EXPERIMENTAL SETUP

For these experiments we have designed and implemented a client-server program written in Perl (server side) and JavaScript (client side), communicating using JSON via the XMLHttpRequest object. This object requires that the website and the AJAX requests be served from the same host, which is a constraint. Code for both is available, under the GPL, from http://rubyforge.org/projects/dconrails/. The algorithm runs on the client for a fixed number of generations, as shown in Figure 1; running parameters are set on the server and are downloaded from it along with the web page from which the experiment is run. After the preset number of generations, a request is made to the server with the best individual of the last generation. The algorithm stops and waits for the answer from the server. The server receives the request, stores it in a database, and sends back the best individual stored on the server. This individual is incorporated into the client population, which starts running again. Several clients acting at the same time make requests asynchronously, using the facilities of the standard Apache web server. The server is thus used as a clearinghouse for the interchange of information among the different clients; however, there is no explicit communication or topology among the different nodes running the genetic algorithm.
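The clearinghouse behavior just described can be condensed into a small sketch. The real server is written in Perl with PostgreSQL; this JavaScript version (all names assumed) only illustrates the exchange protocol: store the incoming best, reply with the global best plus the number of generations the client should run next, and 0 once the evaluation budget is spent.

```javascript
// Hypothetical sketch of the server's clearinghouse logic; the actual
// implementation is Perl + PostgreSQL, this only mirrors the protocol.
function makeClearinghouse(maxEvaluations, generationsPerRound) {
  let best = null;       // best individual received so far
  let evaluations = 0;   // total evaluations reported by all clients

  return function handleRequest(clientBest, evaluationsDone) {
    evaluations += evaluationsDone;
    if (best === null || clientBest.fitness > best.fitness) best = clientBest;
    // Tell the client to keep running, or to stop (0 generations)
    // once the experiment's evaluation budget has been spent.
    const nextRound = evaluations >= maxEvaluations ? 0 : generationsPerRound;
    return { best: best, generations: nextRound };
  };
}

const serve = makeClearinghouse(750000, 20);
let reply = serve({ chromosome: "1100", fitness: 120 }, 1000);
console.log(reply.best.fitness, reply.generations); // 120 20
```

Note that the clients never talk to each other: migration happens only through this single shared "best individual" slot, which is why no explicit topology is needed.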
Besides, the fact that the server always contains the best individuals generated so far guarantees that the best solution (at the resolution of a fixed number of evaluations) is always kept. The server also sends back the number of generations the client should run; this is usually the same number as before, but turns to 0, thus stopping the client, when the stopping condition is met. Clients leave the experiment by the expeditious method of surfing away to another page or closing the web browser; in tabbed browsers (most browsers nowadays), one or more tabs can run the experiment while the browser remains available for other tasks. When the experiment has been running for a predetermined number of evaluations (set, for this experiment, to 750000), all clients get a message to stop running, and their user interface changes to a message offering to reload the (new) experiment and start all over again. Besides, a watchdog daemon running on the server checks the database for the number of individuals evaluated, and resets the experiment by incrementing the experiment ID by one and eliminating the population. Thus, experiments can run unchecked on a server while this watchdog daemon is running. Several additional utilities are also provided via web pages that report on the state of the experiment or allow setting the GA parameters. Experimental subjects were gathered by several methods: sending the URL via email to department and project coworkers, using it as a Google Talk status line, as a Twitter (http://twitter.com) message, and as a blog post; eventually, it was picked up by a wildly popular Spanish blog⁵, which gathered the highest number of machines. The experiment consisted in optimizing the 256-bit Royal Road function, and each instance consisted of a maximum of 750000 evaluations (which were barely enough to find the solution).
The algorithm was steady state (with incorporation of the immigrant every 20 generations), with rank-based selection and substitution; every generation, 50% of the population was generated, substituting the worst 50% of individuals. Crossover priority was set to 80%, and mutation to 20%, changing 1% of the bits. However, these settings have no influence on performance, other than the fact that, if the solution is found before reaching the maximum number of evaluations, users will get bored and change to a new page⁶ instead of staying to see whether the solution is found. Data was gathered from two different sources: the watchdog daemon logs, which mainly gave data about the number of individuals evaluated and the time needed for each experiment, and the Apache daemon log; the relevant lines were extracted simply by using grep. It should be noted that the server was not running the experiment exclusively, but alongside its usual tasks. The server was a 700-MHz, 1-gigabyte-RAM machine, with the database running on a different, dual-processor, 450-MHz machine. Both machines were running obsolete RedHat 7.x and 9.x Linux operating systems⁷. Results of the set of experiments that have been performed are commented on in the following section.

IV. EXPERIMENTAL RESULTS

Eventually, the experiment ran for several days, with different degrees of intensity. Several hundred machines participated in different instances, coming from all over the world, although mainly from Spain. The first question we wanted to answer was: how many machines should we expect in this kind of experiment? BOINC and SETI@home have gathered thousands of machines in their 10+ year run, but our experiments were limited in time (several hours at most if no machine was available, a few minutes if one was), so a limited number of machines should also be expected. The distribution of the number of machines is shown in Figure 2.
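The client-side steady-state step described at the start of this section (rank-based selection, the better 50% breeding to replace the worst 50%, 80% crossover, 20% mutation flipping 1% of the bits) can be sketched as follows. Helper names and tie-breaking details are assumptions, not the paper's actual code; drawing parents uniformly from the better half is a simple stand-in for rank-based selection.

```javascript
// Hypothetical sketch of one steady-state generation with the paper's
// parameters. pop is an array of bit arrays; fitness maps one to a number.
function oneGeneration(pop, fitness, rng) {
  pop.sort((a, b) => fitness(b) - fitness(a));  // rank the population
  const half = Math.floor(pop.length / 2);
  for (let i = half; i < pop.length; i++) {     // rebuild the worst half
    const p1 = pop[Math.floor(rng() * half)];   // parents from better half
    const p2 = pop[Math.floor(rng() * half)];
    let child;
    if (rng() < 0.8) {                          // 80%: one-point crossover
      const cut = Math.floor(rng() * p1.length);
      child = p1.slice(0, cut).concat(p2.slice(cut));
    } else {                                    // 20%: mutate a copy,
      child = p1.slice();                       //      flipping ~1% of bits
      const flips = Math.max(1, Math.round(child.length * 0.01));
      for (let f = 0; f < flips; f++) {
        const j = Math.floor(rng() * child.length);
        child[j] = 1 - child[j];
      }
    }
    pop[i] = child;
  }
  return pop;
}

// Tiny usage example with a ones-counting fitness.
const pop = [[1,1,1,1], [1,1,0,0], [0,0,0,1], [0,0,0,0]];
oneGeneration(pop, (ind) => ind.reduce((a, b) => a + b, 0), Math.random);
console.log(pop.length); // population size is preserved: 4
```

Twenty such steps between server exchanges is what defines the "round" whose duration is measured in the next section.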
The exact number of machines and generations varies in every experiment, but it seems clear that the median number of machines hovers around 1/4 of the maximum. Besides, it is quite easy to obtain 5 clients for a certain number of evaluations; most experiments have fewer than 10 clients. On the other hand, the number of evaluations each client contributes is quite different, as shown in Figure 3. This figure reveals that most clients contribute just a few generations, be it because the browser stops running a script if it takes more than a predetermined number of seconds (which can happen in IE or in Firefox if clients are too slow; usually the preset number of generations takes less time than this limit), because the user gets bored and moves on, or because, statistically, most people join when the experiment has been running for some time and show up only when there are a few evaluations left. Each of these problems would have to be tackled separately, but the bottom line is that, even when a good number of machines joins an experiment, they do so only for a limited amount of time on average. Besides, these limitations in number of generations also translate into limitations in time, so experiments will have to be designed so that sufficient information is transmitted from clients during the time they are expected to spend in the experiment.

⁶ And this is just an example of how social factors affect performance in this kind of experiment.
⁷ Both machines host our group web server and home pages; we thought it was better to run the experiment in our standard setup instead of a dedicated one, since the whole point of the experiment is to prove that no big modifications have to be made to run it, be it from the client or the experimenter point of view.
However, different clients have different performance, so it is also interesting to measure the average time (and thus average performance) between two client communications with the server (that is, $20 \times 50$ evaluations + waiting time). This is interesting for two main reasons: first, server performance will have to be tuned to serve this level of requests; second, the generation gap will also have to be fine-tuned so that waiting time and the possibility that the script is blocked due to overtime are minimized. The performance figures obtained in the experiment are shown in Figure 4. The graph in Figure 4 shows that, for this kind of problem, the vast majority of clients will have a gap smaller than two seconds. This quantity will vary for different problems, but the conclusion is that most clients will have high or median performance, with few clients having lower performance. This measure also gives us an estimate of the average performance (2.906 seconds per 20 generations). However, at the end of the day the setup is intended to achieve high performance when running an evolutionary computation experiment. This data is presented in Figure 5, which is quite similar to Figure 4. The median is at 1000 seconds, with a minimum at 292 and the 3rd quartile at 2323; 75% of runs last less than 2323 seconds. Taking into account that the average 20-generation step takes 2.906 seconds, and a single experiment needs 375 such steps, the average single-machine run would take 1089.75 seconds; this means that the maximum speedup achieved is 1089.75/292 = 3.73 clients running simultaneously, and the median is approximately that of a single average machine. This will probably vary for experiments of different duration but, on average, we can say that significant (albeit small) speedups can be achieved using spontaneous volunteer computing.
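The back-of-the-envelope speedup estimate above can be checked with a few lines of arithmetic, using the figures exactly as given in the text:

```javascript
// Figures taken from the text: 2.906 s per 20-generation step, 375 such
// steps per run, fastest observed run 292 s.
const secondsPerStep = 2.906;
const stepsPerRun = 375;
const singleMachineTime = secondsPerStep * stepsPerRun;  // ≈ 1089.75 s
const fastestRun = 292;
const maxSpeedup = singleMachineTime / fastestRun;       // ≈ 3.73
console.log(singleMachineTime.toFixed(2), maxSpeedup.toFixed(2));
```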
In general, however, several machines will sequentially provide CPU cycles to an experiment, adding up to a single machine doing all the work. Also, the fact that up to 44 machines worked on a single experiment, and that the range of running times can vary by a factor of up to one hundred, indicates that, for this experiment, no bottleneck was found. Of course, more simultaneous machines will have to be tested to find the limit. Finally, the fact that all contributions are voluntary means that the evaluation rate is not constant, yielding figures like Figure 6, where the steepness of each line is roughly equivalent to the evaluation speed, since the $x$ axis corresponds to time and the $y$ axis to the number of individuals evaluated.

V. CONCLUSIONS, DISCUSSION AND FUTURE WORK

While in previous papers [12] we showed that this kind of AJAX-based, volunteer, and potentially sneaky computation could be used profitably for performing genetic algorithm experiments, in this paper we have shown that, without an expensive or far-fetched setup, it can achieve high performance, equivalent, at most, to several computers of average performance. The code used to perform the experiment is publicly available and is modular, so that creating different experiments is just a matter of writing a new JavaScript fitness function and tuning the GA parameters accordingly. The experiments have shown that there is a good amount of computational power that can be easily tapped and used for evolutionary computation experiments; however, the nature of AGAJAJ also constrains the way users donate computing power, as well as the number of clients available for an experiment. In this paper we have found some figures, which will undoubtedly vary for other experiments; however, the general shape of the curves will probably be the same, following a very steep decrease from the maximum values obtained.
The GA, being asynchronous, faces some problems that have not been tackled in this paper. What is the best approach to preserve diversity? To generate a new population in each client, and incorporate immigrants as soon as they are received? Or is it better to create new client populations based on existing populations? What is really the algorithmic contribution of new clients? These issues will be explored as future work. We will also try to measure the limits of this technology, and test the impact of servers of varying performance and workload on overall performance. Eventually, we will also try to perform a sneaky experiment, to check what kind of performance can be expected in that kind of setup. Another avenue of work will be to examine the algorithmic performance of AGAJAJ: as new clients are added to an experiment, what is the improvement obtained from them? To check that, a controlled experiment using known computers will be used, adding them one at a time, so that the real impact on the genetic algorithm is evaluated.

Fig. 4. Distribution of gap time, that is, time between two requests from a particular client; it has been rounded to the closest number of seconds (since the web server logs it was extracted from have that resolution). Most clients take 3 seconds or less to process 20 generations, with a majority taking 2 or less. Gaps bigger than 10-15 seconds are probably non-significant; in fact, negative gaps and those bigger than 100 have been eliminated from the data set. For this set of experiments, the median is at 2.

Fig. 5. Distribution of running times for a fixed number of evaluations. Some outliers have been cut off; approximately 10% of runs took more than 10000 seconds. The mode is between 500 and 1000 seconds, and most runs end before 2000 seconds. Time starts counting from the moment the first client connects to the server.
Once that is known, it would be interesting to experiment with adaptive client parameters, instead of the one-size-fits-all parameter settings used so far.

ACKNOWLEDGEMENTS

This paper has been funded in part by the Spanish MICYT project NoHNES (Spanish Ministerio de Educación y Ciencia - TIN2007-68083) and the Junta de Andalucía P06-TIC-02025. We are grateful to the editors of the Microsiervos.com blog for talking about our experiment, and to the readers of that article for participating in it. We are also grateful to the anonymous people who learned about the experiment through various channels and participated in it.

REFERENCES

2008 IEEE Congress on Evolutionary Computation (CEC 2008)
QuickChick
Speeding up Formal Proofs with Property-Based Testing

Cătălin Hrițcu
INRIA Paris-Rocquencourt (Prosecco team, Place d'Italie office)

About me and how I use Coq
• Working on formal methods for security, broadly
• Still rather naïve Coq user (after ~4 years of learning)
• Some teaching: Software Foundations and a bit of CPDT
• "Mechanized Metatheory for the Masses"
• Devising correct security mechanisms is hard
  – full confidence only with mechanized proofs
  – this is why I'm a Coq addict
• Soundness of static & dynamic enforcement mechanisms
  – expressive type systems using SMT solvers: ZKTypes [CCS 2008], F5 [TOSCA 2011, JCS2014], DMinor [ICFP 2010, JFP 2013]; recently joined F* effort
  – verification condition generator: DVerify [CPP 2011]
  – certified translations: Expi2Java [NFM 2012]
  – machine with dynamic IFC [S&P 2013, POPL 2014]
  – micro-policies: generic hardware-accelerated tagging schemes

Problem: proving is very costly
• My proofs are boring, but designing security mechanisms is not
  – definitions and properties often broken, and evolve over time
• Proving does aid design ...
  but only at a very high cost
  – most enlightenment comes from failed, not from successful proofs
  – a failed proof attempt is a very costly way to discover a design flaw
  – fixing flaws not always easy, might require serious redesign
  – failed proof attempt will generally not convince an engineer
  – proving while designing is frustrating, tedious, time consuming
• Even when design correct & stable, proving still costly
  – countless iterations for discovering lemmas and invariants
  – my proofs are often "fragile", so the cost of each iteration is high
• This is the itch I'm trying to scratch
  – other people might have similar itches though

Could testing help with this problem?
• Can property-based testing
  – lower the cost of formal proofs?
  – become an important part of the theorem proving process in Coq?
• Yes, I believe / hope so
  – own recent positive experience with testing
  – I'm not the only one (e.g. Isabelle, FocalTest, ...)
• We are basically just starting on this
  – A lot of research & engineering work left

This talk
• Introduction to property-based testing with QuickCheck
• Testing noninterference, quickly
• Polarized mutation testing
• A simple QuickCheck clone for Coq (prototype)
• Some ideas for deeper integration with Coq/SSReflect

Collaborators
Arthur Azevedo de Amorim (UPenn, now INRIA intern)
Maxime Dénès (UPenn)
John Hughes (Chalmers)
Leo Lampropoulos (UPenn)
Zoe Paraskevopoulou (NTU Athens, soon INRIA intern)
Benjamin Pierce (UPenn)
Antal Spector-Zabusky (UPenn)
Dimitris Vytiniotis (MSR Cambridge)

An introduction to PROPERTY-BASED TESTING WITH QUICKCHECK [Claessen & Hughes, ICFP 2000]

```haskell
import Test.QuickCheck
import QuickCheckWithDiscards

f :: Int -> Int
f n = g 0 n
  where
    g a b | a == b    = a
          | c*c > n   = g a (c-1)
          | otherwise = g c b
      where c = (a + b + 1) `div` 2

prop_int_sqrt x = x >= 0 ==>
  (f x * f x <= x && (f x + 1) * (f x + 1) > x)

prop_int_sqrt_small = forAll (choose (0, 10000)) prop_int_sqrt
```

*Main> quickCheck prop_int_sqrt
+++ OK, passed 100 tests.
*Main> quickCheckWithDiscards (stdArgs {maxSuccess = 1000}) prop_int_sqrt
+++ OK, passed 1000 tests.
Actual tests run: 1000
Discards: 1014
Discard ratio: 0.503
*Main> quickCheckWithDiscards (stdArgs {maxSuccess = 1000}) prop_int_sqrt_small
+++ OK, passed 1000 tests.
Actual tests run: 1000
Discards: 0
Discard ratio: 0.000

Custom input data generator

```haskell
import Test.QuickCheck

f :: Int -> Int
f 0 = 0
f 1 = 1
f x = 6

prop_int_sqrt x = x >= 0 ==>
  (f x * f x <= x && (f x + 1) * (f x + 1) > x)

prop_int_sqrt_small = forAll (choose (0, 10000)) prop_int_sqrt
```

*Main> quickCheck prop_int_sqrt_small
*** Failed! Falsifiable (after 1 test): 8709
*Main> quickCheck prop_int_sqrt_small
*** Failed! Falsifiable (after 1 test): 2036

```haskell
prop_int_sqrt_small_shrink =
  forAllShrink (choose (0, 10000)) shrink prop_int_sqrt
```

*Main> quickCheck prop_int_sqrt_small_shrink
*** Failed! Falsifiable (after 1 test and 11 shrinks): 2
*Main> quickCheck prop_int_sqrt_small_shrink
*** Failed! Falsifiable (after 1 test and 10 shrinks): 2

Small counterexample

Own experience with TESTING NONINTERFERENCE, QUICKLY

Can we quickcheck noninterference? [ICFP 2013 and beyond]
• Context
  – designing real machine with dynamic IFC (>100 instructions)
• Experiment
  – very simple stack machine (10 instructions)
  – standard end-to-end noninterference property
  – manually introduced 14 plausible IFC errors, and measured how fast they are found
• Encouraging results
  – however, not using QuickCheck naively

3 secret ingredients
1. Fancy program generation strategies
   – $s_1 \approx s_2$: generate $s_1$ then vary secrets to get $s_2 \approx s_1$
   – distributions, instruction sequences, smart integers
   – best one: "generation by execution"
     • 19-instruction counterexample takes minutes to find
2. Strengthening the tested property
   – best one: "unwinding conditions" (next slide)
     • all errors found in milliseconds, even with simple generation
   – requires finding stronger invariants, like for proving
3. Fancy shrinking

1.
Rather simple custom generator

```haskell
frequency $
  [ (1,  pure Noop) ] ++
  [ (1,  pure Halt) ] ++
  [ (10, pure Add)  | nstk >= 2 ] ++
  [ (10, Push <$> lint) ] ++
  [ (10, pure Pop)  | nstk >= 1 ] ++
  [ (20, pure Store) | nstk >= 2
                     , absAdjustAddr vtop `isIndex` mem ] ++
  [ (20, pure Load)  | nstk >= 1
                     , absAdjustAddr vtop `isIndex` mem ] ++
  [ (10, liftM2 Call (choose (0, (nstk-1) `min` maxArgs)) arbitrary)
                     | nstk >= 1
                     , cally ] ++
  [ (20, liftM Return arbitrary)
                     | Just r <- [ fmap astkReturns $
                                   find (not . isAData) stk ]
                     , nstk >= if r then 1 else 0
                     , cally ] ++
  [ (10, pure Jump)  | nstk >= 1
                     , jumpy ] ++
  ...
```

Guarded (Add needs 2 arguments); Store only to valid address; Return only after call; biased generator ...

2. Stronger: unwinding conditions (inductive invariants for noninterference are easiest to test)

When should one stop? How to test the testing infrastructure?

POLARIZED MUTATION TESTING

Testing ... when should one stop?
• When testing finds no bugs
  – either there are indeed none
  – or our testing is simply not good enough
• "testing can only show the presence of bugs, not their absence" – Dijkstra
• Mutation testing: automatically introduce realistic bugs
  – test the testing infrastructure (e.g.
the generator)
  – in the ICFP 2013 experiments we added bugs manually
    • this does not scale, is tedious, and turns the code into spaghetti
• One should stop testing and start proving
  – when testing finds all mutants but no new bugs

Extended IFC experiment

• More realistic IFC machine
  – **extra features**: registers, public labels, dynamic allocation
  – unwinding conditions use **more complex invariants**:
    • noninterference uses stamp-based memory indistinguishability
    • H-stamped regions cannot be reached through L-labeled pointers
  – a prior (paper) proof attempt timed out after 3 weeks of work
• Easy to enumerate all missing taints and missing checks
  • especially easy when the IFC is split into a separate “rule table”

# Rule table

<table>
<thead>
<tr> <th></th> <th>Allow</th> <th>Result</th> <th>PC</th> </tr>
</thead>
<tbody>
<tr> <td>OpLab</td> <td>TRUE</td> <td>BOT</td> <td>LabPC</td> </tr>
<tr> <td>OpMLab</td> <td>TRUE</td> <td>Lab1</td> <td>LabPC</td> </tr>
<tr> <td>OpPcLab</td> <td>TRUE</td> <td>BOT</td> <td>LabPC</td> </tr>
<tr> <td>OpBCall</td> <td>TRUE</td> <td>JOIN Lab2 LabPC</td> <td>JOIN Lab1 LabPC</td> </tr>
<tr> <td>OpBRet</td> <td>LE (JOIN Lab1 LabPC) (JOIN Lab2 Lab3)</td> <td>Lab2</td> <td>Lab3</td> </tr>
<tr> <td>OpFlowsTo</td> <td>TRUE</td> <td>JOIN Lab1 Lab2</td> <td>LabPC</td> </tr>
<tr> <td>OpLJoin</td> <td>TRUE</td> <td>JOIN Lab1 Lab2</td> <td>LabPC</td> </tr>
<tr> <td>OpPutBot</td> <td>TRUE</td> <td>BOT</td> <td>LabPC</td> </tr>
<tr> <td>OpNop</td> <td>TRUE</td> <td>__</td> <td>__</td> </tr>
<tr> <td>OpPut</td> <td>TRUE</td> <td>BOT</td> <td>LabPC</td> </tr>
<tr> <td>OpBinOp</td> <td>TRUE</td> <td>JOIN Lab1 Lab2</td> <td>LabPC</td> </tr>
<tr> <td>OpJump</td> <td>TRUE</td> <td>__</td> <td>__</td> </tr>
<tr> <td>OpBNZ</td> <td>TRUE</td> <td>__</td> <td>__</td> </tr>
<tr> <td>OpLoad</td> <td>TRUE</td> <td>Lab3</td> <td>LabPC</td> </tr>
<tr> <td>OpStore</td> <td>LE (JOIN Lab1 LabPC) Lab2</td> <td>Lab3</td> <td>LabPC</td> </tr>
</tbody>
</table>

...
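A rule table of this kind can be modelled as a lookup from opcode to a label-propagation rule. The sketch below is hypothetical Python over a two-point L ⊑ H lattice, encoding only a few rows in the spirit of the table's Allow/Result/PC columns; names and details are illustrative, not the real machine definition:

```python
BOT, TOP = "L", "H"  # two-point label lattice: L below H

def join(a, b):
    # Least upper bound (the table's JOIN).
    return TOP if TOP in (a, b) else BOT

def flows_to(a, b):
    # a may flow to b, i.e. a <= b in the lattice (the table's LE).
    return a == BOT or b == TOP

# Each rule maps operand labels (lab1, lab2, lab3) and the pc label to
# (allow?, result label, next pc label).
RULES = {
    "OpPut":   lambda lab1, lab2, lab3, pc: (True, BOT, pc),
    "OpBinOp": lambda lab1, lab2, lab3, pc: (True, join(lab1, lab2), pc),
    "OpLoad":  lambda lab1, lab2, lab3, pc: (True, lab3, pc),
    "OpStore": lambda lab1, lab2, lab3, pc:
        (flows_to(join(lab1, pc), lab2), lab3, pc),
}

def step_labels(op, lab1=BOT, lab2=BOT, lab3=BOT, pc=BOT):
    allow, result, pc2 = RULES[op](lab1, lab2, lab3, pc)
    if not allow:
        raise RuntimeError("IFC check failed on " + op)
    return result, pc2
```

Keeping the IFC decisions in one table like this is what makes it easy to enumerate mutations: each missing taint is a dropped `join`, and each missing check is an `Allow` entry weakened to `True`.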
Results encouraging

• Our generator had tons of bugs
  – it could only kill 9 out of 51 mutants (17.6%)!
• Finding and fixing generator bugs
  – gathering statistics, constructing counterexamples by hand
  – fixing one generator bug usually killed many more mutants
  – sometimes found extra bugs in the un-mutated artifact & property
• After a couple of days, only 2 mutants were left alive
  – for which we still couldn’t find counterexamples by hand
  – we applied these mutations and started proving ... results still pending
• Mutation testing gamifies invariant finding
  – to the point it’s actually fun and addictive!

Mutant game (final output)

    Extracted
    Fighting 52 mutants
    Killed mutant 0 (1 frags)
    Killed mutant 1 (2 frags)
    Killed mutant 2 (3 frags)
    ...
    Killed mutant 37 (38 frags)
    Missed mutant [38] (38 frags)
    Missed mutant [39] (38 frags)
    Killed mutant 40 (39 frags)
    ...
    Killed mutant 48 (47
frags)
    Killed mutant 49 (48 frags)
    Killed mutant 50 (49 frags)
    Killed mutant 51 (50 frags)

So why did this work so well?

• Yes, a human is in the loop (debugging, finding counterexamples)
  – but we don’t waste human cycles
• Each unkilled mutant taught us something
  – it either exposed real bugs in the testing
  – or was apparently better than the original (more permissive)
• This is usually not the case for mutation testing
  – purely syntactic mutations (replace “+” by “-”)
  – human cycles wasted on silly (“equivalent”) mutants that don’t break the tested property
  – kill count is just an alternative to code-coverage metrics, never 100%
  – what we do seems to go beyond the state of the art

Polarized mutation testing

- Generalizing this technique beyond IFC
- Started with an STLC experiment
  - break progress by strengthening the step relation (e.g. dropping whole stepping rules)
  - break preservation
    - by strengthening a positive occurrence of the typing relation
    - or by weakening a negative occurrence of the typing relation
    - or by weakening a (negative occurrence of the) step relation
- Used the Coq relational extraction plugin [Dubois et al]
- Tested against MuCheck (new Haskell mutation framework)
  - found a no-shadowing bug in a fancy generator for well-typed terms

Other experiments

• Looking at PLT Redex for already-tested large formalizations
• Removed a precondition for tail-call optimization in CompCert
  – Csmith couldn’t find the bug, despite a small counterexample
  – “This is a good example to show how much more Csmith can improve”

```c
#include <stdio.h>
#include <stdlib.h>

int *p;

int* bar() {          // Signatures must match
  int x = 17;
  printf("%d %d\n", x, *p);
  return &x;          // Need to take &x to avoid storing x in a register
}

int* foo() {          // Signatures must match
  int q = 42;
  p = &q;
  return bar();       // Need to return for the tail call to apply
}

int main() {
  p = malloc(sizeof(int));
  foo();
  return 0;
}
```

QUICKCHECK CLONE FOR COQ

QuickCheck clone for Coq (prototype)

- ICFP 2013 work used Haskell QuickCheck
- Since then Leo
ported Haskell QuickCheck to Coq
- Largest part implemented in Coq itself
- Using extraction to Haskell for
  - efficient evaluation, random seed, tracing
- At this point no big advantage over
  - writing an equivalent executable spec
  - extracting it to Haskell
  - using Haskell QuickCheck

Same thing as before, just in Coq

```coq
Definition f x :=
  match x with
  | 0 => 0
  | 1 => 1
  | _ => 6
  end%Z.

Definition prop_int_sqrt x :=
  ((x >= 0) ==> ((f x * f x <= x) && ((f x + 1) * (f x + 1) > x)))%Z.

Definition prop_int_sqrt_small :=
  forallShrink show (chooseZ (0%Z, 10000%Z)) shrink prop_int_sqrt.
```

    [1 of 1] Compiling Extracted ( Extracted.hs, Extracted.o )
    Linking Extracted ...
    ./Extracted +RTS -K1000000000 -RTS
    2
    Failure 1 11 0 1079681135 1 0 "*** Failed! After 1 tests and 11 shrinks" [] "Falsifiable"

Custom generator in Coq

```coq
frequency (pure Nop) [
  (* Nop *)     (1, pure Nop);
  (* Halt *)    (0, pure Halt);
  (* PcLab *)   (10, liftGen PcLab genRegPtr);
  (* Lab *)     (10, liftGen2 Lab genRegPtr genRegPtr);
  (* MLab *)    (onNonEmpty dptr 10, liftGen2 MLab (elements Z0 dptr) genRegPtr);
  (* FlowsTo *) (onNonEmpty lab 10,
                 liftGen3 FlowsTo (elements Z0 lab) (elements Z0 lab) genRegPtr);
  (* LJoin *)   (onNonEmpty lab 10,
                 liftGen3 LJoin (elements Z0 lab) (elements Z0 lab) genRegPtr);
  (* PutBot *)  (10, liftGen PutBot genRegPtr);
  (* BCall *)   (10 * onNonEmpty cpotr 1 * onNonEmpty lab 1,
                 liftGen3 BCall (elements Z0 cpotr) (elements Z0 lab) genRegPtr);
  (* BRet *)    (if containsRet stk then 50 else 0, pure BRet);
  (* Alloc *)   (200 * onNonEmpty num 1 * onNonEmpty lab 1,
                 liftGen3 Alloc (elements Z0 num) (elements Z0 lab) genRegPtr);
```

Some ideas about DEEPER INTEGRATION WITH COQ/SSREFLECT

Testing actual lemmas & proof goals

• Currently
  – write an executable spec in Coq
  – prove equivalence
  – test the executable variant
• Ideally, switch freely between
  – proving and testing
  – declarative and executable ...
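The "declarative spec / executable spec" split in this workflow can be illustrated in Python with a hypothetical sortedness example (unrelated to the actual Coq development): a declarative spec quantifying over index pairs, an executable spec comparing against a sorted copy, and random testing standing in for the equivalence proof.

```python
import random

def sorted_decl(xs):
    # Declarative spec: every pair of positions i < j is ordered.
    return all(xs[i] <= xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def sorted_exec(xs):
    # Executable spec: compare against a sorted copy.
    return list(xs) == sorted(xs)

# Random testing in place of an equivalence proof: check that the two
# specs agree on many small random lists.
random.seed(1)
for _ in range(1000):
    xs = [random.randint(0, 5) for _ in range(random.randint(0, 6))]
    assert sorted_decl(xs) == sorted_exec(xs)
```

The declarative form is the one you want to reason about; the executable form is the one a QuickCheck-style tool can run, which is exactly the gap the proposed integration would bridge.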
SSReflect

• in small-scale reflection proofs
  – defining both declarative and computational specs
  – switching freely between them ... is already the normal proving process
• testing would add small(er) additional overhead
• SSReflect computational specifications are often not fully / efficiently executable, but
  – could use the CoqEAL refinement framework [Maxime et al, ITP 2012, CPP 2013] for switching to efficiently executable code

Potential workflow

• Reify the proof goal to a syntactic representation of the formula (Coq plugin)
• Normalize the formula (DNF, classically equivalent)
• Associate computations to atoms (type classes)
  – negative atoms (premises) get generator views
  – positive atoms (conclusions) get checker views
• Associate Skolem functions to existentials (type class)
• The user would still have to provide type class instances
  – could try to use existing work for partially automating this
  – full automation is not our main concern; customization is

Related work (Coq)

• qc tactic, part of a larger Coq plugin (rippling)
  – dependently-typed programming in Matthieu’s Russell
  – seems rather basic, no user customization
  – only a couple of very simple examples about lists and trees
  – seems discontinued since 2011 (Coq 8.3)
• Plugins for Coq extracting inductives to ...
  – OCaml [Delahaye, Dubois, Étienne, TPHOLs 2007]
  – certified Coq [Tollitte, Delahaye, Dubois, CPP 2012]
• anything else?

THANK YOU

collaborators, CRASH/SAFE team, Amin Alipour, Johannes Borgström, Thomas Braibant, Cristian Cadar, Delphine Demange, Catherine Dubois, Matthias Felleisen, Robby Findler, Alex Groce, Rahul Gopinath, Andy Gordon, Casey Klein, Ben Karel, Scott Moore, Ulf Norell, Rishiyur S. Nikhil, Michal Palka, Manolis Papadakis, John Regehr, Howard Reubenstein, Alejandro Russo, Nick Smallbone, Deian Stefan, Greg Sullivan, Andrew Tolmach, Meng Wang, Xuejun Yang, ....
Version lifecycle

This page needs an update because of the changes decided at Quality Team - How to reduce the workload, mainly that we no longer use proposals branches.

### Goals

Merged into: http://info.tiki.org/Version+Lifecycle#Goals

### Background information

Since Tiki2, we follow a time-boxing approach like Gnome and Ubuntu, with a new major release every 6 months (October and April). Whatever isn’t ready in time is deferred to the following release. There are several branches (Old stable, Stable, Dev, Experimental, Legacy). If a change requires more time to implement, we break it into multiple parts and work on each chunk in each cycle. Workspaces is an example of a major change taking place over 2 versions.

Since all features are bundled in Tiki (vs. having hundreds/thousands of extensions/plugins), we have inherently synchronized releases of all features.

### When do we stop making bug and/or security fixes on previous versions?

The goal is to make upgrades easy, but in many cases, site admins (e.g. in the enterprise) do not want to upgrade... but they still want to get security fixes. If we tried to support each version for 2 years, we would have 4 versions to support plus the upcoming one! This would mean that a bug fix would have to be done on up to 5 versions. And since the code changes, a bug fix can't just be blindly copied; it must be tested.

Reality check: as a community, we have to be very realistic about how much energy we’ll have for this. Developers tend to be interested in working on new versions. Thus, it was decided to mark certain versions as Long Term Support, and to converge our energies on these. Tiki 3.x was the first official long term support version; 6, 9 and 12 are also LTS.

Background reading: Meta-cycles: 2-3 year major cycles for free software?
and the follow-up: 2 year cadence for major releases: some progress

### Release & branching strategy

### 7.x 2011-04

Tiki7 follows the same general process as Tiki 6 except that
- everything is 6 months later
- branches/6.x is not closed, as it will become LTS

Release 7.1 (May 2011)
- Tiki 6.x (which is 7 months old by now) becomes LTS
- Release one final 3.x LTS
- Tiki 3.x is closed (around May 2011, thus 2 years after the 3.0 release)
  - LTS users will upgrade (for example) from 3.7 to 6.3
  - For data, it will be easy thanks to the Database Schema Upgrade, and it is already widely tested because everyone in the community has gone through that process over the last 2 years.
  - This being said, testing the direct 3.x to 6.x/7.x upgrade is important because there could be some issues only visible when you make such a jump.
  - Customizations, however, will most likely need to be redone, either in a similar hacking way or by using the newly available possibilities.
- The **Semi-automatic merging period** ends and the **Quality Team** goes into "strict mode". If you want to fix something for an eventual 6.4 LTS:
  1. commit to trunk (for 8.0)
  2. commit to proposals/7.x (for 7.2, etc.)
  3. commit to proposals/6.x LTS (for 6.5, etc.)

### 8.x 2011-10

**Tiki8 regular dev period (April -> September 2011)**
- Everyone commits to trunk (while avoiding major changes in trunk that either won't be ready for the next release or are not intended for the next release). Use **Experimental branches** instead.
- If you have changes to trunk (typically changes to a large number of files) that will increase the difficulty of the 7.x -> 8.x **Semi-automatic merging period**, please do them after the merges are stopped (usually about when 7.1 is released).

### As 8.0 is approaching (September 2011)

1. **Pre**
   - Solve outstanding issues
   - Merge all experimental branches
   - Remove any code that should be > **Endangered features**
   - Do a run of the "Preparatory work" of **Releasing** (better to do before branching)
2.
branches/8.x is started
3. Some *tiki.org sites are migrated to branches/8.x
4. Institute the **semi-automatic merging period** from branches/8.x to trunk (future 9.x)
   - Everyone commits to branches/8.x (unless it's only for 9.x)
   - The merge from branches/8.x to trunk (future 9.0) is handled by a script, so you can commit to branches/8.x until 8.1 is released. All devs should try to update the sites they manage during this period, because the process is simpler than after 8.1.

### 8.0 is released (October 2011)

1. All *tiki.org sites are running branches/8.x
2. The **Quality Team** starts to check all commits and rolls back any issues ("soft mode") > **dogfooding Code Review**
3. Release one last 7.x stable (e.g. 7.4) with everything that is in proposals/7.x (which is closed afterwards)
4. The merge from branches/8.x to trunk (future 9.0) is handled by a script, so you can commit (with extreme caution!) to branches/8.x until 8.1 is released. All devs should try to update the sites they manage during this period, because the process is simpler than after 8.1.

### Release 8.1 (November 2011)

1. branches/7.x is closed, as all focus goes to branches/8.x
2. proposals/8.x is created and the **Semi-automatic merging period** ceases.
3. The **Quality Team** goes into "strict mode". If you want to fix something for an eventual 8.2:
   1. commit to trunk (for 9.0)
   2. commit to proposals/8.x (for 8.2, etc.)

   If you want to fix something for an eventual 6.5 (a lot of work!):
   1. commit to trunk (for 9.0)
   2. commit to proposals/8.x (for 8.4, etc.)
   3. commit to proposals/6.x (for 6.5, etc.)

### 9.x 2012-04

Tiki9 follows the same general process as Tiki 8 except that
- everything is 6 months later

Since Tiki 9.x will be the next LTS, if you have a major change to make, avoid making it for Tiki9. After 9.1 is the best time to introduce major architectural changes for Tiki10, as it won't interfere with the merge-up script, and there is still plenty of time to deploy everywhere.
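The 6-month time-boxed cadence described here amounts to simple date arithmetic. A rough sketch, with milestone offsets inferred from the text (branch about a month before the .0, the .1 about a month after, next major 6 months later); this is illustrative only, not an official schedule:

```python
from datetime import date

def add_months(d, n):
    # Shift a date by n calendar months, pinning the day to the 1st.
    y, m = divmod(d.year * 12 + (d.month - 1) + n, 12)
    return date(y, m + 1, 1)

def milestones(dot_zero):
    # Hypothetical milestone offsets around an X.0 release.
    return {
        "branch created": add_months(dot_zero, -1),
        "X.0 released": dot_zero,
        "X.1 released / previous branch closed": add_months(dot_zero, 1),
        "next major X+1.0": add_months(dot_zero, 6),
    }

schedule = milestones(date(2011, 10, 1))  # around the Tiki 8.0 release
```

For an October 2011 .0 release this yields branching in September 2011, an 8.1 around November 2011, and the next major in April 2012, matching the dates in the sections above.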
### 10.x 2012-10

Tiki10 follows the same general process as Tiki 9 except that
- everything is ~~6~~ 8 months later
- branches/9.x is not closed, as it will become LTS

Release 10.1 (~~November 2012~~ January 2013)
- Tiki 9.x (which is 9 months old by now) becomes LTS
- Release one final 6.x LTS
- Tiki 6.x is closed (around November 2012, thus about 18 months after the 6.0 release)
  - LTS users will upgrade (for example) from 6.7 to 9.3
  - For data, it will be easy thanks to the Database Schema Upgrade, and it is already widely tested because everyone in the community has gone through that process over the last 18 months.
  - Customizations, however, will most likely need to be redone, either in a similar hacking way or by using the newly available possibilities.
- The Quality Team goes into "strict mode". If you want to fix something for an eventual 9.4 LTS:
  1. commit to trunk (for 11.0)
  2. commit to proposals/10.x (for 10.2, etc.)
  3. commit to proposals/9.x LTS (for 9.4, etc.)

### 11.x and beyond 2013-04

Since Tiki2, the release process has improved dramatically, and the Continuous Testing Server is expected to further improve the situation.

**Upgrade paths examples**

Moved to http://info.tiki.org/Version+Lifecycle#Upgrade_paths_examples

**Benefits**

- Only 3 branches are maintained by developers at any given time, and only one accepts significant changes.
- Permits focusing eyeballs: anything that is not risk-free (e.g. new features, or a refactor) is always done in trunk, so all eyeballs are focused on the same place. If a change has a side effect, it can be caught early. We don't fall into the dependency-hell trap.
- Provides homes both for:
  - a fast pace of development
  - stability and limited upgrades
- We can still make quick .x releases on stable and LTS branches for security fixes, bug fixes, and changes to APIs (e.g. a reported change in the Facebook API).

**Drawbacks**

- The previous branch (if not LTS) is dropped as soon as a .0 is released.
  Ex.: Once 5.x is released, 4.x is no longer maintained.
- Thus, if you use 3.x LTS and you decide to upgrade to 4.x, you may have regressions. If you want to upgrade from LTS, it should be to the latest stable at the time of the upgrade, which will contain all the fixes that the LTS has.
- Except for LTS, versions are only supported for 6 months.
  - If you need Long Term Support on other versions, you can always get professional support from Consultants.
- While it's OK to add completely new features, it is best to avoid major infrastructure developments (the ones that have a chance of introducing a lot of bugs/changes throughout the application) for LTS versions (like 6 and 9).
- While in the semi-automatic merging period, sweeping changes (incl. cleanups) on trunk must be avoided (about 4-8 weeks per 6-month period).

**Longer LTS period**

For many years now (and for the foreseeable future!), Tiki has been the FOSS web application with the most built-in features. For many projects, upgrading to a new version is an annoyance because the existing version does everything needed and more. Because the 6-month release cycle was too frequent for some in our community, LTS versions were introduced. But even 18 months for LTS is too frequent for some, so it is proposed to have a longer LTS period for security fixes.

**Benefits**

- You can start a project at any time and be assured of at least 2 years of security releases. (If you are lucky in your timing, you may have up to 3.5 years of support from your initial installation.) Once you get into the LTS cycle, you can upgrade every LTS (18 months) or skip an LTS and upgrade every 36 months (which is also good timing for a new design and to change the OS).
- At any time, you can upgrade to the regular 6-month cycle (if you need new features).
- Better for inclusion in Debian and Ubuntu LTS repositories.
- Great for the enterprise.
- Great for major projects which want to use Tiki more as a framework.
- Great for web design shops which make highly customized designs. By the end of the support period, it will be time to make a new site design anyway.
- Starts the LTS phase later, so the Quality Team has less work.

**Drawbacks**

- More work
- Risk of dispersing community energy across more versions

**Diagram**

Moved to: http://info.tiki.org/Version+Lifecycle#Version_Lifecycle_Diagram

**Strategies**

Moved to: http://info.tiki.org/Version+Lifecycle#Strategies

**Things to discuss / be careful about**

- What is the policy on included code?
  - Zend Framework, Smarty, jQuery, and other External Libraries
- What if a future version of a Tiki requirement (PHP/MySQL/Apache) breaks something during the LTS period?
  - PHP 5.2.x is no longer supported but still used by a lot of hosts. Tiki works well on PHP 5.2 and 5.3.
  - We should test with PHP5Compat_CodeSniffer.
  - Tiki9 LTS is supported for the PHP/MySQL/Apache versions which were available when it was being developed. For example, Tiki9 LTS is widely tested on 5.2 and 5.3. It is expected to work with future versions, but if PHP (e.g. 5.5) makes some non-backward-compatible changes, the LTS version will not be adapted to work with it.
  - Tiki 9 installs fine on MySQL 4.1.22 (although some of the upgrade scripts require MySQL 5), so it's safe to assume that future versions of MySQL will be fine.
  - We should perhaps suggest pairing this with LTS servers and browsers?
- What if a future version of a major browser breaks something during the LTS period?
  - This is very unlikely, but if it happens, we'll evaluate on a case-by-case basis.
- What if a bundled external library has a security issue and is no longer supported?
  - We'll inform users about the risks and that they should discontinue the use of the features that require this library.
  - Update notifier
- What about auto-installers?
  - This should help get us into various distros?
- How do we deal with documentation?
  - Maybe the Preferences report script could indicate which versions have the given feature?
  - Perhaps we should try to have more docs in the app (easier said than done)
  - Use PluginVersions
- Make sure that demo.tiki.org has all the supported LTS versions
- Experimental features: preferences tagged as experimental in tiki-admin.php should not be covered by LTS
- How do we indicate that profiles are not supported for 5 years?

**Definition of "security-only" phase**

Moved to: http://dev.tiki.org/Where+to+commit#Definition_of_security-only_phase

**LTS: Proposal to extend security-only support period**

We already offer 3.5 years of support. This permits skipping an LTS, so you can go from 6 LTS to 12 LTS. Going forward, let's make it possible to skip 2 LTS versions, so Tiki 12 LTS could skip 15 LTS and 18 LTS and upgrade to 21 LTS.

Thanks to our model, the workload for this is quite reasonable. In other web apps with all their extensions, it would be a prohibitive workload. The Tiki model is arguably already the most effective solution to the synchronized-releases challenge in the web app world (all other systems have tons of extensions which are chronically late to update). With this proposal, Tiki will not only be the FOSS web app with the most built-in features and the fastest release pace of the last 5 years, it will also be the one with the longest support period going forward.

- Ubuntu offers 5 years
- CentOS/RHEL offer 10 years
- Drupal supports the current and previous version; releases are not scheduled, but historically come approximately every 3 years. So if you started at the very beginning of a cycle, you could theoretically get 6 years of support, but in practice, real-world sites wait 1-2 years for enough modules to be ported to the new version, so the real support period is more like 4-5 years. Since Drupal 6 support ends when Drupal 8 is released, and when Drupal 8 is released not enough modules are ported, it is not possible to skip versions.
- Joomla!
  LTS: 2 years and 3 months
- MediaWiki LTS: 3 years
- The WordPress project leader is "philosophically opposed" to LTS
- Typo3 LTS: 3 years
- Symfony: "three year period for bug fixes, and for a four year period for security issue fixes."

Adding 1.5 year security-only support period to LTS version starting in Tiki6 LTS (so a total of 5 year support)

<table>
<thead>
<tr> <th>Accept</th> <th>Undecided</th> <th>Reject</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td></td> <td>6</td> </tr>
<tr> <td>• dirschneid</td> <td>• marclaporte</td> <td>• daniam</td> </tr>
<tr> <td></td> <td>• xavi</td> <td>• lphuberdeau</td> </tr>
<tr> <td></td> <td></td> <td>• arild</td> </tr>
<tr> <td></td> <td></td> <td>• amette</td> </tr>
<tr> <td></td> <td></td> <td>• Torsten</td> </tr>
<tr> <td></td> <td></td> <td>• pascalstjean</td> </tr>
</tbody>
</table>

Adding 1.5 year security-only support period to LTS version starting in Tiki9 LTS (so a total of 5 year support)

<table>
<thead>
<tr> <th>Accept</th> <th>Undecided</th> <th>Reject</th> </tr>
</thead>
<tbody>
<tr> <td>0</td> <td>5</td> <td>1</td> </tr>
<tr> <td>• daniam</td> <td>• marclaporte</td> <td>• pascalstjean</td> </tr>
<tr> <td>• lphuberdeau</td> <td>• arild</td> <td></td> </tr>
<tr> <td>• xavi</td> <td></td> <td></td> </tr>
</tbody>
</table>

Adding 1.5 year security-only support period to LTS version starting in Tiki12 LTS (so a total of 5 year support)

<table>
<thead>
<tr> <th>Accept</th> <th>Undecided</th> <th>Reject</th> </tr>
</thead>
<tbody>
<tr> <td>6</td> <td>1</td> <td>0</td> </tr>
<tr> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

**Version Lifecycle Diagram**

This info was moved to info

**Related links**

- http://linuxlifecycle.com/
- http://drupal.org/handbook/version-info
- http://fedoraproject.org/wiki/LifeCycle
- http://docs.joomla.org/Release_and_support_cycle (STS: 7 months, LTS: 21 months)
- http://wiki.ubuntu.com/TimeBasedReleases
- https://wiki.mozilla.org/RapidRelease
- Software G Forces: The Effects of Acceleration
  (video)
- O'Reilly MySQL CE 2010: Brian Aker, "State of Drizzle" -> very fast releases
- Where to commit
- Releasing
- When to release - think popcorn
- RoadMap
- Freeze and Slush
- Model
- http://symfony.com/doc/master/contributing/community/releases.html (very similar to Tiki!)
- Do Faster Releases Improve Software Quality? An Empirical Case Study of Mozilla Firefox
- http://www.h-online.com/open/features/Comment-1-for-rapid-releases-1916446.html

**Alias**

- LTS
- Long Term Support
- Meta-cycles
- Release schedule
- cycle
- lifecycle
- release cycle
- extended support
- extended LTS
- Maintenance Policy
- Version lifecycle
Chapter 8

Dialogue

The remainder of the course (we’re trying something a little different this year) is about the problem of natural language dialogue. We’ll start with a brief overview of the topic, and then the various parts of a typical task-oriented dialogue system will motivate the rest of the topics of the course.

Dialogue systems can be divided broadly into two different types: chatbots and task-oriented dialogue systems. Chatbots are supposed to be able to have a conversation about anything, whereas task-oriented dialogue systems are supposed to help users complete a specific task or tasks. Siri, Alexa, Cortana, and Google Assistant are task-oriented dialogue systems.

8.1 Chatbots

**Turing’s imitation game.** The idea of a chatbot goes back (at least) to Turing’s paper on artificial intelligence (Turing, 1950), in which he proposed to redefine the question of whether a computer is intelligent as whether it can carry on a conversation well enough to be indistinguishable from a human. Turing predicted that in 50 years (that is, in 2000), computers would be able to store $10^{10}$ bits and pass a five-minute test with 70% success.

**ELIZA.** The oldest chatbot (that I’m aware of) was ELIZA (Weizenbaum, 1966). It used the pretense of being a Rogerian psychotherapist to turn the user’s utterances back into questions, rather than trying to contribute new ideas to the conversation. The original source code was rediscovered only very recently, but there are many approximate reimplementations; in any copy of Emacs, you can type M-x doctor (that is, Escape x doctor Return) to run one version. Here’s the conversation from the original paper (punctuation added):

P: Men are all alike.
E: In what way?
P: They’re always bugging us about something or other.

---
1. A dialogue is a conversation between two people, or a person and a machine; a dialog box is a window that pops up in a GUI that provides information and asks for a response.
2. https://elizagen.org
E: Can you think of a specific example?
P: Well, my boyfriend made me come here.
E: Your boyfriend made you come here?
P: He says I’m depressed much of the time.
E: I am sorry to hear that you are depressed.
P: It’s true. I am unhappy.
E: Do you think coming here will help you not to be unhappy?
P: I need some help, that much seems certain.
E: What would it mean to you if you got some help?
P: Perhaps I could learn to get along with my mother.
E: Tell me more about your family.
P: My mother takes care of me.
E: Who else in your family takes care of you?
P: My father.
E: Your father?
P: You are like my father in some ways.
E: What resemblance do you see?
P: You are not very aggressive but I think you don’t want me to notice that.
E: What makes you think I am not very aggressive?
P: You don’t argue with me.
E: Why do you think I don’t argue with you?
P: You are afraid of me.
E: Does it please you to believe I am afraid of you?
P: My father is afraid of everybody.
E: What else comes to mind when you think of your father?
P: Bullies.
E: Does that have anything to do with the fact that your boyfriend made you come here?

ELIZA is extremely simple. Given a string \( w \), for example, “You are very helpful,” it performs the following steps:

1. Find the most important keyword, \( K \), in the string. In the example, \( K \) is “you”.
2. Find a template \( D \) for \( K \) that matches the string. In the example, \( D \) is “\( X \) you are \( Y \)”, so \( X = \varepsilon \) and \( Y = \text{very helpful} \).
3. Choose a response \( R \) for \( D \). For example, one response for “\( X \) you are \( Y \)” is, “What makes you think I am \( Y \)?”
4. In \( X \), \( Y \), etc., change pronouns “me” to “you” and “you” to “me,” etc., and substitute them into \( R \). So the example \( R \) becomes “What makes you think I am very helpful?”

Additionally, there are certain keywords, like “my,” that make ELIZA save a response for later.
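The four steps can be sketched in Python. The one-keyword rule table, template regex, and tiny pronoun map below are hypothetical stand-ins for ELIZA's much larger script:

```python
# A minimal sketch of ELIZA's four steps, with a hypothetical one-keyword
# rule table (the original program had many keywords and templates).
import re

# Steps 1-3 data: keyword -> (template regex, response template).
RULES = {
    "you": (re.compile(r"(?P<X>.*)\byou are\b(?P<Y>.*)", re.IGNORECASE),
            "What makes you think I am{Y}?"),
}

# Step 4: swap first/second person before substituting into the response.
PRONOUNS = {"me": "you", "you": "me", "my": "your", "your": "my",
            "i": "you", "am": "are"}

def swap_pronouns(text):
    words = text.split()
    return " ".join(PRONOUNS.get(w.lower(), w) for w in words)

def respond(utterance):
    for keyword, (template, response) in RULES.items():
        if keyword in utterance.lower():          # step 1: find keyword
            m = template.search(utterance)        # step 2: match template
            if m:
                y = swap_pronouns(m.group("Y"))   # step 4: swap pronouns
                return response.format(Y=(" " + y) if y else "")  # step 3
    return "Please go on."   # default when no template matches

print(respond("You are very helpful"))
# -> What makes you think I am very helpful?
```

The real program also ranked keywords by importance and cycled through several responses per template; this sketch keeps only the matching-and-substitution skeleton.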
If it later has nothing to say, it will generate a saved response (like the last line of the above dialogue).

Essentially, what passes for syntax/semantics are templates like “\( X \) you are \( Y \)” and “What makes you think I am \( Y \)?” Although this is a far cry from the parse trees and logical forms we saw in earlier chapters, these templates get the job done, and we will see that they are still being used today.

**A simple neural chatbot.** A modern, but also simple, chatbot was based on an early neural machine translation system (Vinyals and Le, 2015). The model was just an encoder RNN whose final state was used to initialize the state of a decoder RNN (that is, there was no attention mechanism). It was trained to “translate” a line spoken by one movie character to a response from another movie character, or a line spoken by a customer to a response from a helpdesk. Given how simple the model was, some of the results were surprisingly good:

Human: what is the purpose of life ?
Machine: to serve the greater good.
Human: what is the purpose of living ?
Machine: to live forever.
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth.
Human: where are you now ?
Machine: i’m in the middle of nowhere.
Human: what is the purpose of dying ?
Machine: to have a life.
Human: what is the purpose of being intelligent ?
Machine: to find out what it is.
Human: what is the purpose of emotions ?
Machine: i don’t know.

### 8.2 Task-Oriented Dialogue

In this section I’m relying heavily on Chapter 24 of *SLP* (Jurafsky and Martin, 2021). The reason I’ve dwelt so much on ELIZA is that many of the elements of how ELIZA works are still part of the task-oriented dialogue systems that we use today. GUS (Bobrow et al., 1977) was another early example.

**Dialogue act classification.** The first thing the computer does is to determine what kind of dialogue act the user input is.
In the MultiWOZ data set (Budzianowski et al., 2018), for example, some possible dialogue acts are:

- book_hotel
- book_restaurant
- book_train
- find_attraction
- find_hospital
- find_hotel
- find_restaurant
- find_taxi
- find_train

If the user asks, “I need a place to dine in the center that’s expensive,” that’s a find_restaurant. Recall that in ELIZA, the first step was to identify the most important keyword; this corresponds (very) roughly to dialogue act classification.

**Slot filling.** In ELIZA, each keyword was associated with one or more patterns, and ELIZA would try to match the user input against each one. Similarly, in a task-oriented dialogue system, each dialogue act corresponds to one or more frames that contain slots. For example, a frame for find_restaurant might have slots for

- restaurant-area
- restaurant-food
- restaurant-name
- restaurant-pricerange

For the user input “I need a place to dine in the center that’s expensive,” the slot fillers would be restaurant-area = centre, restaurant-pricerange = expensive. In both cases, the frame (find_restaurant(restaurant-area = centre, restaurant-pricerange = expensive)) is what serves as both syntax and semantics. It’s a lot simpler than what we studied in past weeks of the course, but it gets the job done!

**Dialogue state tracking.** The system maintains a dialogue state that persists from turn to turn. In ELIZA, the dialogue state took the form of a “memory” that stored \( X \) whenever the user said “my \( X \).” In a task-oriented dialogue system, the state might, for example, contain a partial frame. If the frame is incomplete, the computer will request more information from the user until the frame is complete.

**Dialogue policy and content planning.** The final step for ELIZA was to choose a response pattern (by cycling through a list of possible responses) and copying the slot fillers from the user input to the response slots (changing pronouns as needed).
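The state-tracking idea can be sketched as a partially filled frame. The slot names follow the find_restaurant example above (restaurant-name is omitted here since it is what the system returns); the helper functions are hypothetical, not from any dialogue toolkit:

```python
# A minimal sketch of dialogue state tracking as a partially filled frame.
# The system requests the first missing slot until the frame is complete.

FRAME_SLOTS = ["restaurant-area", "restaurant-food", "restaurant-pricerange"]

def update_state(state, new_fillers):
    """Merge slot fillers extracted from the latest user turn."""
    state = dict(state)
    state.update(new_fillers)
    return state

def next_action(state):
    """Request the first missing slot, or act once the frame is complete."""
    missing = [slot for slot in FRAME_SLOTS if slot not in state]
    if missing:
        return ("request", missing[0])
    return ("find_restaurant", state)

# "I need a place to dine in the center that's expensive"
state = {}
state = update_state(state, {"restaurant-area": "centre",
                             "restaurant-pricerange": "expensive"})
print(next_action(state))   # -> ('request', 'restaurant-food')
```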
A task-oriented dialogue system does the same thing, using the current dialogue state to choose a dialogue act for the response and to fill the slots in the response. For example, the dialogue act might be recommend_restaurant and the filler might be restaurant-name = Bedouin. Of course, ELIZA’s strategy of copying slot fillers from the user input to the response won’t work very well, since the point of the system is to provide new information to the user. We’re going to assume that this is taken care of by a database system and won’t say any more about this step!

**Content generation.** Finally, the computer has to translate the response frame, e.g., recommend_restaurant(restaurant-name = Bedouin), into text, like “There is a place named Bedouin in the centre. How does that sound?” (In ELIZA, this step was trivial because the frames are already literal text; or, you could consider the mapping of pronouns to be the content generation step.)

In the following sections, we will talk about dialogue act classification (and other classification tasks), slot filling (and other sequence labeling tasks), and finally generation.

8.3 Slot Filling (Sequence Labeling)

8.3.1 Problem

In slot-filling, the input is a sentence like

```
Please find me a train from cambridge to stansted airport
```

and the output is

```
Please find me a train from [cambridge]departure to [stansted airport]destination
```

The most common way to formulate this kind of problem, where the computer has to identify a number of non-overlapping substrings of the input, is called BIO tagging:

```
O O O O O O B-departure O B-destination I-destination
```

B stands for “begin” and is used for the first word in each slot-filler; I stands for “inside” and is used for the second and subsequent words in each slot-filler; O stands for “outside” and is used for any word that does not belong to a slot-filler.
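The mapping from slot-filler spans to BIO tags can be sketched directly (the spans_to_bio helper is hypothetical, not from any tagging library):

```python
# A sketch of BIO tagging: convert (start, end, slot) spans over a token
# list into one tag per token, as in the train example above.

def spans_to_bio(tokens, spans):
    """spans: list of (start, end, label) with end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label          # first word of the filler
        for i in range(start + 1, end):
            tags[i] = "I-" + label          # subsequent words
    return tags

tokens = "Please find me a train from cambridge to stansted airport".split()
spans = [(6, 7, "departure"), (8, 10, "destination")]
print(spans_to_bio(tokens, spans))
# -> ['O', 'O', 'O', 'O', 'O', 'O', 'B-departure', 'O',
#     'B-destination', 'I-destination']
```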
Other schemes exist, like BILOU (L for the last word of an entity, U for “unit,” the only word in an entity), but this is the simplest and most common.

Now we’ve reduced slot-filling to a sequence labeling task. Other examples of sequence labeling tasks are:

- Word segmentation: Given a representation of a sentence without any word boundaries, reconstruct the word boundaries. In some languages, like Chinese, words are written without any spaces in between them. (Indeed, it can be difficult to settle on the definition of a “word” in such languages.)
- Part-of-speech tagging: Given a sentence, label each word with its part of speech.
- Named entity detection/recognition: Given a sentence, identify all the proper names (Notre Dame, Apple, etc.) and classify them as persons, organizations, places, etc.

One of the hallmarks of sequence labeling problems is dependencies between the labels. For example, if we’re doing named entity recognition, a model might learn that Dame has a high probability of being tagged I-org, as the last word of Notre Dame (and University of Notre Dame, Cathedral of Notre Dame, etc.). But in a sentence like “In 2004, Dame Julie Andrews voiced Queen Lillian in Shrek 2,” Dame should be tagged B-person. Maybe the model can get enough clues from the surrounding words to tag it correctly, but the strongest clue should be that I-org absolutely cannot follow O.

8.3.2 Sequence labeling as parsing

The classic models used to solve sequence labeling problems are, in historical order, hidden Markov models (HMMs), conditional random fields (CRFs), and biLSTM-CRFs. HMMs and CRFs are usually formulated as either finite automata or matrix operations. But since you have parsing with CFGs fresh in your mind, let’s formulate them as CFGs. It may be overkill, but it’s arguably the cleanest way to write them.

What would a parse tree look like for this task?
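The hard constraint mentioned above (I-x may only follow B-x or I-x, so I-org can never follow O) can be checked mechanically; here is a small sketch with a hypothetical validity checker:

```python
# A sketch of the label-dependency constraint discussed above: in a BIO
# scheme, a tag I-x is only valid immediately after B-x or I-x.

def is_valid_bio(tags):
    prev = "O"                     # treat the sequence start like O
    for tag in tags:
        if tag.startswith("I-"):
            label = tag[2:]
            if prev not in ("B-" + label, "I-" + label):
                return False       # e.g. I-org right after O
        prev = tag
    return True

print(is_valid_bio(["O", "B-org", "I-org", "O"]))  # -> True
print(is_valid_bio(["O", "I-org"]))                # -> False
```

A CRF can learn this as a very low transition score for O followed by I-org, or the constraint can be enforced outright during decoding.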
The labels (B-*, I-*, O) would be like parts of speech, and since we don’t have any other kind of structure, it only makes sense to use a purely right-branching or left-branching structure. Let’s choose right-branching. So the grammar has the following kinds of rules:

- \( \bar{X} \rightarrow X\,\bar{Y} \), where \( X \) and \( Y \) are labels
- \( X \rightarrow a \), where \( X \) is a label and \( a \) is a word (incl. BOS)
- \( \bar{E} \rightarrow \text{EOS} \)

The start symbol is \( \bar{S} \) (not \( S \)). Let’s call the first kind of rules *transition rules* and the last two kinds *emission rules*. (The reason we have to special-case the last rule is that it’s the only emission rule whose left-hand side has a bar over it. Otherwise, it’s not really that special.)

### 8.3.3 Hidden Markov models

If the grammar is a probabilistic CFG, then this is equivalent to a hidden Markov model. The probabilities of the transition rules \( \bar{X} \rightarrow X\,\bar{Y} \) measure the probability of one label coming after another and are called *transition probabilities*. The probabilities of the emission rules, \( X \rightarrow a \), measure the probability of generating a word given a label and are called *emission probabilities*.

### 8.3.4 Conditional random fields

If the grammar is a weighted (not necessarily probabilistic) CFG, then this is equivalent to a conditional random field. A rule can have any weight \( p > 0 \); we also call \( \log p \) its *score*. Note that in order for this CFG to be equivalent to a CRF, we have to include all rules of the form \( X \rightarrow a \) and \( \bar{X} \rightarrow X\,\bar{Y} \), even if they were not observed in the training data. In the parsing chapter, we skipped straight from PCFGs to neural weighted CFGs, so we won’t dwell on non-neural weighted CFGs any longer.

### 8.3.5 RNN+CRFs

In the parsing chapter and in HW3, we built a neural parser by using a neural network to compute the rule scores of a weighted CFG. We can do the exact same thing here, but with a slightly different neural network.
We start off, as usual, with a sequence encoder. Let \( w \) be the input string, with \( w_1 = \text{BOS} \) and \( w_n = \text{EOS} \). Let \( \Gamma \) be the set of possible labels.

\[
\begin{align*}
V & \in \mathbb{R}^{n \times d} \\
V_i & = \text{Embedding}(w_i) & i = 1, \ldots, n \\
H & \in \mathbb{R}^{n \times d} \\
H & = \text{RNN}(V).
\end{align*}
\tag{8.1}
\]

Usually the encoder is a fancier kind of RNN called a bidirectional LSTM, but we’re sticking to a simple, left-to-right RNN here. Each \( H_i \) is the encoding of \( w_i \). So far, this is the same as the neural parser from before.

Next, we need to define a function that assigns a score to every rule, possibly depending on its position in the string. We define this function for the three kinds of rules as follows:

\[
\begin{align*}
s(\bar{X} \rightarrow X\,\bar{Y}) & = T_{X,Y} \\
s(X \rightarrow w_i) & = O_{i,X} & 1 \leq i < n \\
s(\bar{E} \rightarrow \text{EOS}) & = O_{n,E}
\end{align*}
\tag{8.3}
\]

where \( T \in \mathbb{R}^{|\Gamma| \times |\Gamma|} \) is a matrix of learnable parameters, so that every transition rule gets an independent score; and \( O \) is computed from the RNN encodings as

\[
\begin{align*}
O & \in \mathbb{R}^{n \times |\Gamma|} \\
O_i & = \text{LinearLayer}(H_i).
\end{align*}
\tag{8.6}
\]

Now both training and labeling (= parsing) can be done exactly as before. But this is an extremely inefficient way of implementing an RNN+CRF. Since the grammar includes all rules with the forms shown above, even if they were not observed in the training data, the grammar is quite large. In the next section, we’ll see how to optimize this.
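Here is a minimal numpy sketch of this scoring setup. A random matrix stands in for the RNN encodings \( H \) (the notes use PyTorch; numpy behaves the same way for these shapes), and all parameter values are made up:

```python
# A numpy sketch of the rule-scoring setup: T holds one independent
# learnable score per transition rule, and O comes from a linear layer
# applied position-wise to the RNN encodings. H here is random noise
# standing in for the encoder output in equations (8.1).
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                  # sentence length and hidden size
num_labels = 4               # |Gamma|

H = rng.normal(size=(n, d))  # stand-in for the RNN encodings H_1..H_n

# T[X, Y]: score of the transition rule X-bar -> X Y-bar
T = rng.normal(size=(num_labels, num_labels))

# LinearLayer(H_i): one weight matrix and bias shared across positions
W = rng.normal(size=(d, num_labels))
b = rng.normal(size=(num_labels,))
O = H @ W + b                # O[i, X] scores the emission rule X -> w_i
```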
8.3.6 RNN+CRFs made more efficient

Recall that during training, we maximize

\[
\begin{align*}
L &= \sum_{(w,\text{tree}) \in \text{data}} \log P(\text{tree} \mid w) \\
  &= \sum_{(w,\text{tree}) \in \text{data}} \log \frac{\exp s(\text{tree})}{\sum_{\text{tree}'} \exp s(\text{tree}')} \\
  &= \sum_{(w,\text{tree}) \in \text{data}} \left( s(\text{tree}) - \log \sum_{\text{tree}'} \exp s(\text{tree}') \right)
\end{align*}
\]

where \( s(\text{tree}) \) is the score of the tree, \( \text{tree}' \) ranges over all possible trees for \( w \), and \( \sum_{\text{tree}'} \exp s(\text{tree}') \) is the partition function, which is computed using a modified CKY algorithm. And during parsing, we use the CKY algorithm.

CSE 40657/60657: Natural Language Processing — Version of November 9, 2021

As a reminder, here’s the algorithm, where we’ve plugged in the rule scores computed by the neural network. The symbol ⊕ is a generic operator that is max if we’re looking for the best parse and logaddexp if we want the total score of all parses.
1: for all \( 0 \leq i < j \leq n \) do
2:   for all \( X \in \Gamma \) do
3:     \( chart[i, j][X] \leftarrow -\infty \)
4:     \( chart[i, j][\bar{X}] \leftarrow -\infty \)
5:   end for
6: end for
7: \( \triangleright \) rules of the form \( X \rightarrow w_i \)
8: for \( i \leftarrow 1, \ldots, n - 1 \) do
9:   for all \( X \in \Gamma \) do
10:    \( chart[i - 1, i][X] \leftarrow O_{i,X} \)
11:  end for
12: end for
13: \( \triangleright \) rule \( \bar{E} \rightarrow \text{EOS} \)
14: \( chart[n - 1, n][\bar{E}] \leftarrow O_{n,E} \)
15: \( \triangleright \) rules of the form \( \bar{X} \rightarrow X \bar{Y} \)
16: for \( \ell \leftarrow 2, \ldots, n \) do
17:   for \( i \leftarrow 0, \ldots, n - \ell \) do
18:     \( j \leftarrow i + \ell \)
19:     for \( k \leftarrow i + 1, \ldots, j - 1 \) do
20:       for all \( X \in \Gamma \) do
21:         for all \( Y \in \Gamma \) do
22:           \( chart[i, j][\bar{X}] \leftarrow chart[i, j][\bar{X}] \oplus (T_{X,Y} + chart[i, k][X] + chart[k, j][\bar{Y}]) \)
23:         end for
24:       end for
25:     end for
26:   end for
27: end for
28: return \( chart[0, n][\bar{S}] \)

**Linear time.** This is \( O(n^3) \), but we would like to reduce it to \( O(n) \). Remember that the cubic time complexity comes from the triple loop involving \( i, j, \) and \( k \). But in the trees that our grammar generates, it’s always the case that if \( j - i > 1 \), then \( i \leq n - 2 \), \( k = i + 1 \), and \( j = n \). We didn’t even define the rule-scoring function for other values of \( i \) and \( j \). So the above triple loop can be rewritten as a single loop:

16: for \( i \leftarrow n - 2, \ldots, 0 \) do
17:   for all \( X \in \Gamma \) do
18:     for all \( Y \in \Gamma \) do
19:       \( chart[i, n][\bar{X}] \leftarrow chart[i, n][\bar{X}] \oplus (T_{X,Y} + chart[i, i + 1][X] + chart[i + 1, n][\bar{Y}]) \)

**Vectorization.**
The other thing that is special about our grammar is that it’s very dense, in the sense that it has a rule \( X \rightarrow a \) for every single \( X \), and a rule \( \bar{X} \rightarrow X\,\bar{Y} \) for every single \( X \) and \( Y \). Instead of thinking of \( chart[i, j] \) as a hash table from labels to numbers, we think of it as a vector of numbers. Instead of all those loops over labels, we can now use vector operations.

1: for \( i \leftarrow 1, \ldots, n - 1 \) do
2:   \( chart[i - 1, i] \leftarrow O_{i,*} \)
3: end for
4: for all \( X \neq E \) do
5:   \( chart[n - 1, n]_X \leftarrow -\infty \)
6: end for
7: \( chart[n - 1, n]_E \leftarrow O_{n,E} \)
8: for \( i \leftarrow n - 2, \ldots, 0 \) do
9:   \( chart[i, n]_X \leftarrow \bigoplus_Y (T_{X,Y} + chart[i, i + 1]_X + chart[i + 1, n]_Y) \)
10: end for
11: return \( chart[0, n]_S \)

That’s the whole algorithm! But line 9 requires some explanation.

- The index \( X \) on the left-hand side means that this assignment should be performed for all values of \( X \).
- \( \bigoplus_Y \) means performing \( \oplus \) over all values of \( Y \). During training, \( \oplus \) is logaddexp, so \( \bigoplus \) is the log-sum-exp over all values of \( Y \) (PyTorch logsumexp). During labeling, \( \oplus \) is max, so \( \bigoplus \) takes the maximum over all values of \( Y \). This line can be coded in PyTorch without any loops using broadcasting.\(^3\)
- \( T \) has rows indexed by \( X \) and columns indexed by \( Y \).
- \( chart[i, i + 1] \) is indexed by \( X \), so we can think of it as a column vector, or a matrix of size \( |\Gamma| \times 1 \). In PyTorch, there are many ways to reshape it to be \( |\Gamma| \times 1 \); I usually use unsqueeze.
- \( chart[i + 1, n] \) is indexed by \( Y \), so we can think of it as a row vector, or a matrix of size \( 1 \times |\Gamma| \). Unlike in math, in PyTorch you can add matrices of size \( |\Gamma| \times |\Gamma| \), \( |\Gamma| \times 1 \), and \( 1 \times |\Gamma| \).
The column vector and row vector are effectively replicated until they have size \( |\Gamma| \times |\Gamma| \); then they are added to form a matrix of size \( |\Gamma| \times |\Gamma| \).

3. Please see https://pytorch.org/docs/stable/notes/broadcasting.html for more details.

The above algorithm with \( \oplus = \max \) finds the score of the best-scoring labeling, but it doesn’t actually find the labeling itself. To do that, we use back-pointers as in CKY:

1: for \( i \leftarrow 1, \ldots, n - 1 \) do
2:   \( chart[i - 1, i] \leftarrow O_{i,*} \)
3: end for
4: for all \( X \neq E \) do
5:   \( chart[n - 1, n]_X \leftarrow -\infty \)
6: end for
7: \( chart[n - 1, n]_E \leftarrow O_{n,E} \)
8: for \( i \leftarrow n - 2, \ldots, 0 \) do
9:   \( chart[i, n]_X \leftarrow \max_Y (T_{X,Y} + chart[i, i + 1]_X + chart[i + 1, n]_Y) \)
10:  \( back[i, n]_X \leftarrow \arg\max_Y (T_{X,Y} + chart[i, i + 1]_X + chart[i + 1, n]_Y) \)
11: end for

Reconstructing the best label sequence is just like reconstructing the best parse tree in CKY. Namely, \( back[i, n]_X = Y \) means that the best labeling of \( w_{i+1} \cdots w_n \) that starts with \( X \) continues with \( Y \). So we can start with \( back[0, n]_S \) and follow the pointers to reconstruct the whole label sequence.

One final note: it’s a little weird that this algorithm runs right-to-left. If we had made our original tree left-branching instead of right-branching, the final algorithm would run left-to-right.

References
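The vectorized sweep, with either choice of ⊕, can be sketched in numpy (which broadcasts shapes \( |\Gamma| \times |\Gamma| \), \( |\Gamma| \times 1 \), and \( 1 \times |\Gamma| \) the same way PyTorch does). The scores O and T are random toy values, and the label indices chosen for E and S are hypothetical:

```python
# A numpy sketch of the linear-time algorithm, run both ways: with
# oplus = logaddexp to get the total score of all labelings, and with
# oplus = max plus backpointers to recover the best labeling.
import numpy as np

rng = np.random.default_rng(0)
n, G = 5, 3                        # w_1 = BOS, w_n = EOS, |Gamma| = G
E, S = 0, 1                        # hypothetical indices of labels E and S
O = rng.normal(size=(n + 1, G))    # O[i] = emission scores (rows 1..n used)
T = rng.normal(size=(G, G))        # T[X, Y] = transition scores

def sweep(op):
    """Right-to-left sweep; op reduces the (G, G) score matrix over Y."""
    suffix = np.full(G, -np.inf)
    suffix[E] = O[n, E]            # chart[n-1, n]
    back = {}
    for i in range(n - 2, -1, -1):
        # line 9: broadcasting adds (G,G) + (G,1) + (1,G)
        scores = T + O[i + 1][:, None] + suffix[None, :]
        back[i] = scores.argmax(axis=1)   # best continuation Y for each X
        suffix = op(scores)
    return suffix, back

total_vec, _ = sweep(lambda s: np.logaddexp.reduce(s, axis=1))
best_vec, back = sweep(lambda s: s.max(axis=1))
log_Z = total_vec[S]               # chart[0, n]_S with oplus = logaddexp
best_score = best_vec[S]           # score of the best labeling

# follow the backpointers, starting with label S for w_1 (BOS)
seq = [S]
for i in range(n - 1):
    seq.append(int(back[i][seq[-1]]))
# seq labels w_1..w_n; the last label is E (for EOS)
```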
The second thing that I want to say by way of transition is that the process of developing a set of instructions, having two coders read independently every answer, and negotiating out their disagreements after reaching sufficiently high agreement rates took us, pause, four years. So it’s 2012 and we’re just now releasing the codes of the 2008 data. That’s how embarrassing this is. We had lots of effort, lots of false starts, lots of problems, and lots of examples of low levels of agreement until we finally got the whole thing working. Now, hopefully what we’ve done in figuring all this out would go more quickly the second time around with human coders.

But the next set of presentations explores a potential approach to make it much more efficient: instead of having human coders code 2,200 responses to every question, could you have human coders code a small number of the responses, and then use that coding to train a computer to do the coding the rest of the time for you? Is it straightforward enough to do that?

And so natural language processing is a field that came up a lot in the conference that Skip alluded to, which we organized at the University of Michigan some years ago, where we brought lots of experts together, including experts from the firm called Language Logic. So, Rick, should we call it “Ascribe” or “Language Logic”? [Response from audience]. Okay. So the company’s name is Language Logic. And so Language Logic was one of the organizations there. There was a gentleman named Fabrizio Sebastiani, who’s an Italian scholar, who had, at that point, written some software to do this kind of coding and developed an algorithm, which has been a – and Ascribe is the software that implements his – [response from audience].
So since we completed the human coding a little while ago, we’re now gonna welcome Dean Kotchka up front here so that he can tell people quickly about the initial foray into: can we use just the first set of human coding, and teach a machine to do the rest and have the machine finish it up? And after lunch, Mark Leiberman is gonna talk much more broadly about the potential of natural language processing and computers to do this kind of coding.

**Dean Kotchka:** Thank you. Thank you all for squeezing us in. We are not part of the agenda that was published normally. Yes, we are now, so I can take as much time as I want. No, I can’t – I can’t take – but I do have a few minutes, and Jon and Skip did want me to point out a couple of things that we’ve done subsequent to the coding work that’s been described. I also might point out that we didn’t make any money on this job that ran over a four-year period of time, but we also learned an awful lot. And so with that, let me get started.

I also have to say this is – I’ve given a lot of presentations, but this is a very intimidating thing because I’ve not been before a scholarly group such as this presenting. So I really do appreciate the opportunity. I’m probably not presenting it in a way you typically might see a paper being done. I’m also going to go through it fairly quickly because I don’t want to stand between you and lunch, nor any question time that you might have for Skip’s presentation.

I do want to take a minute, though, since I’m not a researcher, and since I’m not in academia or have any published work, to at least run down – a few seconds to run down what Language Logic is and who we are, to maybe provide at least some semblance of credibility as to why I’m standing up here in front of you. We were founded in 1999, and I am gonna go through these fast. We publish a verbatim management software system called Ascribe.
I use the term “verbatim management” because in addition to coding, which is its principal functionality, it’s also used for transcription and translation work. We’re used by over half of the Honomichl 50 research firms. For those that may not know Honomichl, it’s an annual listing of the top market research firms in the world, organized by revenue. So we have a pretty good footprint embedded in these companies’ systems. We have about 12,000 users in 56 different countries. This year, we’ll process about 300 million verbatim comments through our application.

And in addition to selling the software and licensing it, we have a solutions group, which uses our own software to provide coding services and translating services. That’s the group that the ANES people worked with to get the coding work done, and we had two dedicated people working specifically on this project. It was an interesting exercise for us because, because of some of the issues about reliability and the need for documentation and good procedures and so forth, it tended to go against the way we work. We have always encouraged coders to talk back and forth as coding was going on. In this case, as you heard, it was specifically intended not to have that, to keep a far wall between them until such point as the instructions were determined to be understood, and then you got into the reconciliation process. We have about 100 full-time contractors scattered out through the world doing this work on our behalf. And this group, itself, will process about 20 million comments this year.

I’m gonna skip through these because this has already been discussed: the open-end dilemma of balancing the various costs and factors against needs for consistency, accuracy, speed, and so forth. I will point out that we have seen over the last couple of years that, in addition to open-ended coding being a very labor-intensive cost element, speed is becoming increasingly important, at least to our users.
Since data collection times have been compressed via Internet and mobile-type surveys, the time in which people are asking for results to be processed is also being compressed. So if it took two weeks to collect the data, it was reasonable for somebody to take an additional week or so to tabulate it. But now, if the data is being collected in two days or one day, then the coding groups, the operations groups in these firms, are being asked to turn it around. And in addition to that, we’re seeing a rise in what I call tactical surveys – one/two question surveys, one open end, two closed ends – that go out in response to a Twitter spike somewhere, where they are trying to find and narrow down some specific information. Those kinds of surveys are also increasing the need for speed. We have been working for the last couple of years, as was alluded to, to build some answer to this: to continue to provide a reasonable degree of accuracy, consistency, and reliability, but yet improve the speed and reduce the cost. And the solution that we have built into our software – I’m referring to it generically as an automated coding model; that’s not our brand name – we built it in about two years ago. It’s a “learn by example” process. You feed it trained, coded examples, process them to build binary classifiers, and then use that list of classifiers to code additional data. As Jon mentioned, it’s based on the work of Dr. Fabrizio Sebastiani. He’s with the Institute of Information Science and Technologies based in Pisa, Italy. He was, I think, at this conference – a similar conference – in 2008 in Ann Arbor. And we have licensed their technology and built it into our platform. I have some published papers from Dr. Sebastiani. I was gonna put links in there, but realized that those links are embedded in our software and you’d have to have credentials to get to them.
But I’ll be happy – if you want to exchange a business card, I’d be happy to get you the information and the links to get those papers. Or you could probably just Google him and find them as well. This I’m going to run through fairly quickly. Basically it’s just a schematic of how the automated coding model works. You start with uncoded verbatims, and a human coder codes those. We consider those training examples. Binary classifiers are created, and then the process is used to code the uncoded verbatims. It's a cycle – an iterative process, as you review some of the results and resubmit them to the model. I’m gonna go through this again pretty quickly. I’ve got the five-minute note. What we realized is that we were sitting on a ton of coded data that had been vetted through a very rigorous process – and one of the big problems with an automated coding model is that if you put ambiguous or bad data in, your model is not going to perform very well. So we took it upon ourselves to take one of the questions – we actually selected two, but it’s essentially the same question, a dislikes question, Republican and Democrat. The question text is: “Is there anything in particular you dislike about the Republican Party?” I don’t know if you can see this or not – the codebook. This was the code frame that was used to code the information. I put it up here simply to point out that this is the same code frame, and that there are a couple of categories. One up there: general. The general category is a very hard category to give specific examples about, so that may or may not be a good working code in the model. When I was putting that up on the plane today coming in, I was wishing that I had more than a couple hours to talk. But let me run through here. Reconciled comments: that’s the number of the comments that were coded by the coding team for Skip and Jon. Of those, we selected 1,489 to be training examples; that’s what we used to train the model.
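The train-then-apply loop described above – per-code binary classifiers gated by a confidence threshold – can be illustrated with a toy sketch. This is not Sebastiani's actual method: the keyword-overlap "classifier" below, and all names and data in it, are hypothetical stand-ins for a real supervised learner.

```python
# Toy illustration of per-code binary classifiers with a confidence
# threshold. Each "classifier" just scores a verbatim by word overlap
# with its training examples; a real system would train a proper
# supervised model per code.

def train_classifiers(training_examples):
    """training_examples: list of (verbatim, set_of_codes) pairs.
    Returns {code: vocabulary} built from the coded examples."""
    vocab = {}
    for text, codes in training_examples:
        words = set(text.lower().split())
        for code in codes:
            vocab.setdefault(code, set()).update(words)
    return vocab

def apply_codes(text, classifiers, threshold=0.5):
    """Apply every code whose confidence meets the threshold;
    below it, no code is applied (the 'knobs and dials')."""
    words = set(text.lower().split())
    applied = set()
    for code, vocabulary in classifiers.items():
        if not words:
            continue
        confidence = len(words & vocabulary) / len(words)
        if confidence >= threshold:
            applied.add(code)
    return applied

clf = train_classifiers([
    ("too extreme on social issues", {"ideology"}),
    ("bad economic policy", {"economy"}),
])
print(apply_codes("their economic policy is bad", clf))  # {'economy'}
```

A verbatim that matches no code's vocabulary strongly enough gets no codes at all, mirroring the cases below the confidence level that the model left uncoded.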
The remaining 1,425, we pushed through the model to be coded. So there are the 1,425 that we pushed through the model. Of the 1,425, the model coded 1,137 of them. When we set the parameters and the knobs and the dials of the model, we can establish a confidence level, below which we don’t want any codes applied, and above which it will apply a code. So, in this case, a few hundred of them didn’t have any codes at all applied. Seven hundred and thirty-seven of the 1,137 were coded identically. But, really, if we look at the results at the code level, there were 1,734 codes applied to that base of 1,425 by the human coders. The ACM – the model – applied 1,618 codes. The model applied 271 codes that were not applied by the base coders. In our software, we don’t call it precision; we call it over coding and under coding, but similarly, it missed 368 codes. We calculated an overall accuracy percentage of 84 percent. This – again, I don’t know if you can read this. This is my final slide and just a shot of some of the differences. This is obviously the verbatim. The second column is the model coding, and the third column is the human coding. The blue indicates codes that the coders put in; red indicates codes that the model put in. I guess the conclusion here is that there really isn’t a conclusion, other than I think it’s at least encouraging – and this had no iterative update process, and this was just done by our coding group. And so I think there are some opportunities – we are not researchers, but there are some opportunities to formalize an experiment, a good experimental design. I think there’s an opportunity to further refine the model by building in some of the errors that were found. And then also to apply it across all the questions that share that same codebook, because everything in those questions is gonna get coded with that codebook, and that codebook has had that set of examples built for it. It’d be interesting to see how it runs against all the data.
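The over coding and under coding comparison described above amounts to set differences between the human and model code assignments, response by response. A minimal sketch on toy data (the responses and codes here are illustrative, not the ANES figures):

```python
# Compare two codings of the same responses at the code level.
# "Over coding": codes the model added that humans did not apply.
# "Under coding": codes the humans applied that the model missed.

def compare_codings(human, model):
    """human, model: dicts mapping response id -> set of codes.
    Returns (matched, over_coded, under_coded) totals."""
    matched = over = under = 0
    for rid in human:
        h = human[rid]
        m = model.get(rid, set())
        matched += len(h & m)
        over += len(m - h)    # codes the model added
        under += len(h - m)   # codes the model missed
    return matched, over, under

human = {1: {"economy"}, 2: {"ideology", "leadership"}, 3: {"other"}}
model = {1: {"economy"}, 2: {"ideology"}, 3: {"other", "economy"}}
print(compare_codings(human, model))  # (3, 1, 1)
```

With real data, the three totals correspond to the matched, over-coded, and under-coded code counts reported in the talk.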
So I hope I didn’t go too far over my time. Thank you. **Jon Krosnick:** Okay. So I think we’ll do ten minutes for questions, either for Dean, who is leaving already, or for Skip, who I hope is still there. Can we put Skip on the screen, Dave? There he is. Okay, so questions/comments about coding? **Male:** Just a small question about coding the knowledge of political office. Since you said earlier, at the beginning, that people use these to assess general knowledge in politics, why did you decide to exclude the general knowledge information that was contained in the answers? So you said you coded as correct only the ones that get the job right, whereas if the purpose of the question was to assess general political knowledge, then, presumably, you’d want to communicate that in the codes. **Skip Lupia:** So it’s a great question. The problem with the question as written is that it's actually not designed to assess general knowledge. A question that was designed to address general knowledge would ask something like: “Tell us everything you know about Person X.” But that question doesn’t ask that. It says: “Do you know what job or political office this person holds?” And so to use it as a source of information about what general recall an individual has about a candidate – really, it has to be skewed, because to really get an unbiased sample of general knowledge, the respondents would have to systematically choose not to answer the question we asked, and they’d all have to depart from it in the same way. And so ultimately what we decided to do was code answers with respect to the question that we asked. I mean, my sense is like yours: I think most people want to use this as a general knowledge question, and this is not a good general knowledge question. **Male:** And so did you consider adding a second code for the answers that got the answer wrong for the question, but indicated some knowledge of the subject?
**Skip Lupia:** So we’re releasing two sets of codes. So this is – in the past, for every question, there would be one code and it would have two values: correct or incorrect. So now we have an initial set of codes, which basically just characterize the answer. So there’s a variable: did they say something correct about the political office? There’s a variable: did they say something correct about the job? And then what we’re manufacturing are aggregations of those – did they say anything correct about the political office or the job. And I think the distinctions that we’re gonna use are: did they say anything correct and nothing incorrect; did they say something correct and something incorrect; or did they say nothing of the kind? And so if I say that William Rehnquist is a bozo, it gets put in “other”, and we don’t judge the truth value of that claim. **Male:** So, Jon, a dislikes question – likes/dislikes. The way it’s done now, could one replicate what Converse did on levels of conceptualization back when this question was used in the sixties? **Jon Krosnick:** Well, first of all, we can't replicate what Converse did in the sixties. So the idea of trying to somehow map what we’ve done back into those earlier categories was – just so people know what we’re talking about here: levels of conceptualization were inferred for people by placing them into a set of buckets. If they were at the highest level, they were ideological thinkers, so they used the words “liberal” and “conservative” in ways that showed that they understood what they meant. One of the lower levels was group thinking, so talking about how this candidate is good for the working man. So it would be very different – you would have to implement a different coding exercise in order to produce those, I think. You’d have to define what qualifies as ideological, what qualifies as group, and so on. And as you know, those categories were treated as mutually exclusive.
So a respondent was placed either in the ideological bucket or in the group bucket or some other bucket, and respondents actually give combinations of those. Probably if we were to develop a new set of coding instructions, people could mention group things and ideological things, and we would represent that. **Male:** Yeah, I think you guys had to draw the line between digesting this stuff for the users, saying this is correct and this is incorrect, versus giving them the verbatims and saying you do what you like. I mean, you’re sort of trying to strike a balance, I guess, or some way of getting the benefit of both of those approaches in some ways. But it’s kind of like the AAPOR standard definitions. You could report a response rate, defined as an AAPOR response rate 3. Or you could provide people with all of the outcome codes and have them figure out what you might have gotten using a different rule. So I’m wondering, how did you come to the balance that you wanted? **Jon Krosnick:** Well, we tried to get away with the simple thing first. We tried to put out all the verbatims and leave it at that and let people do whatever they wanted. But it became really clear that people didn’t want to have to deal with that; they want codes. So then, in order to produce codes – I mean, you heard – Skip and I bring different perspectives and strengths and values to what we did. And one of the focuses for him in our collaboration has been a principled basis for what we provide to people, rather than just saying let’s find some buckets. So a bunch of people said, “Unemployment; let’s create an unemployment bucket,” which felt impossible to implement comparably across multiple surveys. So we struggled – partly based on advice from user committees of people who actually used these data in print. And this is maybe one of the most striking things of all; I don’t think Skip said the number of like/dislike categories. There were – do you remember?
There were like 500, 700, or some gigantic number of categories. So we gathered up every publication we could find, and nobody used the 700 categories. They had all combined them down into like five. And so the question is: why are the coders struggling to create 700 distinctions when users really only want to know did he like the personality, did he like the foreign policy, did he like the performance history, and so on? So we were informed partly also by what users seemed to want to do in the past. But there is no doubt – what we’ve come up with is only one possible approach, and there would be others that would be legitimate. **Male:** So in some cases, you used an inductive approach, looking at the responses that came over the transom. And then in the budget case, you imposed an external template. So I’m wondering, how did you decide between those two approaches? **Skip Lupia:** I was gonna say we tried to go with an ex ante set of categories first, and for likes/dislikes, we just struggled. It was a black hole. **Jon Krosnick:** That’s the answer. **Male:** Yeah, this is slightly related. Because I’m thinking, if I were doing an open-ended question, the alternative is a closed-ended question. And it would be nice to have a closed-ended question covering somewhat the same space as the open-ended, and then you’ve got a check; and then the other is with a focus group. Because you went to the U.S. budget thing, but if you had some focus group people, you might get a better sense, particularly of the ambiguous things. So people are talking about Rehnquist, or whoever it is, and you quickly understand, yeah, the guy who calls him a bozo, he knows what he does. And that would be an alternative way of – sometimes matching the open-ended coding with a closed-ended question or a focus group; you could go and ask somebody in the focus group afterwards. Or you could have them answer your open-ended question, and then you query them afterwards and get some other measures.
Yeah, just to draw a link with what Collin said earlier and with what you’re saying now, one of the things that we discovered was that – we have the audio recordings of these conversations between the interviewer and the respondent, and tomorrow, Hector Santa Cruz is gonna talk about what he discovered from those open-ended – those recordings. But one of the things that we found were instances where the question is: “What job or political office does Dick Cheney hold?” And then the respondent says: “the devil incarnate”. And then the interviewer says, “Well, I have to record an answer here; what answer do you want me to record?” And the response is, “The devil incarnate.” And so the problem is that we haven’t told the respondent the purpose of the question. What if we told the respondent that the researchers want to know if you actually know the job – not how you evaluate him, but if you know the job or political office this guy holds – could you also tell me that as best you know? And the respondent might then say, “Oh, he’s vice president, and I wish he wasn’t,” or something of that sort. So it raises a really interesting insight about the idea that maybe the interviewer does know. And wait until you hear what Hector has to say about that tomorrow. And certainly the respondent doesn’t know what the purpose of the question is, and they may handicap themselves. But I think to come back to your question about whether we’re trying to measure general political knowledge: the notion of these open-ended questions about jobs or political offices is typically, I think, now being viewed from an IRT framework – that is, this is a question that has a right answer, and it’s a difficult one or an easy one. And so it turns out that lots more people know who Dick Cheney was than know who John Roberts was. And so if you know a right answer and you’re getting a certain percent correct, you can build a battery of items on lots of different topics.
If you rephrase the questions as do-you-know-anything – “John Roberts works in the federal government. Do you happen to know anything about him?” – and imagine somebody says, “He’s a male.” Is that a right answer, or did you sort of give them that by the name? So that’s a little silly illustration of the problem we ran into: when we thought about trying to create a category that represented anything they said that might be correct, you start getting off into some silly territory. **Male:** My point really was, given you did an awful lot of work on these open-endeds, you have an assessment of whether they contain knowledge – like “liberal devil” or “anti-Christ”, which suggests an intimate understanding of American politics – so that you could give people information they then don’t have to replicate themselves. So it was really not so much a question of what you did wrong; it's that you’ve done more work than that, and it would be generous to share that work with users so that they could take advantage of it, and then perhaps rethink the question for the next time. **Male:** Using the IRT model, you might want to think in terms of a partial credit model. So if the person says that he’s on the Supreme Court, but they don’t say, you know, the chief justice of the Supreme Court, then characterize that as partial credit. And you use kind of that approach. But to simply say they are incorrect – it’s either correct or incorrect – it seems that you really do abandon some information that’s in the response. **Jon Krosnick:** You’re validating what we did. So in other words, the old version was what you just critiqued, the correct or incorrect with a very strict criterion. And then in the codes we’re releasing, there is completely correct, and then there’s partially correct, and then there are other kinds of answers, and people can recover those.
**Male:** Jon, I’ll go back to the process that was described for developing coding that started with a theoretical framework and then ended up with a reliable coding procedure. It struck me as – and this may not be new to anyone – very similar to what happens in item development for educational testing. The National Assessment of Educational Progress, for example, has spent a lot of years and a lot of time doing very similar kinds of work to develop items that can then be coded – work that again starts from a theoretical framework and ends up with a set of procedures that can be coded accurately, for these questions that have a right or wrong answer or degrees of correctness. And there’s a literature that you may already be familiar with, but I thought that I would mention that there has been a lot of work done on those kinds of processes. **Jon Krosnick:** Skip, do you want to say something? **Skip Lupia:** Yeah, just when we were talking about the chief – Jon, there's this other punch line about the chief justice question, which is – I don't know if you know this, but the correct title of this constitutional office is not chief justice of the Supreme Court; it’s chief justice of the United States, and that’s what’s in the –. So if you had said that, which is actually its constitutional title, you would have been wrong. **Male:** I want to come back to the issue of documentation, which Skip said is so very important, and I think we all agree. But I want to give you just an instance of it; it’s even broader than maybe what you’ve been thinking about. And it’s from an example that my mentor, Howard Schuman, wrote about. He did a study with Dudley Duncan in the Detroit Area Study in 1971. This survey has been repeated every year. And so in ’71, they used it as an opportunity to repeat – I’m sorry, it’s a survey done every year, but on a different topic. And in ’71, they decided to repeat modules from preceding years.
And one of the modules had been about gender roles, and it had been done in the early fifties, so this was twenty years later. And they had an open-ended question, and they discovered some – and I can't recall exactly what it was, but very large – differences that made them wonder what was going on. And so somebody hit on the idea, and I wouldn’t be surprised if it was Howard, who is really sensitive to these kinds of things, and they said, well, you know, maybe it’s not the coding instructions, but the nature of the coders – that twenty years ago, when a coder read this particular answer, they coded it this way; today, in the minds of coders who live in a different kind of environment, they coded it differently. So they could investigate that because they still had the verbatims from the earlier study, and so they could have the 1971 coders go back and recode the information from 1952, or whatever it was, and, in fact, they discovered very different results from the earlier study. And so this is a wonderful illustration of how, particularly in the National Election Studies, but in any study that is going to look at this over time, one has to be concerned about this. But it does suggest more generally, even in one-shot studies, that maybe one wants to record, in terms of documentation, something about the attributes of the coders. Because, after all, you could imagine that at one point in time, maybe the coders are all males, just by happenstance, and at another point they're not – and males, let’s say on this gender question, might code the same set of answers somewhat differently than females might. And so that information, more generally, in terms of documentation, I think is really essential.
SOFTWARE TOOL ARTICLE NaijaCovidAPI: an application programming interface for retrieval of COVID19 data from the Nigerian Center for Disease Control web platform [version 1; peer review: awaiting peer review] Emmanuel Baldwin Mbaya, Babatunde Alao, Philip Ewejobi, Innocent Nwokolo, Victoria Oguntosin, Emmanuel Adetiba 1Department of Electrical & Information Engineering, Covenant University, Ota, Ogun State, Nigeria 2Covenant Applied Informatics and Communication Africa Center of Excellence, Covenant University, Ota, Ogun State, Nigeria 3Department of Computer Science, Federal University Gashua, Gashua, Yobe State, Nigeria 4HRA, Institute for Systems Science, Durban University of Technology, Durban, South Africa First published: 02 Dec 2021, 10:1227 https://doi.org/10.12688/f1000research.74998.1 Abstract Background: In this work, a COVID19 Application Programming Interface (API) was built using the Representational State Transfer (REST) API architecture, and it is designed to fetch data daily from the Nigerian Center for Disease Control (NCDC) website. Methods: The API is developed with the ASP.NET Core Web API framework, using the C# programming language and Visual Studio 2019 as the Integrated Development Environment (IDE). The application has been deployed to Microsoft Azure as the cloud hosting platform, and new data are fetched from the NCDC website using Hangfire, where a job has been scheduled to run daily at 12:30 pm (GMT+1) and load the fetched data into our database. Various API endpoints are defined to interact with the system and get data as needed; data can be fetched for a single state by name, for all states on a particular day, or over a range of days, etc. Results: The results from the data showed that Lagos and Abuja FCT were the hardest-hit states in Nigeria in terms of total confirmed cases, while Lagos and Edo states had the highest death casualties, with 465 and 186 respectively as of August 2020.
This analysis and many more can easily be made as a result of the API we have created, which warehouses all COVID19 data as presented by the NCDC since the first contracted case on February 29, 2020. The system was tested on the BlazeMeter platform, and it had an average of 11 hits/s with a response time of 2,905 milliseconds. Conclusions: The extension of NaijaCovidAPI over existing COVID19 APIs for Nigeria is the access and retrieval of previous data. Our contribution to the body of knowledge is the creation of a data hub for Nigeria's COVID-19 incidence from February 29, 2020, to date. Keywords COVID19 API, NCDC, REST architecture, ASP.NET Core Web This article is included in the Emerging Diseases and Outbreaks gateway. Introduction The Coronavirus disease 2019, known as COVID-19, was termed SARS-CoV-2 by the Committee on Taxonomy of Viruses. The first case occurred in December 2019 in Wuhan, in the Hubei province of China. Like a wildfire, it grew and reached 233 countries, with millions of confirmed cases and deaths. Nigeria recorded its first case on February 28, 2020, when an Italian man working in Nigeria returned from Italy to Lagos. This was confirmed by the Lagos University Teaching Hospital. Since the incidence of the index case, the numbers of deaths and of confirmed, active, and discharged cases have been on the increase in Nigeria and the rest of the world. Data has become increasingly important in research for producing discoveries since the introduction of computers, which brought with them computing, simulation, and modelling. Furthermore, because most of the research on COVID-19 is funded by national governments, the data and outcomes are publicly available. The World Health Organization (WHO) has designed a dashboard that gives real-time data about the global statistics of the COVID-19 outbreak. In addition, Coronatracker was developed to provide real-time news and statistics and to curb the spread of fake information about the disease.
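Dashboards like these are typically fed by JSON endpoints of the kind this paper describes, such as the per-state retrieval mentioned in the abstract. A minimal sketch of consuming such a payload follows; the field names and payload shape are assumptions for illustration, not the documented NaijaCovidAPI schema, and no network call is made.

```python
# Sketch of consuming a per-state JSON payload from a COVID-19 API.
# Field names ("state", "confirmed") are assumed for illustration.
import json

def hardest_hit(payload_text, top=2):
    """Return the `top` states by confirmed cases from a JSON array
    of {"state": ..., "confirmed": ...} records."""
    records = json.loads(payload_text)
    ranked = sorted(records, key=lambda r: r["confirmed"], reverse=True)
    return [r["state"] for r in ranked[:top]]

# Illustrative payload, as a client might receive it from an endpoint.
payload = '''[
  {"state": "Lagos", "confirmed": 17000},
  {"state": "Abuja FCT", "confirmed": 5000},
  {"state": "Edo", "confirmed": 2600}
]'''
print(hardest_hit(payload))  # ['Lagos', 'Abuja FCT']
```

In a real client, the payload text would come from an HTTP GET against one of the API's endpoints instead of a string literal.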
For instance, in Nigeria, the Presidential Task Force (PTF) on COVID-19 gave daily updates from the onset of the index case for about 8 months. The NCDC has developed a platform to provide information on the daily incidences on its website and other social media platforms. These platforms were developed to provide real-time data for public health and research purposes. For analyzing research datasets, Python has continued to increase in popularity among researchers in scientific domains where MATLAB and SPSS previously dominated. Most often, Python’s pandas library uses the Comma Separated Values (CSV) file format to provide data for download from online platforms. Another alternative is to provide an Application Programming Interface (API) for users to access data from web portals. APIs have existed since the advent of the Internet for the exchange of data between two or more programs. They have gained such a high level of acceptance recently that “pundits” are saying we are in the API economy. This position might likely be because people are now more interconnected through different types of applications than ever before, and individuals are now demanding online services at a higher rate than ever. Notably, APIs have now made it possible for businesses to connect for enhanced capacity and profitability far beyond what has been known historically. Big corporations such as Uber, Airbnb, PayPal, and a host of others depend on several APIs and developer services, with Uber paying Google the sum of $58 million for its map services. Nigeria is not left behind in this race, as several start-ups have jumped into the API economy, with companies such as Paystack, who developed APIs to fast-track payments using cards (credit and debit) and direct bank transfers. In this paper, we report the development and deployment of a RESTful API to scrape COVID-19 data from the NCDC website and expose its response in JavaScript Object Notation (JSON) format.
We built a web scraper that scrapes the daily updates from the NCDC website and archives them in a cloud database. Furthermore, we developed a Web API that allows users/developers to consume the data in JSON format. The rest of this paper contains a literature review, the methods, the results, and the conclusions. Literature review An Application Programming Interface (API) is the intermediary between software programs; it specifies what the interaction will be as well as the data formats. This section focuses on previous works done by scholars in different application domains. A RESTful API developed using JavaScript scrapes data from the NCDC website and exposes its response in JSON. The application gives real-time data as it appears on the NCDC website, with details for each state, such as deaths, confirmed cases, cases on admission, discharged cases, and the totals for each segregated datum at the national level. Breast cancer is the second leading cause of mortality among women worldwide; nevertheless, a tiny proportion of men are also at risk. Early detection of this condition is critical, since therapy can then be started, enhancing the chance of survival. The authors created an application that can assist health professionals in prediction and complement the detection of breast cancer. This prediction was accomplished by training a model using the Google Prediction API. The API has integrations with various programming languages such as Java, Go, JavaScript, Python, PHP and Ruby. Researchers, physicians, and the general public, more than ever, face a massive challenge in keeping up with the rapid pace of findings and advancements amid the influx of clinical trials.
To address this issue, researchers combed the ClinicalTrials.gov database for COVID-19-related clinical trials, created unique reports to summarize results, and made meta-data accessible through APIs.\textsuperscript{17} Another study characterized illicit COVID-19 product sales. It involved collecting COVID-19-related data from Twitter and Instagram posts using a combination of site scraping on Instagram and filtering the public streaming Twitter API for keywords correlated with suspect COVID-19 marketing and sales. Data were analyzed using Natural Language Processing (NLP) and deep learning to classify possible buyers, who were then manually annotated for desired characteristics. A customized data dashboard was used to visualize illegal trading posts and provide public health information.\textsuperscript{18} In addition, a reusable dataset on the COVID-19 outbreak in Kerala was built from publicly accessible and crowd-verified data drawn from government bulletins and media outlets as part of a citizen science initiative; it was visualized and published as a dashboard via a front-end Web application backed by a JSON repository that acts as an API for the front end.\textsuperscript{19} Traditional APIs were previously built using the Simple Object Access Protocol (SOAP) web service, an XML (Extensible Markup Language) based protocol for accessing web services over HTTP. SOAP comes with some disadvantages that a newer generation of APIs has addressed. APIs are now migrating towards the REST interface because of its simplicity and its support for more straightforward programmatic access, returning either XML or JSON.\textsuperscript{20} REST APIs are becoming the new norm across web services because of their improved performance, scalability and flexibility compared to SOAP services.
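The format difference between the two styles can be made concrete. The sketch below serializes the same hypothetical record both as XML, as a SOAP-style service typically would, and as JSON, as a REST service typically would, using only Python's standard library (the record and its values are illustrative):

```python
import json
import xml.etree.ElementTree as ET

# One hypothetical record of daily figures (illustrative values only).
record = {"state": "Lagos", "confirmed": 68220, "discharged": 58000}

# SOAP-style services usually exchange XML documents...
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

# ...while REST services most often return compact JSON.
as_json = json.dumps(record)

print(as_xml)
print(as_json)
```

The JSON form is noticeably terser and maps directly onto native data structures in most languages, which is part of why REST responses are easier to consume programmatically.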
Another advantage of REST over SOAP that cannot be ignored is that it consumes fewer resources, and its operations do not involve the varied standard and heterogeneous procedures of SOAP, which makes a domain easier to decompose and recompose. There is thus a significant difference between using a SOAP web service and a REST web service. TogoWS, an integrated web service that gives uniform access to database resources, parsers for database entries and converters for major data formats, illustrates these differences.\textsuperscript{21} Web services can be categorized into data-retrieval services and analysis services. Both types can be exposed using either the REST or the SOAP architecture. In some cases, REST is better suited for data-retrieval services, because it is easily mapped to resource URIs, while SOAP is more suitable for analysis services, which usually require long execution times or complex parameters.\textsuperscript{22} Furthermore, the SOAP architecture produces output in XML, while REST in most cases produces output in JSON. From our extensive review, the REST architecture is the one mostly utilized in the literature, and it was adopted for this study. Gaps identified in the existing COVID API include that previous days' data are unavailable, so one cannot go back in time to assess the curve of the virus, and that data is retrieved only in bulk, so one cannot select a particular state and inspect its data alone. The work at hand overcomes these gaps by adding features that make useful data available to researchers.

**Methods**

To realize the objective of this work, the following tools and frameworks were engaged to develop the web scraper, which fetches data from the NCDC website; the data is then warehoused on MS SQL Server. Furthermore, a Web API with endpoints through which the data can be consumed in JSON was developed.

1.
**ASP.NET Core Web API framework:** This is a framework used for building HyperText Transfer Protocol (HTTP) services that are accessible from various platforms/clients including, but not limited to, web browsers, mobile devices and desktop applications. 2. **Entity Framework (EF) Core:** An Object-Relational Mapping (ORM) framework that supports both code-first and database-first approaches. It is open-source, lightweight, extensible and cross-platform, with support for several database engines. It is used to communicate with the database; EF Core makes it possible to carry out Create, Read, Update and Delete (CRUD) operations and much more without explicitly writing Structured Query Language (SQL) scripts. 3. **HTML Agility Pack:** An agile HTML parser that we used for the web scraping component of the system. It supports XPath (XML Path Language) and uses it to build a read/write Document Object Model (DOM). 4. **SQL Server Management Studio:** An IDE for writing SQL queries and scripts and for designing, deploying and managing databases, from MS SQL Server to Azure SQL Database. 5. **Postman**: An open-source tool for API testing, monitoring and publishing. 6. **Swagger**: An Interface Description Language (IDL) used for describing RESTful APIs. It gives a visual view of the various API endpoints and allows the commands to be executed. 7. **Hangfire**: An easy way to perform background processing in .NET and .NET Core applications, without requiring a Windows Service or a separate process. 8. **Blazemeter**: BlazeMeter is an online, open source-based, enterprise-ready platform that unifies all the functionality needed to carry out various tests such as functional testing, performance testing, API testing and much more. The application block diagram is shown in Figure 1. The different procedures that were carried out to develop the application are hereafter described.
**Web scraper**

To scrape a web page, one has to understand the Hypertext Markup Language (HTML) structure and then use the XPath of the element(s). The NCDC daily update has details for every state concerning its cumulative COVID data and the national summary of cases. Figure 2 is a screenshot of the NCDC website showing the data that can be obtained from the webpage.

---

**Figure 1. Application block diagram.**

---

**Figure 2. NCDC COVID-19 website.**

Upon study of the source code of the NCDC website, the HTML elements containing the confirmed cases, active cases, deaths, and discharges have an XPath of "//h2[@class = 'text-right text-white']", which can be used to fetch all nodes with that structure. First, the package was downloaded using NuGet Packages, added to the project automatically by Visual Studio, and then imported into the class that handles the scraping of data. After data was scraped, a validation was done to ensure that the date of the data was yesterday's date (because the real-time data on the NCDC website is the previous day's data). If the validation failed, the scraping was rescheduled for another two hours; if it passed, the data was stored in the database. Figure 3 shows the Unified Modelling Language (UML) Class Diagram for the system. The HomeController is the class that handles all HTTP requests, and the various endpoints are represented as methods. Annotations were used to state the HTTP method to respond to, the GET method for all the endpoints. The DatabaseContext class is the intermediary between the database and the system. The created tables are defined as properties with a return type of DbSet. The DatabaseContext class inherits from the DbContext superclass, and the superclass has an OnModelCreating method, which is used to state the properties (such as Primary Key, Foreign Key, etc.) as defined in the tables. The DataService class implements the IDataService interface.
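The XPath extraction described above can be sketched as follows. This is an illustrative Python equivalent of what HTML Agility Pack does in the actual system, run against an embedded, well-formed fragment rather than the live NCDC page; the fragment and its numbers are placeholder assumptions.

```python
import xml.etree.ElementTree as ET

# A well-formed fragment mimicking the structure of the NCDC summary page.
# The figures are placeholders; the class attribute matches the XPath above.
html = """
<div>
  <h2 class="text-right text-white">182000</h2>
  <h2 class="text-right text-white">12915</h2>
  <h2 class="text-right text-white">2219</h2>
  <h2 class="text-right text-white">166826</h2>
</div>
"""

root = ET.fromstring(html)
# Equivalent of //h2[@class='text-right text-white'] on this fragment.
nodes = root.findall(".//h2[@class='text-right text-white']")
confirmed, active, deaths, discharged = (int(n.text) for n in nodes)
print(confirmed, active, deaths, discharged)
```

On the real page, HTML Agility Pack plays the role of the parser here: the same predicate selects all four summary nodes in one query, and the order of the nodes determines which figure is which.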
It is good programming practice to use interfaces to model multiple inheritance, a feature that some object-oriented languages support to allow a class to have more than one superclass. The DataService type makes use of Language Integrated Query (LINQ) statements to execute queries on the database. The DataModel is the model of the data fetched from the NCDC website and of the data to be saved in the database; the DataModel class has the properties that capture the data needed from the NCDC website. Other details, such as Sampled Cases, Confirmed Cases, Active Cases, Discharged, and Deaths at the national level, are captured by the ScraperData() method. After data has been scraped, a test is performed to ensure that the fetched data is the latest from the website. To achieve that, the date of the scraped data is compared with the previous day's date (since the data on the NCDC website reflects the previous day) to ensure that it has not already been captured. If the data does not pass this test, it is not processed further, and the routine is scheduled to run again after two hours, because the website has not yet been updated.

Figure 3. UML class diagram of the application.

If the data passes the test, the `SaveData()` method is called to store the data in the database. Entity Framework was adopted to handle database transactions rather than writing Structured Query Language (SQL) scripts directly in the code.

**Automation**

Since data needs to be fetched from the NCDC website daily, the scraping process was automated. Hangfire was adopted to achieve this automation, as it is an open-source library that supports background jobs, which perform fire-and-forget, delayed, recurring, long-running, short-running, and CPU- or I/O-intensive tasks inside ASP.NET applications, with no Windows Service or Task Scheduler required. We utilized the delayed and recurring jobs options.
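The date test and the two-hour reschedule amount to a small decision rule, sketched below in Python. The function name and return values are hypothetical; in the actual system this logic lives in C# and Hangfire's recurring and delayed jobs carry out the scheduling.

```python
from datetime import date, timedelta

def decide(scraped_date: date, today: date) -> str:
    """Decide what to do with freshly scraped data.

    The NCDC page shows the previous day's figures, so valid data is
    dated yesterday; anything else means the page is stale and the
    scrape should be retried later.
    """
    if scraped_date == today - timedelta(days=1):
        return "save"          # passes the date test -> store in the database
    return "retry in 2 hours"  # fails the date test -> schedule a delayed job

print(decide(date(2021, 8, 14), date(2021, 8, 15)))  # fresh data -> save
print(decide(date(2021, 8, 12), date(2021, 8, 15)))  # stale data -> retry
```

Keeping the rule this small makes the background job idempotent: running it again after two hours either saves the now-fresh data or harmlessly retries again.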
Under the recurring option, the job was scheduled to run at 12:30 pm (GMT +1). This decision was made because the NCDC website is not normally updated in the early hours of the day. The delayed-jobs option is used in instances where data has been fetched but failed the date test (i.e., the fetched data has not been updated); the job is then rescheduled to run after two hours. The Hangfire packages (`Hangfire.AspNetCore` and `Hangfire.SqlServer`) have to be installed from NuGet Packages before they can be used in a project. After installation, Hangfire was registered in the Startup class: in ConfigureServices it was configured to use the SQL database through the already created connection, while in the Configure method the Hangfire Dashboard and Server, which are additional services for monitoring the background service and its status, were registered. The scraper is scheduled to run at 12:30 pm (GMT +1) using what is known as a cron expression. A cron expression has five fields; an asterisk (*) in a field means "every", so five asterisks mean every minute of every hour of every day of every month on every day of the week, as shown in Table 1. If for any reason the job fails the date test, it is rescheduled to run in the next two (2) hours. After successfully scheduling a job, the Hangfire Dashboard gives a graphical overview of the status of each job, showing whether it was executed successfully, the scheduled jobs, the recurring jobs and much more. Figure 4 shows the Hangfire Dashboard with the recurring job.

**API endpoints**

API endpoints are the channels through which other applications can communicate with or consume an API. Table 2 lists the endpoints that this API provides. Endpoints are the point of entry, i.e., the Universal Resource Locator (URL). The state parameter expects a valid name of a state in Nigeria; states with more than one word, such as Akwa Ibom, are also acceptable. The date parameters take a date in the format DD-MM-YYYY only.

### Table 1. Cron expressions.
<table> <thead> <tr> <th>S/N</th> <th>Field</th> <th>Values</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>min</td> <td>0–59</td> <td>For each minute</td> </tr> <tr> <td>2</td> <td>hour</td> <td>0–23</td> <td>Uses 24 hours format</td> </tr> <tr> <td>3</td> <td>day of month</td> <td>1–31</td> <td>1 for 1st day of the month and increases till month-end</td> </tr> <tr> <td>4</td> <td>month</td> <td>1–12</td> <td>1 is for January and increases till December</td> </tr> <tr> <td>5</td> <td>day of week</td> <td>0–6</td> <td>0 is for Sunday and increases till Saturday</td> </tr> </tbody> </table>

![Hangfire dashboard showing recurring jobs.](image)

Implementation

The API can be accessed using any Web browser or software for consuming APIs, such as Katalon Studio, Parasoft SOAtest, Eggplant, Postman etc. Figure 7 shows a Graphical User Interface, powered by Swagger, that makes the API usable even by non-technical people, while software such as Postman or Eggplant needs a level of technical understanding of how to consume an API.

Operation

The API was developed on a machine with the following properties:

- Processor: Intel(R) Core (TM) i5-4300U CPU @ 1.90 GHz 2.50 GHz
- RAM: 12GB
- OS: Windows 10 Pro x64
- IDE: Visual Studio 2019
- DBMS: Microsoft SQL Server Management Studio 18.9.2

The minimum software requirement is a machine able to run Visual Studio 2019.\textsuperscript{23}

Results & discussion

This section briefly presents the results of the various tests conducted on the NigeriaCovid19API. Since the system is designed to fetch COVID19 data as showcased on the NCDC website and give a JSON response to API calls, the data is only warehoused and no manipulations were performed on it. The tests were carried out to ascertain the functionality of the various API endpoints and to ensure that data has been fetched successfully. This section presents the tests we carried out with Postman, Swagger and MS SQL Server on a localhost computer.
Test with MS SQL Server

After data has been fetched, it is warehoused on an MS SQL Server. To confirm that the data has been stored successfully, we ran a READ operation (i.e., a SELECT instruction to fetch data from the database) to ascertain that data was successfully fetched and stored, as shown in Figure 5.

Test with Postman

Postman was developed to help with monitoring and testing APIs. We engaged it to inspect the API endpoints and their responses. Figure 6 shows the JSON response for the /api/data endpoint, which gives the latest data in the database.

Test with Swagger

Swagger provides a User Interface (UI) for interacting with a REST API. It allows a user to visually interact with and test an API, using the XML documentation tags and annotations to generate the display, alongside additional features for testing the API. Figure 7 shows the general UI for Swagger and Figure 8 shows the JSON response for /api/data.

<table> <thead> <tr> <th>Url/Endpoint</th> <th>Verb</th> <th>Operation</th> <th>Description</th> <th>Success</th> <th>Failure</th> </tr> </thead> <tbody> <tr> <td>/api/data/</td> <td>GET</td> <td>READ</td> <td>Read all resources. Show only the latest data</td> <td>200 OK</td> <td>400 Bad Request</td> </tr> <tr> <td>/api/data/s/(state)</td> <td>GET</td> <td>READ</td> <td>Read all the data of a particular state</td> <td>200 OK</td> <td>400 Bad Request</td> </tr> <tr> <td>/api/data/d/(date)</td> <td>GET</td> <td>READ</td> <td>Read all the data of a particular date</td> <td>200 OK</td> <td>404 Not Found</td> </tr> <tr> <td>/api/data/(date) &amp;</td> <td>GET</td> <td>READ</td> <td>Read all the data of a particular date range</td> <td>200 OK</td> <td>405 Not Allowed</td> </tr> <tr> <td>/api/data/(state) &amp;</td> <td>GET</td> <td>READ</td> <td>Read all the data of a particular state on a certain date range</td> <td>200 OK</td> <td>405 Not Allowed</td> </tr> </tbody> </table>

Figure 5. SQLQuery output.

Figure 6.
Postman JSON response for api/data.

Figure 7. Swagger UI for the API.

Testing with BlazeMeter

BlazeMeter is an online, open source-based, enterprise-ready platform that unifies all the functionality needed to carry out various tests such as functional testing, performance testing, API testing and much more. API monitoring enables one to assess the impact of application performance enhancements: performance before and after adjustments can be compared using historical response timing data. To create a test, log on to the BlazeMeter website, select Create Test, select Performance Test, select Enter URL and type in the URL request for the API, select the number of users to be used for testing and the duration (for the free package the default location must be used; a different location requires an account upgrade), and attach the necessary header variables where needed. The API was tested on this platform: Figure 10 shows the summary, Figure 11 shows the response times and Figure 12 shows the hits.

Throughput is the number of requests completed in a time interval, i.e., the volume of transactions created over time during a test; it can also be described as the maximum capacity of a website or application. The system had an average of 11 Hits/s with a response time of 2905 ms. The number of concurrent users is the number of users actively using the app or website at any given moment; we used 20 for the test. The number of hits per second generated by those concurrent users is determined solely by their interactions with the app. In Figure 11, Response Time is the time that passed between issuing the request and receiving a full response, while Latency is the time from sending the request, through processing on the server side, to the time the client received the first byte.

**Figure 10. BlazeMeter result summary.**

<table> <thead> <tr> <th>Request</th> <th># Samples</th> <th>Avg Time</th> <th>90% Time</th> <th>Max Time</th> </tr> </thead> <tbody> <tr> <td>Get_Recent_Data_Test</td> <td>4713</td> <td>2472.39 ms</td> <td>3455 ms</td> <td>4971 ms</td> </tr> <tr> <td>Get_Stake_Data_Test</td> <td>4716</td> <td>1288.87 ms</td> <td>1791 ms</td> <td>23135 ms</td> </tr> <tr> <td>Get_Date_Data_Test</td> <td>4704</td> <td>1210.28 ms</td> <td>1766 ms</td> <td>2451 ms</td> </tr> </tbody> </table>

**Figure 11. BlazeMeter response time results.**

The time it takes for data or a request to travel from the source to the destination is known as network latency, measured in milliseconds. In Figure 12, the number of HTTP requests sent by the user(s) to the Web server in a second is referred to as hits per second. Hits per second measures the overall load placed by concurrent virtual users on the server, regardless of whether the requests execute successfully on the server side.

Plots from extracted data

Upon successful deployment and testing of the API, data were extracted and charts plotted to show the rate of infection, the death and recovery rates, and the states with the highest numbers, as shown in Figures 13, 14 and 15 respectively. Figures 13 and 14 have two vertical lines, red and blue. The red line indicates the end of the first wave and commencement of the second wave, while the blue signifies the end of the second wave and the start of the third wave.

![Figure 12. BlazeMeter hits per user result.](image1)

![Figure 13. Covid-19 Nigeria - infection rate.](image2)

Initially, as seen in Figure 13, the spread in Nigeria was slow, with single digits after the first case was recorded, mainly in Lagos and Abuja, but as the days went by the number of active cases grew to reach a maximum of 12,915 cases (as of August 2020) in Lagos state, and the nationwide total of confirmed cases kept rising to around 182,000 as of August 2021.
The first death in Nigeria as a result of COVID-19 was recorded on the 23rd of March 2020; the patient had returned from medical treatment in the United Kingdom. With the rising number of cases, the Federal Government of Nigeria, on the advice of the Presidential Task Force on the Control of COVID-19, placed a ban on all international flights in and out of the country except for emergency and essential flights. Deaths continued to increase, spreading from Ogun state to other states, and hit a peak of 465 deaths in Lagos state; however, many patients were able to recover and were discharged from the centres, a total of 166,826 persons across the nation, as seen in Figure 14. The deaths line in the figure looks like a straight line because, in comparison with the total discharges, the total deaths seem small, with a peak value of 2219 as of August 2021. Lagos state is the hardest hit in terms of the number of confirmed cases with a total of 68,220 cases; FCT Abuja reached 20,158 cases, Kaduna 9,195, Plateau 9,205, Rivers 8,486, Oyo 7,571 and Ogun 5,035 cases. Figure 15 shows the most affected states, those with total confirmed cases above 5,000 as of August 2021. The high level of cases in Abuja and Lagos can be attributed to the active community case search and testing. This public health response has led to the detection of more asymptomatic cases at the community level, as stated by Ref. 2. Nigeria shares land borders with the Republic of Benin in the west, Chad and Cameroon in the east, and Niger in the north. Figure 16 shows the confirmed cases in comparison with the border countries: Nigeria has the highest number of confirmed cases, followed by Cameroon with above 75,000 cases as of August 2021, while Benin, Niger and Chad each have below 25,000 cases. ![Nigeria and its boundary countries: Confirmed Cases](image1) Figure 16.
Nigeria and boundary countries COVID19 confirmed cases.

![Nigeria and Most hit African Countries](image2)

Figure 17. Nigeria vs top 5 most affected African countries.

As shown in Figure 17, in comparison with the five most affected countries on the continent of Africa, Nigeria ranks tenth on the continent as of August 2021 in terms of the number of confirmed cases, with about 180,000 cases, while South Africa is the most affected on the continent with over 2.5 million cases, followed by Morocco with over 750,000 cases, then Tunisia coming fourth with about 600,000 cases and Ethiopia fifth with around 300,000 cases.

Conclusion

Knowing the importance of data to research, we have designed and developed an API that gives a JSON response to HTTP requests. It was built using the REST architecture and is designed to fetch data from the NCDC website daily, with the data warehoused on MS SQL Server. Postman and Swagger were used to test the data endpoints and ensure that the appropriate data is fetched as defined by each API endpoint, SQL SELECT queries were used to ensure that the database is properly populated with the right data, and BlazeMeter was used to test the performance of the whole system, which achieved 11 Hits/s with a response time of 2905 ms. Its advantage over existing COVID19 APIs for Nigeria is access to previous data, which means that researchers and data enthusiasts can use the API to obtain data as required by their various research needs with a simple URL, rather than having to comb through the archives of the NCDC website. Simply put, this system makes Nigeria's COVID19 data easily accessible, even to non-technical individuals, through our user interface powered by Swagger.
The source code and the dataset used in this paper are available as open-source on GitHub.\textsuperscript{25,26}

Data availability

Software availability

Software available from: https://zenodo.org/record/5726851\textsuperscript{26}

Source code available from: https://github.com/emmaraj/CovidApiNigeria\textsuperscript{25}

Archived source code as at time of publication: https://doi.org/10.5281/zenodo.5726851\textsuperscript{26}

License: MIT License

Acknowledgements

The authors acknowledge the financial support offered by Covenant University in the actualization of this research work for publication.

References

14. Reference Source
15. Reference Source
16. Reference Source
17. Reference Source
What Is An Algorithm?

Algorithms are the ideas behind computer programs. An algorithm is the thing which stays the same whether the program is in Pascal running on a Cray in New York or is in BASIC running on a Macintosh in Kathmandu! To be interesting, an algorithm has to solve a general, specified problem. An algorithmic problem is specified by describing the set of instances it must work on and what desired properties the output must have.

Example: Sorting

Input: A sequence of N numbers $a_1 \ldots a_n$
Output: the permutation (reordering) of the input sequence such that $a_1 \leq a_2 \leq \ldots \leq a_n$.

We seek algorithms which are correct and efficient.

Correctness

For any algorithm, we must prove that it always returns the desired output for all legal instances of the problem. For sorting, this means even if (1) the input is already sorted, or (2) it contains repeated elements.

Correctness is Not Obvious!

The following problem arises often in manufacturing and transportation testing applications. Suppose you have a robot arm equipped with a tool, say a soldering iron. To enable the robot arm to do a soldering job, we must construct an ordering of the contact points, so the robot visits (and solders) the first contact point, then visits the second point, third, and so forth until the job is done. Since robots are expensive, we need to find the order which minimizes the time (ie. travel distance) it takes to assemble the circuit board. You are given the job to program the robot arm. Give me an algorithm to find the best tour!

Nearest Neighbor Tour

A very popular solution starts at some point \( p_0 \) and then walks to its nearest neighbor \( p_1 \) first, then repeats from \( p_1 \), etc. until done.
Pick and visit an initial point \( p_0 \)
\( p = p_0 \); \( i = 0 \)
While there are still unvisited points
    \( i = i + 1 \)
    Let \( p_i \) be the closest unvisited point to \( p_{i-1} \)
    Visit \( p_i \)
Return to \( p_0 \) from \( p_i \)

This algorithm is simple to understand and implement and very efficient. However, it is not correct! Always starting from the leftmost point or any other point will not fix the problem.

Closest Pair Tour

Always walking to the closest point is too restrictive, since that point might trap us into making moves we don't want. Another idea would be to repeatedly connect the closest pair of points whose connection will not cause a cycle or a three-way branch to be formed, until we have a single chain with all the points in it.

Let \( n \) be the number of points in the set
For \( i = 1 \) to \( n - 1 \) do
    \( d = \infty \)
    For each pair of endpoints \((x, y)\) of partial paths
        If \( \text{dist}(x, y) \leq d \) then \( x_m = x \), \( y_m = y \), \( d = \text{dist}(x, y) \)
    Connect \((x_m, y_m)\) by an edge
Connect the two endpoints by an edge

Although it works correctly on the previous example, other data cause trouble. This algorithm is **not correct**!

A Correct Algorithm

We could try all possible orderings of the points, then select the ordering which minimizes the total length:

\( d = \infty \)
For each of the \( n! \) permutations \( \Pi_i \) of the \( n \) points
    If \( \text{cost}(\Pi_i) \leq d \) then
        \( d = \text{cost}(\Pi_i) \) and \( P_{\text{min}} = \Pi_i \)
Return \( P_{\text{min}} \)

Since all possible orderings are considered, we are guaranteed to end up with the shortest possible tour. Because it tries all \( n! \) permutations, it is extremely slow, much too slow to use when there are more than 10-20 points. No efficient, correct algorithm exists for the traveling salesman problem, as we will see later.

Efficiency

"Why not just use a supercomputer?"
Supercomputers are for people too rich and too stupid to design efficient algorithms! A faster algorithm running on a slower computer will always win for sufficiently large instances, as we shall see. Usually, problems don’t have to get that large before the faster algorithm wins. Expressing Algorithms We need some way to express the sequence of steps comprising an algorithm. In order of increasing precision, we have English, pseudocode, and real programming languages. Unfortunately, ease of expression moves in the reverse order. I prefer to describe the ideas of an algorithm in English, moving to pseudocode to clarify sufficiently tricky details of the algorithm. The RAM Model Algorithms are the only important, durable, and original part of computer science because they can be studied in a machine and language independent way. The reason is that we will do all our design and analysis for the RAM model of computation: - Each "simple" operation (+, -, =, if, call) takes exactly 1 step. - Loops and subroutine calls are not simple operations, but depend upon the size of the data and the contents of a subroutine. We do not want "sort" to be a single step operation. - Each memory access takes exactly 1 step. We measure the run time of an algorithm by counting the number of steps. This model is useful and accurate in the same sense as the flat-earth model (which is useful)! Best, Worst, and Average-Case The *worst case complexity* of the algorithm is the function defined by the maximum number of steps taken on any instance of size $n$. ![Graph showing worst, average, and best case complexities] The *best case complexity* of the algorithm is the function defined by the minimum number of steps taken on any instance of size $n$. The *average-case complexity* of the algorithm is the function defined by an average number of steps taken on any instance of size $n$. Each of these complexities defines a numerical function – time vs. size! 
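The nearest-neighbor heuristic from the robot-arm problem is a good example of an algorithm that is efficient but not correct. Here is a sketch in Python (the point coordinates are an illustrative bad case on a line, not from any particular circuit board); a simple left-to-right sweep beats the greedy tour:

```python
from math import dist

def tour_length(points, tour):
    """Total length of a closed tour (including the return to the start)."""
    return sum(dist(points[tour[k - 1]], points[tour[k]])
               for k in range(len(tour)))

def nearest_neighbor_tour(points):
    """Greedy heuristic: start at points[0] and always walk to the
    closest unvisited point. Simple and efficient, but not optimal."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda i: dist(points[last], points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

# Points on a line, starting near the middle: the heuristic zig-zags.
pts = [(0, 0), (1, 0), (-2, 0), (7, 0), (-12, 0)]
nn = nearest_neighbor_tour(pts)
# Sweeping the line left-to-right instead gives a shorter closed tour.
sweep = [4, 2, 0, 1, 3]
print(nn, tour_length(pts, nn), tour_length(pts, sweep))
```

On this instance the greedy tour keeps jumping back and forth across the start point, so its total length exceeds the sweep's; this is exactly why "simple and efficient" does not imply "correct".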
Insertion Sort

One way to sort an array of $n$ elements is to start with an empty list, then successively insert new elements in the proper position:

$$a_1 \leq a_2 \leq \ldots \leq a_k \mid a_{k+1} \ldots a_n$$

At each stage, the insertion leaves behind a sorted list, and after $n$ insertions the list contains exactly the right elements. Thus the algorithm must be correct. But how efficient is it?

Note that the run time changes with the input permutation! (even for a fixed problem size) How does insertion sort do on sorted permutations? How about unsorted permutations?

Exact Analysis of Insertion Sort

Count the number of times each line of pseudocode will be executed:

<table>
<thead>
<tr>
<th>Line</th>
<th>InsertionSort(A)</th>
<th>#Inst.</th>
<th>#Exec.</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>for j:=2 to len. of A do</td>
<td>c₁</td>
<td>n</td>
</tr>
<tr>
<td>2</td>
<td>key:=A[j]</td>
<td>c₂</td>
<td>n-1</td>
</tr>
<tr>
<td>3</td>
<td>/* put A[j] into A[1..j-1] */</td>
<td>c₃=0</td>
<td>/</td>
</tr>
<tr>
<td>4</td>
<td>i:=j-1</td>
<td>c₄</td>
<td>n-1</td>
</tr>
<tr>
<td>5</td>
<td>while i &gt; 0 &amp; A[i] &gt; key do</td>
<td>c₅</td>
<td>Σⱼ tⱼ</td>
</tr>
<tr>
<td>6</td>
<td>A[i+1]:= A[i]</td>
<td>c₆</td>
<td>Σⱼ (tⱼ−1)</td>
</tr>
<tr>
<td>7</td>
<td>i := i-1</td>
<td>c₇</td>
<td>Σⱼ (tⱼ−1)</td>
</tr>
<tr>
<td>8</td>
<td>A[i+1]:=key</td>
<td>c₈</td>
<td>n-1</td>
</tr>
</tbody>
</table>

The for statement is executed \((n-1)+1\) times (why?). Within the for statement, "key:=A[j]" is executed \(n-1\) times.

Steps 5, 6, 7 are harder to count. Let \(t_j = 1 + \) the number of elements that have to be slid right to insert the \(j\)th item. Step 5 is executed \(t_2 + t_3 + \cdots + t_n\) times. Steps 6 and 7 are each executed \((t_2 - 1) + (t_3 - 1) + \cdots + (t_n - 1)\) times.

Add up the executed instructions for all pseudocode lines to get the run time of the algorithm:

\[
c_1 n + c_2(n-1) + c_4(n-1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8(n-1)
\]

What are the \( t_j \)'s?
They depend on the particular input.

**Best Case** If the input is already sorted, all the \( t_j \)'s are 1. Hence, the best-case time is

\[
c_1 n + (c_2 + c_4 + c_5 + c_8)(n - 1) = Cn + D
\]

where \( C \) and \( D \) are constants.

**Worst Case** If the input is sorted in descending order, we will have to slide all of the already-sorted elements, so \( t_j = j \), and step 5 is executed

\[
\sum_{j=2}^{n} j = (n^2 + n)/2 - 1
\]

times.

Exact Analysis is Hard!

We have agreed that the best-, worst-, and average-case complexity of an algorithm is a numerical function of the size of the instances. However, it is difficult to work with exactly because it is typically very complicated! Thus it is usually cleaner and easier to talk about upper and lower bounds on the function. This is where the dreaded big O notation comes in! Since running our algorithm on a machine which is twice as fast changes the running time only by a multiplicative constant of 2, we are going to have to ignore constant factors anyway.

Names of Bounding Functions

Now that we have clearly defined the complexity functions we are talking about:

- $g(n) = O(f(n))$ means $C \times f(n)$ is an upper bound on $g(n)$.
- $g(n) = \Omega(f(n))$ means $C \times f(n)$ is a lower bound on $g(n)$.
- $g(n) = \Theta(f(n))$ means $C_1 \times f(n)$ is an upper bound on $g(n)$ and $C_2 \times f(n)$ is a lower bound on $g(n)$.

Got it? All of these definitions imply a constant $n_0$ beyond which they are satisfied. We do not care about small values of $n$. The value of $n_0$ shown is the minimum possible value; any greater value would also work.

(a) $f(n) = \Theta(g(n))$ if there exist positive constants $n_0$, $c_1$, and $c_2$ such that to the right of $n_0$, the value of $f(n)$ always lies between $c_1 \cdot g(n)$ and $c_2 \cdot g(n)$ inclusive.

(b) $f(n) = O(g(n))$ if there are positive constants $n_0$ and $c$ such that to the right of $n_0$, the value of $f(n)$ always lies on or below $c \cdot g(n)$.
(c) $f(n) = \Omega(g(n))$ if there are positive constants $n_0$ and $c$ such that to the right of $n_0$, the value of $f(n)$ always lies on or above $c \cdot g(n)$.

Asymptotic notation ($O, \Theta, \Omega$) is as close as we can practically come to working with complexity functions.

What does all this mean?

\[3n^2 - 100n + 6 = O(n^2) \text{ because } 3n^2 > 3n^2 - 100n + 6\]
\[3n^2 - 100n + 6 = O(n^3) \text{ because } .01n^3 > 3n^2 - 100n + 6\]
\[3n^2 - 100n + 6 \neq O(n) \text{ because } c \cdot n < 3n^2 \text{ when } n > c\]
\[3n^2 - 100n + 6 = \Omega(n^2) \text{ because } 2.99n^2 < 3n^2 - 100n + 6\]
\[3n^2 - 100n + 6 \neq \Omega(n^3) \text{ because } 3n^2 - 100n + 6 < n^3\]
\[3n^2 - 100n + 6 = \Omega(n) \text{ because } 10^{10^{10}} n < 3n^2 - 100n + 6\]
\[3n^2 - 100n + 6 = \Theta(n^2) \text{ because both } O \text{ and } \Omega \text{ apply}\]
\[3n^2 - 100n + 6 \neq \Theta(n^3) \text{ because only } O \text{ applies}\]
\[3n^2 - 100n + 6 \neq \Theta(n) \text{ because only } \Omega \text{ applies}\]

(Each inequality above holds for all sufficiently large \( n \).)

Think of the equality as meaning *in the set of functions*.

Note that time complexity is every bit as well-defined a function as $\sin(x)$ or your bank account as a function of time.

Testing Dominance

$f(n)$ dominates $g(n)$ if $\lim_{n \to \infty} g(n)/f(n) = 0$, which is the same as saying $g(n) = o(f(n))$. Note the little-o; it means "grows strictly slower than".

Knowing the dominance relation between common functions is important because we want algorithms whose time complexity is as low as possible in the hierarchy. If $f(n)$ dominates $g(n)$, $f$ is much larger (i.e., slower) than $g$.
- $n^a$ dominates $n^b$ if $a > b$ since

\[
\lim_{n \to \infty} \frac{n^b}{n^a} = \lim_{n \to \infty} n^{b-a} = 0
\]

- $n^a + o(n^a)$ doesn't dominate $n^a$ since

\[
\lim_{n \to \infty} \frac{n^a}{n^a + o(n^a)} = 1
\]

Time taken for various problem sizes, assuming one operation per microsecond:

<table>
<thead>
<tr>
<th>Complexity</th>
<th>10</th>
<th>20</th>
<th>30</th>
<th>40</th>
</tr>
</thead>
<tbody>
<tr>
<td>$n$</td>
<td>0.00001 sec</td>
<td>0.00002 sec</td>
<td>0.00003 sec</td>
<td>0.00004 sec</td>
</tr>
<tr>
<td>$n^2$</td>
<td>0.0001 sec</td>
<td>0.0004 sec</td>
<td>0.0009 sec</td>
<td>0.0016 sec</td>
</tr>
<tr>
<td>$n^3$</td>
<td>0.001 sec</td>
<td>0.008 sec</td>
<td>0.027 sec</td>
<td>0.064 sec</td>
</tr>
<tr>
<td>$n^5$</td>
<td>0.1 sec</td>
<td>3.2 sec</td>
<td>24.3 sec</td>
<td>1.7 min</td>
</tr>
<tr>
<td>$2^n$</td>
<td>0.001 sec</td>
<td>1.0 sec</td>
<td>17.9 min</td>
<td>12.7 days</td>
</tr>
<tr>
<td>$3^n$</td>
<td>0.059 sec</td>
<td>58 min</td>
<td>6.5 years</td>
<td>3855 cent</td>
</tr>
</tbody>
</table>

Working with the Notation

Suppose \( f(n) = O(n^2) \) and \( g(n) = O(n^2) \). What do we know about \( g'(n) = f(n) + g(n) \)? Adding the bounding constants shows \( g'(n) = O(n^2) \).

What do we know about \( g''(n) = f(n) - g(n) \)? Since the bounding constants don't necessarily cancel, we can still say only that \( g''(n) = O(n^2) \).

We know nothing about the lower bounds on \( g' \) and \( g'' \) because we know nothing about the lower bounds on \( f \) and \( g \).

Suppose \( f(n) = \Omega(n^2) \) and \( g(n) = \Omega(n^2) \). What do we know about \( g'(n) = f(n) + g(n) \)? Adding the lower bounding constants shows \( g'(n) = \Omega(n^2) \). What do we know about \( g''(n) = f(n) - g(n) \)? We know nothing about the lower bound of the difference!

Problem 2.1-4:

(a) Is $2^{n+1} = O(2^n)$?
(b) Is $2^{2n} = O(2^n)$?

(a) Is $2^{n+1} = O(2^n)$? Is $2^{n+1} \leq c \cdot 2^n$? Yes, if $c \geq 2$, for all $n$.

(b) Is $2^{2n} = O(2^n)$? Is $2^{2n} \leq c \cdot 2^n$? Note $2^{2n} = 2^n \cdot 2^n$. Is $2^n \cdot 2^n \leq c \cdot 2^n$? Is $2^n \leq c$? No!
Certainly, for any constant $c$ we can find an $n$ for which $2^n > c$, so this is not true, and $2^{2n} \neq O(2^n)$.
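The step counts derived for insertion sort above can be checked empirically. The following Python sketch is our own (the lecture's pseudocode is 1-based; here the while-test count plays the role of \( \sum_j t_j \)):

```python
def insertion_sort_counts(a):
    """Insertion sort; returns (sorted copy, number of while-test executions).

    The test count corresponds to the sum of t_j in the analysis above,
    where t_j = 1 + the number of elements slid right to insert the jth item.
    """
    a = list(a)
    tests = 0
    for j in range(1, len(a)):      # j = 2 .. n in the 1-based pseudocode
        key = a[j]
        i = j - 1
        while True:
            tests += 1              # count every execution of the loop test
            if i >= 0 and a[i] > key:
                a[i + 1] = a[i]     # slide an element one slot to the right
                i -= 1
            else:
                break
        a[i + 1] = key
    return a, tests

n = 10
_, best_tests = insertion_sort_counts(list(range(n)))          # already sorted
_, worst_tests = insertion_sort_counts(list(range(n, 0, -1)))  # descending
assert best_tests == n - 1                  # all t_j = 1
assert worst_tests == (n * n + n) // 2 - 1  # sum of j for j = 2 .. n
```

On sorted input each insertion performs exactly one (failing) test, giving \( n-1 \) total; on reverse-sorted input \( t_j = j \), matching \( (n^2+n)/2 - 1 \).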
By posing a continual stream of pertinent questions, a nonmathematical computer program can prod freshman physics students toward an analytical solution to one-dimensional kinematics problems.

The use of computers in physics pedagogy is almost entirely focused on "parameter-based" software: students focus on a set of initial variables, which result in a subsequent output. For example, we can investigate the effects of launch angle, speed, and air drag (the parameters) on a projectile's motion¹ (the output) in several ways, including the PhET Web-based projectile simulator (http://phet.colorado.edu), Physics Academic Software's Windows-based simulator (http://webassign.net/pas), or, for an example in Mathematica, Wolfram's Projectile Motion (http://demonstrations.wolfram.com/ProjectileMotion). However, despite vast differences in software form (symbolic, numeric, graphical, and so on), perhaps spanning generations of computer technology, the user experience remains fixed in this parameter-based mode, giving much of the physics pedagogical software the same "feeling."

From a programming perspective, this commonality results from the procedural programming languages used to build the software. That is, the programmer identifies the problem to be solved, chooses a desired implementation scheme, and gives the computer specific step-by-step instructions on how to proceed, subject to some terminal condition(s). Such software is fundamentally weak, however, where input parameters, data, or desired outcomes are not well defined.²,³

Consider a typical end-of-chapter physics problem, such as problem 1.34 in Randall Knight's book:⁴

"A Porsche at rest accelerates from a stop light at 5 m/s² for 5 seconds, then coasts for 3 more seconds. How far has it traveled?"

Or take problem 2.47 from a book by Douglas Giancoli:⁵

"A falling stone takes 0.30 s to travel past a window 2.2 m tall. From what height above the top of the window did the stone fall?"
Any number of available software packages can numerically, symbolically, or graphically simulate such problems, reinforcing relevant concepts in the procedural programming domain. But there’s another desirable software mode not being addressed. What about software that “understands” the laws of physics to the point where it can generate a textual, step-by-step, analytical solution to such problems? (By “analytical” we mean solutions that are found through direct, often symbolic, manipulation of the relevant physical laws, using algebra, trigonometry, and calculus.) This question motivated our work for two primary reasons. First, there’s a lack of software available to tutor students in this manner. Developing heuristic software is apparently an untapped research area in computational physics pedagogy. A literature search yielded volumes on “intelligent tutoring systems,”⁶,⁷ but very little on solving physics problems.⁸,⁹ (Mastering Physics is a homework submission system tied to many current physics textbooks; while it appears to offer heuristic-like help with end-of-chapter problems, it’s not a generalized problem solver. See www.masteringphysics.com.) Second, current physics texts continue to be written in an analytical fashion, with hundreds of problems calling for analytical solutions following hundreds of pages of analytical lessons. A disconnect appears to exist between parameter-based physics pedagogical software and the analytical emphasis of current physics texts. We’ve developed a software tool that can generate a textual, step-by-step, analytical solution to an end-of-chapter physics problem without any prior knowledge of the problem itself. Our software has successfully generated solutions to one-dimensional kinematics (1DK) problems involving a single object—such as the Porsche and the stone in the problems above. 
The software works by asking questions about the problem and, based on the user's responses, offering instructions that lead to a solution, similar to how an expert system works.¹⁰,¹¹ In short, our software simulates, at a simple level, the professor-student interaction that might occur during a one-on-one tutorial session.

**Software Implementation**

Procedural languages themselves stand as an enormous hindrance in developing the type of heuristic software that we propose. Such software is vague in terms of inputs and outputs and raises many questions: What is the student confused about? What is the nature of the problem? How should we present the problem's facts to the computer? What solution do we need? Attempting to code this software in a rigid, step-by-step procedural language would be unnecessarily labored. Clearly, we need another programming approach.

**Why Prolog?**

A long-ago interest in AI led to our awareness of Prolog and its ability to manipulate symbols while emphasizing a strict logical plan. Prolog enables an entirely different approach to computer programming, using a declarative rather than procedural method. The declarative method doesn't require the programmer to supply the computer with step-by-step instructions. Instead, we must present rules to satisfy and data to be tested against those rules. The Prolog "engine" then works to find relationships among the data that are consistent with the rules. Already this sounds a lot like how we solve a physics problem.

Luckily, in the limited domain of freshman-level 1DK problems, the text of each problem always contains sufficient logical connections to enable a step-by-step analytical solution. This approach doesn't perform any numeric or mathematical computation, nor does it deliver a specific result; it's decidedly nonparameter-based.
Instead, our approach focuses on three key elements for analytically solving a physics problem:

- understanding the proper inter-relationships between kinematic variables,
- eliciting all possible knowledge from the problem text, and
- organizing knowledge learned along the way.

Figure 1 shows a session with our software, in which a student solves the "Porsche" problem above (the student's input is shown in bold). We wrote the Prolog program to contain the rules, which are essentially the 1DK equations.⁴ The student provides the data that the rules are applied to when prompted by the software; the data itself emerges from facts and other clues found in the problem text. The software never requires specific numbers. Instead, it works with either yes/no questions (such as, "Do you know the acceleration of the Porsche?") or short-answer questions (such as, "Where is the Porsche when it starts?" for which the answer might be "at the traffic light"). This data set can grow internally as the program learns things along the way.

Prolog is uniquely suited to these tasks, as it's rule-based, with built-in database capabilities. The rule-based character provides a natural structure for translating the laws of physics (the rules) into the computer, while the database element permits careful knowledge organization as the solution unfolds. The entire program, which can lead to a solution for any freshman-type, single-object, 1DK problem, consists of approximately 100 lines of Prolog code.

**How Prolog Works**

Prolog reached its most widespread adoption in the late 1980s as the language of "artificial intelligence," but it never went mainstream. It has largely become an academic computer language, although several commercial implementations exist, including Visual Prolog (www.visual-prolog.com), Quintus Prolog (www.sics.se/quintus), Logic Programming Associates Prolog (www.lpa.co.uk), and Arity Prolog (www.arity.com).
There are two open-source Prologs: GNU Prolog (www.gprolog.org) and SWI-Prolog (www.swi-prolog.org), which we used in our work. Here, we provide a brief synopsis of the language; interested readers can refer to the books *Programming in Prolog*¹² or *Prolog Programming for Artificial Intelligence*¹³ for more details.

Prolog is a remarkably simple language with almost no formal constructs. At its core, Prolog has a string, called an *atom*, that can't be further processed. An atom is a sequence of lowercase letters, such as *position*, *acceleration*, or *car*. From atoms, developers can build predicates. For example, if we know a car's position in a physics problem, we might declare a *know* predicate which, when connected with the car's position, would be *know(car,position)*. This predicate is a valid Prolog statement, ending in a period, and represents the most basic representation of knowledge about a system (that is, "we know the position of the car"), as in

\begin{verbatim}
know(car,position).
know(car,acceleration).
\end{verbatim}

From predicates, we can define rules, as in

\begin{verbatim}
can_find(X,v) :-
    know(X,v0),
    know(X,a),
    know(X,duration_a_is_applied).
\end{verbatim}

This is a valid Prolog rule; it states that we "can find" \( v \) (the speed) of object \( X \) if (the :- means "if") we know both \( X \)'s \( v_0 \) (initial speed) and \( a \) (acceleration), as well as the time over which \( a \) was applied to the object.

Do you know about a time interval that involves the porsche? [y/n]:
What is the porsche doing at the start of this time interval? at_rest.
What is the porsche doing at the end of this time interval? going_fast.
Do you know the acceleration of the porsche during the time interval from at_rest to going_fast? [y/n]:
Do you know the speed of the porsche when at_rest? [y/n]:
Do you know the speed of the porsche when going_fast?
[y/n]:
Do you know the position of the porsche when at_rest? [y/n]:
Do you know the position of the porsche when going_fast? [y/n]:
Is it possible that an event in this list is closely connected to (or is actually the same as) another event? [at_rest, going_fast] [y/n]:
Using just a few words, can you say anything else that the porsche does in this problem? [y/n]:

Figure 1. The software's input and output for the "Porsche" problem. The student's input is shown in bold.

In its declarative mode, Prolog relentlessly searches for "solutions" to rules based on other rules and any available data. Procedural languages, by contrast, follow a strict path through the given code until terminal conditions are met.

**Prolog and 1D Kinematics Problems**

To develop a physics-problem solver in Prolog, we first needed a framework.

Kinematic Equations

All 1DK problems in a freshman-level textbook (such as Knight's) can be solved with two equations:

\[ x = x_0 + v_0\,\Delta t + \frac{1}{2}\, a\, \Delta t^2 \quad (1) \]

and

\[ v = v_0 + a\, \Delta t, \quad (2) \]

where the usual variable meanings apply.⁵ In forming a program design, we need a strict (and ever-obvious) interpretation of these equations, as follows.

Because these equations allow for advancing \( x \) and \( v \) forward in time, identification of the time interval \( \Delta t \) is of primary importance. Typical 1DK problems might include several different time intervals, each requiring specific identification and ordering to develop a solution. The program can specify these kinematic variables only after these time intervals are clearly defined. For example, the initial position of an object, \( x_0 \), is strictly the object's position at the time interval's beginning, and \( x \) is its position at the time interval's end. Analogous meanings hold for \( v_0 \) and \( v \). The acceleration, \( a \), is assumed constant and acting over the entire time interval.
Unless they're properly connected to some time interval, these kinematic variables can play no logical role in a solution.

Prolog's natural fit with our endeavor is now apparent. We program Equation 1 as a rule into Prolog via the construct

\begin{verbatim}
physics_law(Object,
    [x(Object,Tend), x0(Object,Tstart), v0(Object,Tstart),
     a(Object,Tstart,Tend), dt(Object,Tstart,Tend)],
    'x = x0 + v0*dt + 1/2*a*dt^2').
\end{verbatim}

Here *physics_law* is a Prolog rule containing three parameters: an Object variable, a list of kinematic variables, and textual advice for the student. In this rule, the kinematic variables are tied to an object Object and a time interval (\( \Delta t = T_{end} - T_{start} \)) via Tstart and Tend. Prolog's built-in pattern-matching capabilities let this rule succeed only if the pattern of kinematic variables, which are linked carefully to a time interval, is matched to known data.

To see how this works, consider Prolog's attempt to unify the first term in the physics_law rule's list above: x(Object,Tend). In its knowledge base, Prolog might, for example, find x(porsche,top_speed), meaning "we know the position of the Porsche when it's reached its top speed." This find will instantiate Object to porsche and Tend to top_speed for all subsequent terms in the rule. When it comes to x0(Object,Tstart), Object has already been instantiated to porsche, so it searches for terms matching x0(porsche,Tstart), with Tstart as yet uninstantiated. It might find x0(porsche,at_rest), instantiating Tstart to at_rest.
Thus, Object, Tstart, and Tend are now instantiated and will remain so for the rest of the terms in the rule. Failure of any subgoal will force backtracking, in which case the program will seek alternative solutions to the x0 and x goals.

We again emphasize the organizational structure here. The variable \( x \) can occur only at the end of a time interval, which is denoted here by the time when the Porsche has attained top_speed, according to the student. This end-time boundary must match any other end-time boundaries needed by other kinematic variables if this rule as a whole is to succeed. The same applies for kinematic variables involving start-time boundaries. In traditional "paper and pencil" format, this rule would be written as

\[
x_{\text{porsche},\,\text{top\_speed}} = x_{0\,\text{porsche},\,\text{at\_rest}} + v_{0\,\text{porsche},\,\text{at\_rest}}\,\Delta t_{\text{at\_rest},\,\text{top\_speed}} + \frac{1}{2}\, a_{\text{porsche},\,\text{at\_rest},\,\text{top\_speed}}\,\Delta t^{2}_{\text{at\_rest},\,\text{top\_speed}} \quad (3)
\]

It's true that the subscripting here is excessive, but it's this specific term-by-term organization that lies at the core of our work.

In practice, the physics_law rule succeeds if any three of the four variables in its kinematic-variable list are known. The fourth might then be derived (by the student) using the kinematic equation the rule represents (as the software will advise). A similar written rule exists for Equation 2. Surprisingly, this completes the programming of the kinematic equations into the Prolog program. Prolog's built-in inference engine handles the rest, relentlessly searching all available data as it attempts to find rule versions for which all but one of the variables is known. At that point, the unknown is assumed computable and added to the program's knowledge base.
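The term-by-term matching that physics_law relies on can be mimicked outside Prolog. The following Python sketch is purely illustrative (it is not the authors' code; the facts and the name derivable_position are ours): a fact base of tuples mirrors the Prolog terms, and a position is "derivable" only when every other term tied to the *same* time-interval boundaries is known.

```python
# Facts are tuples mirroring Prolog terms, e.g. ("x0", "porsche", "at_rest").
known = {
    ("x0", "porsche", "at_rest"),
    ("v0", "porsche", "at_rest"),
    ("a",  "porsche", "at_rest", "top_speed"),
    ("dt", "porsche", "at_rest", "top_speed"),
}

def derivable_position(obj, tstart, tend, facts):
    """x(obj, tend) is derivable from x = x0 + v0*dt + 1/2*a*dt^2 only if the
    other terms, all tied to the same (tstart, tend) interval, are known."""
    needed = [
        ("x0", obj, tstart),
        ("v0", obj, tstart),
        ("a",  obj, tstart, tend),
        ("dt", obj, tstart, tend),
    ]
    return all(f in facts for f in needed)

assert derivable_position("porsche", "at_rest", "top_speed", known)
# A mismatched interval boundary makes the pattern fail, just as in Prolog:
assert not derivable_position("porsche", "going_fast", "top_speed", known)
```

The second assertion shows the point of the end-time/start-time bookkeeping: the same variables attached to a different interval boundary do not match.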
**Time-Interval Identification**

In the Porsche problem, there are two time intervals (when the Porsche is accelerating and when it's coasting). The stone problem also has two time intervals (when the stone is falling toward the window from above, and when it is falling past the window). Because of the time intervals' importance, the software aggressively attempts to learn about them as soon as possible. It does so by asking the student what the objects "are doing" at the start and end of each such interval. To answer these questions, the student is forced to examine a given object's actions and devise descriptions of what seems to occur at the beginning and end of each time interval. This effectively defines "time bounds" for the time interval.

In 1DK problems, there are two types of time intervals. The first are explicitly stated, as in the Porsche and stone problems above. The Porsche accelerates for "5 seconds" and the stone falls for "0.30 seconds." As Figure 1 shows, the Porsche's explicit time interval is recognized and bounded by times when the Porsche is "at rest" and "going fast." This tells the software that we know a fact called dt(porsche,at_rest,going_fast), or \( \Delta t_{\text{at\_rest},\,\text{going\_fast}} \). To the software, this means the student knows an actual value of \( \Delta t \) and can use it in a later computation. For the stone, the 0.30 seconds is bounded by expressions such as "top of window" and "bottom of window."

The second type of time interval is not explicitly stated, but is known to exist via a particular set of actions that an object takes or imposes. For example, in the stone problem, there's a time interval during which the stone is falling toward the window, but it's not known how long this interval lasts.
However, even without knowledge of the time interval's magnitude, the student can supply descriptive time-interval boundaries when the program asks whether they "can say anything else about what the stone does in the problem." The student can recognize that, for a part of the problem, the stone "falls toward the window," which is a time interval bounded by the acts of "being released by a hand" and "reaching the top of the window." With this knowledge, the software could discover a dt_bounds(stone,released_by_hand,top_of_window), a time interval that currently has no known magnitude.

In the Prolog code, this is handled by a modified form of the physics_law construct that has the term dt_bounds(Object,Tstart,Tend) (as in "\( \Delta t \) boundaries") in place of dt(Object,Tstart,Tend). The former is always forced to fail as a Prolog fact, because the student can't compute with a time interval of unknown magnitude. The failing dt_bounds predicate instead triggers the software into asking specific questions about the time interval, with the goal of finding the time interval's magnitude from the kinematic equations. Such questions involve asking the student whether \( a \) is known during the time interval, whether \( x \) or \( v \) are known at the time interval's end, and whether \( x_0 \) or \( v_0 \) are known at the time interval's start (alternatively, it could search its internal database for the same information). After gathering such information, the software sees whether a \( \Delta t \) is computable from Equation 1 or 2; if so, it's added to the problem's knowledge base.

**Time-Interval Sequencing and Connections**

At any given point in the problem, several textual descriptions might accumulate in the database, linked to the time intervals' beginnings and ends.
Connecting and sequencing such expressions is another critical step toward an analytical solution. In the Porsche problem (Figure 1), the acceleration time interval ends with the description going_fast, while the coasting time interval begins with no_acceleration. Although these are different descriptions, they actually represent the same time instant. The problem's story line would indicate that the going_fast just precedes the no_acceleration. In other words, the Porsche begins its coasting just as it ends its acceleration. Likewise, the stone is seen to be "reaching the top of the window" (after being released) just as its position is known to be at the "top of window."

The software constantly trolls its database for such descriptions and asks the student if any of them actually represent the same physical time instant. If it knows two time expressions are equivalent (for example, \( t_1 \) and \( t_2 \), with \( t_1 \) just preceding \( t_2 \) in the story line), then the software can immediately connect the two respective time intervals via kinematic variables, as in

\[ x_0(t_2) = x(t_1) \quad (4) \]

and

\[ v_0(t_2) = v(t_1). \quad (5) \]

These equations state that the initial parameters in \( x \) or \( v \) at the beginning of the later time interval can be found from the final like-parameters at the previous time interval's end. We represent Equation 4 as

\begin{verbatim}
physics_law(Object,
    [x(Object,Tend), x0(Object,Tstart)],
    ['x(',Object,',',Tend,') = x0(',Object,',',Tstart,')']) :-
        adjacent_times(Object,Tend,Tstart).
\end{verbatim}

where the rule adjacent_times succeeds if two times (Tstart and Tend) are found in the database and known to be related. Such work is left up to the Prolog engine: the code specifies only a required pattern between any \( x \) and \( x_0 \) (or \( v \) and \( v_0 \)).
If one of the two terms is known, the other can be found and added to the database, with instructions given to the student. Knowledge of \( \text{adjacent_times} \) facts comes directly from the student.

**Kinematic Variables**

With clearly identified time intervals in hand, the software can now ask about specific kinematic variables in their proper context. For example, if it knows the time interval magnitude—either directly or in a derivable sense—between events \( T_{\text{end}} \) and \( T_{\text{start}} \), it can pose a question about \( v \), a kinematic variable that can exist only at the end of a given time interval. The Prolog code

\[
\begin{array}{l}
\text{question(Object, v(Object, Tend)) :-}\\
\quad \text{maybe_dt(Object, \_, Tend),}\\
\quad \text{not(v(Object, Tend)),}\\
\quad \text{affirm(['Do you know the speed of the ', Object, ' when ', Tend, '?'])}
\end{array}
\]

will handle such a question. The \( \text{maybe_dt} \) rule succeeds if the database contains either an explicit or derivable time interval magnitude. The underbar (\_) in this line is the Prolog “anonymous” or “don’t care” variable. Because the code is asking about \( v \), we’re not concerned with the time interval’s beginning, hence the placement of only \( T_{\text{end}} \) in the code. If, for example, \( \text{Object} \) instantiates to \( \text{stone} \) and \( T_{\text{end}} \) to \( \text{reaching_top_of_window} \), the question will read “Do you know the speed of the stone when reaching_top_of_window?” (We find the broken, grammatically incorrect yet understandable sentences a charming aspect of AI applications.) The \( \text{affirm} \) predicate used to pose the question succeeds only if the student answers “yes” to the question. Such questions are also posed for \( v_0 \), \( x_0 \), \( x \), and \( a \).
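The same question-posing guard can be sketched outside Prolog. This Python fragment (an illustrative sketch with hypothetical helper names, not the authors’ code) generates the question only when a time interval ending at the given event exists and the variable is not already known:

```python
# Illustrative sketch of the question rule (hypothetical names, not the
# authors' Prolog): ask about v(Object, Tend) only when a time interval
# ending at Tend exists (maybe_dt) and v is not already in the database.

def speed_question(db, intervals, obj, t_end):
    """Return the question text, or None if it shouldn't be asked."""
    has_dt = any(end == t_end for (_, end) in intervals)  # maybe_dt(_, Tend)
    already_known = ("v", obj, t_end) in db               # not(v(Object, Tend))
    if has_dt and not already_known:
        return f"Do you know the speed of the {obj} when {t_end}?"
    return None

intervals = [("released_by_hand", "reaching_top_of_window")]
q = speed_question({}, intervals, "stone", "reaching_top_of_window")
print(q)  # Do you know the speed of the stone when reaching_top_of_window?
```

The underscore in the interval pattern plays the same “don’t care” role as Prolog’s anonymous variable: only the interval’s end matters when asking about \( v \).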
If any of these rules succeed, then the program inserts this kinematic variable knowledge—which is carefully tied to an object and time interval—into the Prolog database as newly gained knowledge.

**Other Components**

Several lines of code form the software’s core engine. These lines continually drive the software to ask questions, make database assertions, and display any instructions to the student. The software is started with a rule called \( \text{solve} \), which takes a single object name as its parameter—as in \( \text{solve(porsche)} \) or \( \text{solve(stone)} \). This rule starts by extracting questions to ask from any available \( \text{question} \) predicates and updates the database for questions with affirmative answers or those that might generate new time-interval data. The rule \( \text{check_laws} \) is called by \( \text{solve} \) after the program asks appropriate questions. It sequences through all \( \text{physics_law} \) rules, attempting to find one that succeeds (with all but one of its required variables known). A subgoal of \( \text{check_laws} \), called \( \text{check_terms} \), actually checks each kinematic variable’s availability against the database and returns the one that is unknown, which might now be a known (derivable) quantity. In practice, the software might terminate without the student finding an answer. If so, it’s run again (and again) as necessary, forcing the student to rethink previous answers or answer new questions, until he or she gathers enough data from the problem to find what it requires. Repeated runs are generally different, as run \( N \) can benefit from data asserted into the database in run \( N - 1 \). We generated Figure 1 by running the software three times, with the answer finally appearing near the bottom, in the line describing how \( x(\text{porsche, went_farther}) \) is found.

**Running the Software**

The complete Prolog code for our work is available at http://ocean.physics.calpoly.edu/prolog.
You can download the code, \( \text{phys.pl} \), and the supporting input/output routines, \( \text{io.pl} \), and save them in a common folder. With SWI-Prolog installed, you can load and compile the code by typing \( \text{[phys].} \) in response to Prolog’s \( \text{?-} \) prompt. You can then run the software by typing \( \text{solve(porsche).} \) or \( \text{solve(stone).} \) following the next \( \text{?-} \) prompt, to start the program working on what’s computable for that object. Finally, you can rerun the software as needed—via additional \( \text{solve(porsche).} \) or \( \text{solve(stone).} \) entries—at each subsequent \( \text{?-} \) prompt. Prolog’s input/output interface is extremely fragile. All “yes/no” responses must be a single, lowercase “y” or “n” followed by a period (that is, \( \text{y.} \) or \( \text{n.} \)) and the return key. All short-response inputs must be in the form of a valid Prolog atom construct, followed by a period—that is, all lowercase letters, with no spaces or symbols other than the underbar (\_), such as \( \text{at_the_top_of_the_window.} \) (with a period at the end). Although we could have written similar software using a procedural language, Prolog’s natural ability to handle symbolic information in a strict logical setting makes it a compelling platform for this work. We’re particularly impressed with the pattern-matching ability of Prolog’s inference engine, yet find it ever-difficult to write code that maximally exploits this after a lifetime of procedural programming. We’re also intrigued by Prolog’s \( \text{call} \) predicate, which allows data to be executed as code; this enables compact representation of the \( \text{physics_law} \) rules by allowing each member of a \( \text{physics_law} \) list (that is, each data element) to be executed as a Prolog query (code). Our future work includes adapting this code into a Web-ready form, so students can access it via a Web browser.
In addition, we plan to adapt the code to handle kinematics problems involving more than one object, such as “two trains passing each other” and so on. We also see potential for this software to help grade-school students with the dreaded “story problems”—that is, the word problems involving age, money, simple motion, and so on that often stand as a barrier between those who “get math” and those who don’t.

Thomas J. Bensky is an associate professor of physics at California Polytechnic State University in San Luis Obispo, California. His research interests include optics of the ocean and the use of computer graphics and artificial intelligence in undergraduate education. Bensky has a PhD in physics from the University of Virginia. Contact him at tbensky@calpoly.edu.

Catherine A. Taff is a mission systems engineer at The Boeing Company. Her research interests are in computer modeling and simulation. Taff has an MS in physics from San Francisco State University. Contact her at ctaff@sfsu.edu.
Source-free, Machine-checked Validation of Native Code in Coq

Kevin W. Hamlen, The University of Texas at Dallas, hamlen@utdallas.edu
Dakota Fisher, The University of Texas at Dallas, djf1800000@utdallas.edu
Gilmore R. Lundquist, The University of Texas at Dallas, grl0820000@utdallas.edu

ABSTRACT Picinæ is an infrastructure for machine-proving properties of raw native code programs without sources within the Coq program-proof co-development system. This facilitates formal reasoning about binary code that is inexpressible in source languages or for which no source code exists, such as hand-written assembly code or code resulting from binary transformations (e.g., binary hardening or debloating algorithms). Preliminary results validating some highly optimized, low-level subroutines for Intel and ARM architectures using this new framework are presented and discussed.

CCS CONCEPTS • Software and its engineering → Formal software verification; Software reverse engineering; • Security and privacy → Logic and verification.

KEYWORDS formal methods, binary validation, automated theorem proving

## 1 INTRODUCTION

Humans are notoriously error-prone when it comes to reasoning about code. For example, despite aggressive vetting by the open-source community, bugs in the Linux kernel persist for an average of almost 2 years before they are finally discovered [10].
An unprecedented 16,555 CVEs were tabulated by MITRE in 2018 alone.\(^1\) Reasoning about raw native code is even more difficult than analyzing code at the source level. While source code is designed to be human-readable, native code is designed to be machine-readable and is hence often unintuitive to humans. In particular, disassembled machine code typically lacks any structured control-flow idioms, and each assembly operation potentially implements a host of obscure side-effects on the state of the machine. For example, despite having no explicit parameters at the assembly level, the Intel x86 AAD instruction sets the AL register to \(AL' = (AL + 10 \cdot AH) \mod 2^8\), zeros the AH register, replaces the low 16 bits of overlapping registers AX and EAX, assigns SF the 8th bit of AL', sets or clears the ZF flag depending on whether AL' = 0, and sets or clears the PF flag depending on whether the number of 1's in the binary representation of AL' is even or odd. Any or all of these effects can impact the behavior of subsequent instructions. To obtain high confidence about the correctness and safety of complex code, machine-checked formal methods have long been championed as providing the highest attainable level of assurance for code validation. Rather than relying upon error-prone manual inspection of code, or upon semi-automated spot-checking regimens (e.g., fuzzing) that do not exhaustively cover the program’s state space (which is often infinite), formal methods approaches construct machine-checked proofs that are universally quantified over the state space, and that explicitly formalize all assumptions. For example, the Coq program-proof co-development environment has been used to construct the only C-compiler [7] in which compiler bug-checkers (e.g., Csmith) could find no errors. 
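The AAD side-effects enumerated above can be made concrete. The following Python fragment is our own model of the described semantics (not Picinæ IL); it computes the instruction’s AL/AH updates and the derived SF, ZF, and PF flags, omitting the overlapping AX/EAX update for brevity:

```python
# Python model of Intel AAD's side-effects as described above (not
# Picinae IL): AL' = (AL + 10*AH) mod 2^8, AH is zeroed, and the
# SF/ZF/PF flags are derived from AL'.  The AX/EAX low-16-bit update
# is omitted here for brevity.

def aad(al, ah):
    al2 = (al + 10 * ah) % 256
    flags = {
        "SF": (al2 >> 7) & 1,                            # sign: top bit of AL'
        "ZF": int(al2 == 0),                             # zero: AL' == 0
        "PF": int(bin(al2 & 0xFF).count("1") % 2 == 0),  # even number of 1 bits
    }
    return al2, 0, flags                                 # new AL, new AH, flags

al, ah, flags = aad(0x05, 0x03)  # unpacked BCD "35" -> binary 35
print(hex(al), ah, flags["ZF"])  # 0x23 0 0
```

Even this one instruction touches five flag and register locations, which is exactly the kind of hidden state that makes manual reasoning about disassembled code so error-prone.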
Formal methods approaches typically entail developing proofs of source-level code properties, which are then reflected down to the object and native code levels by a certifying compiler—a compiler for which there exists a machine-checkable proof of semantic transparency assuring that semantic source-level properties are preserved by compilation (e.g., [7]). However, these top-down approaches cannot prove properties of codes for which sources do not exist or are unavailable. Mission-critical, source-free codes abound, including - low-level binary runtime libraries (which often derive in part from hand-written assembly code) to which most source codes link at compile- or load-time; - binary code resulting from low-level code transformations, such as VM instrumentation, source-free control-flow integrity [13, 16], program shepherding [6], or binary debloating [5, 11]; and - closed-source libraries and components, which are building blocks for a majority of commodity software. This paper introduces Picinæ (Platform In Coq for INstruction-level Analysis of Executables): a new infrastructure for developing machine-checkable proofs of native code behavioral properties without source code. Picinæ implements the syntax and semantics of an ISA-general intermediate language (IL) in Coq, to which a variety of native code languages can be lifted. To facilitate lifting, a small plug-in for the Binary Analysis Platform (BAP) [1] automatically translates BAP IL code into Picinæ IL, allowing the system to support all architectures currently supported by BAP. Once lifted, theorems and machine-checkable proofs can be developed in Coq to formally validate behavioral properties of the code, such as calling convention adherence, termination, safety (avoidance of “bad” states), and even full correctness. 
\(^1\)www.cvedetails.com/vulnerability-list/year-2018/vulnerabilities.html

As a preliminary case-study, we used Picinæ to machine-validate three heavily optimized subroutines from the GNU C standard libraries for Intel x86 and ARM. Our experiences show that although native code formal validation remains difficult, Picinæ’s infrastructure affords sufficient expressive power to build machine-checkable proofs about code that is beyond the scope of traditional top-down formal methods.

## 2 OVERVIEW

Figure 1 illustrates Picinæ’s high-level workflow. Raw native code is first lifted to an IL data structure expressed as a .v file that is directly readable into Coq. The lifter is not a full disassembler; it exhaustively decodes all valid opcode byte sequences in executable segments of the target program (or, optionally, within a specified address range) without attempting to decide whether the decoded instructions are reachable or aligned. This avoids problems associated with disassembly undecidability. Coq theorems can later declare (and prove) reachability properties of particular instruction sequences under particular conditions. Lifted code is formalized in Coq as a partial function from addresses to pairs of instruction sizes and IL statements, as illustrated in Figure 3. In general, Picinæ’s implementation can be stratified into four levels: (1) Picinæ’s core defines the IL syntax and its basic operational semantics. (2) The symbolic interpreter elaborates these core definitions to build a more efficient transition system for the IL, which can be used within proofs to infer each successive machine state within a program being analyzed. (3) A collection of theory libraries proves foundational properties of the IL that serve as building blocks for constructing proofs of program properties.
(4) At the highest level, a suite of ISA-specific tactic libraries automate and streamline common-case proof steps and notations particular to each architecture. Together, these form a foundation for defining and proving machine-checkable theorems about source-free native code subroutines. In general, any property expressible in Coq’s logic [3] can be reasoned about, including temporal properties of program executions.

\[
\begin{array}{l}
32 \mapsto (2,\ \text{AL} := (\text{AL} + \text{AH} \cdot 10) \bmod 2^8;\ \text{AH} := 0;\\
\qquad\quad \text{EAX} := ((\text{EAX} \gg 16) \ll 16) \mid \text{AX};\\
\qquad\quad \text{SF} := \text{AL} \gg 7;\ \text{ZF} := (\text{AL} = 0);\ \text{PF} := \cdots;\ \text{OF} := \text{CF} := 0)\\
33 \mapsto (2,\ \ldots)\\
34 \mapsto (1,\ \text{ESP} := \text{ESP} + 4;\ \text{jmp}\ \text{M}[\text{ESP} - 4])
\end{array}
\]

Figure 3: Intel x86 program AAD;RET starting at address 32 lifted to Picinæ IL.

## 3 TECHNICAL APPROACH

### 3.1 Core Definitions

Picinæ’s core definitions specify the IL’s basic syntax and operational semantics in about 200 lines of Coq code. This part of the implementation is designed to be small, since it is the trusted foundation on which all other layers depend. The operational semantics are defined in terms of three inductive propositions: one for expressions, one for IL statements, and one for whole-program execution. Expression judgments \((e, \sigma) \Downarrow u\) are large-step, and evaluate an expression \(e\) to a value \(u\) in the context of a machine state \(\sigma\). Statement judgments \((q, \sigma) \vdash (\sigma', \chi)\) are also large-step, computing the machine state \(\sigma'\) that results from executing statement \(q\) in state \(\sigma\), along with an exit status \(\chi\) (viz., fall-through \(\downarrow\), jump-to-address \(\downarrow a\), or exception \(\uparrow t\)).
Program judgments \((a, \sigma) \Rightarrow_n (\sigma', \chi)\) are small-step; they implement the reflexive transitive closure of statement judgments by executing \(n\) machine instructions starting at address \(a\) in state \(\sigma\) to reach state \(\sigma'\) with exit status \(\chi \in \{\downarrow a, \uparrow t\}\). Since expressions include unpredictable values \(*\), all three operational semantics are non-deterministic. For example, judgment \((e, \sigma) \Downarrow u\) asserts that expression \(e\) might evaluate to value \(u\); judgment \((*, \sigma) \Downarrow n\) is therefore derivable for every \(n\) in the value domain. Determinism theorems of the form \((e, \sigma) \Downarrow n_1 \Rightarrow (e, \sigma) \Downarrow n_2 \Rightarrow n_1 = n_2\) can establish that certain expressions \(e\) have only one possible value. Machine states \(\sigma : \nu \rightarrow u\) are mappings from state variables \(\nu\) to values \(u\) (viz., binary numbers or arrays of binary numbers). Each ISA defines its own universe of state variables \(\nu\) and their types. This affords reasoning about many different ISAs or combinations of ISAs for cross-platform software analysis within a common logical framework.

### 3.2 Symbolic Interpreter

The inductive propositions that comprise Picinæ’s core are succinct (in order to minimize the trusted computing base), but are expressed at a level too low for easy state manipulation in large proofs. To allow users to work at a higher level, a symbolic interpreter is implemented that steps an abstract machine state (possibly containing Coq proof meta-variables) by executing a specified number of machine instructions within a Coq proof context. Interpreting code that branches introduces multiple proof goals to the context—one for each possible branch destination.
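The goal-splitting behavior just described can be sketched abstractly. This Python fragment (an illustrative model with names of our own choosing, not Picinæ’s Coq implementation) forks one symbolic goal into two at a conditional branch, attaching the branch condition or its negation as a hypothesis on each resulting goal:

```python
# Illustrative sketch (not Picinae itself): symbolically interpreting a
# two-way branch splits one proof goal into two, one per destination,
# each carrying the branch condition (or its negation) as a hypothesis.

def step_branch(goal, cond, addr_true, addr_false):
    """Fork a symbolic goal at a conditional branch into two goals."""
    state, hyps = goal
    return [
        (dict(state, pc=addr_true), hyps + [f"{cond} is true"]),
        (dict(state, pc=addr_false), hyps + [f"{cond} is false"]),
    ]

goals = step_branch(({"pc": 32}, []), "ZF = 1", 40, 34)
print(len(goals))         # 2
print(goals[0][0]["pc"])  # 40
```

In the real system each such goal is a Coq subgoal, and the accumulated hypotheses are what allow the prover to later discharge impossible branches.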
Interpreting a computed jump leads to an abstract state that must be refined by the prover (e.g., by case distinction on the program counter address) in order to soundly reduce the destination set to a finite set of possible targets before interpretation can continue. Interpretation of programs containing non-deterministic expressions \(*\) introduces proof meta-variables to represent the (unknown) values of those expressions, along with hypotheses that constrain the unknowns. For example, after symbolically interpreting the statement at address 32 in Fig. 3, the proof context contains new hypotheses of the form \(x : \mathbb{N},\ \sigma(PF) = x,\ x < 2\), where \(x\) is a fresh Coq proof variable. In total, the symbolic interpreter and its proof of correctness comprise about 1000 lines of Coq code. For efficiency, it is implemented as functional Gallina code launched using Coq's `vm_compute` tactic.

### 3.3 Theory Libraries

The power and ease of machine-assisted validation depends largely on the power and scope of the framework's library of proved theorems. Although Picinæ is still in a stage of early development, its proof library already consists of about 3,500 lines of Coq theorems, definitions, and proofs, divided into the following major sections:

- **Inductive schemas** provide proof principles for Floyd-Hoare style inductive reasoning about pre-conditions, invariants, and post-conditions.
- A **static semantics** proves type soundness of lifted IL and establishes appropriate bounds on numeric machine state element values.
- A **library of two's complement arithmetic** facilitates reasoning about signed and unsigned modular arithmetic operations and their effects upon the binary representations of numbers.
- A **collection of determinism lemmas** automatically identifies deterministic expressions and instructions and facilitates deterministic proof development in common cases.
- **Monotonicity** theorems allow proofs to soundly reason about architectures in which only a subset of state components are known, programs in which only a subset of instructions have been lifted, and machine states in which only a subset of state component values are known. This facilitates modular, incremental reasoning about ISAs, programs, and states.

In order to explain the case-studies that follow, we here limit our focus to describing the first of these sections. In Picinæ, correctness theorems about subroutines are typically expressed as an inductive set \(I : a \rightarrow (\sigma \rightarrow \text{Prop})\), which is a partial function from addresses to machine state propositions, paired with a post-condition \(Q : \sigma \rightarrow \text{Prop}\) (a machine state proposition). Invariant set \(I\) includes the subroutine pre-condition \(I(a_0)\), where \(a_0\) is the subroutine's entry point address. The post-condition is a proposition that is asserted to be true whenever execution reaches the subroutine's return address, as defined by the architectural conventions of the ISA. For example, on 32-bit Intel architectures, a subroutine's return address is the address \(a\) satisfying proposition \(\mathrm{Ret}(a) := (\sigma_0(M)[\sigma_0(\mathrm{ESP})] = a)\), where \(\sigma_0\) is the machine state on entry to the subroutine. With this formalization, partial correctness theorems have the form:

\[
I(a_0)\,\sigma_0 \Rightarrow (a_0, \sigma_0) \Rightarrow_n (\sigma', \downarrow a) \Rightarrow (I(a) = P \rightarrow P\,\sigma') \wedge (\mathrm{Ret}(a) \rightarrow Q\,\sigma')
\]

which asserts that if the pre-condition is satisfied on entry, then whenever execution reaches any address \(a\) in any state \(\sigma'\) after any number of steps \(n\), the invariant assigned to \(a\) (if any) and the post-condition (if \(a\) is the return address) are satisfied. Total correctness theorems additionally assert termination: satisfying the pre-condition guarantees that execution reaches the return address after some number of steps \(n\).
Picinæ’s \texttt{prove_invs} theorem reduces such proof goals to a set of \(|I|\) proof cases—one case for each invariant in \(I\). Each case starts symbolic execution at an invariant point in an abstract state satisfying the invariant, and challenges the user to prove that execution inevitably reaches another invariant point in a state satisfying the reached invariant, or exits the subroutine in a state that satisfies the post-condition. Thus, applying \texttt{prove_invs} to a partial or total correctness assertion launches an inductive proof that verifies that \( Q \) and all invariants in \( I \) are satisfied whenever they are reached. This is the core inductive schema for most correctness and safety proofs.

### 3.4 ISA Tactic Libraries

While the core IL semantics, symbolic interpretation engine, and IL theory libraries are powerful enough to prove facts about programs from arbitrary ISAs, each ISA has specialized definitions (e.g., calling conventions), common cases (e.g., IL expressions deserving auto-simplification), and notations (e.g., assembly syntaxes) that deserve specialized treatment. Each supported ISA therefore has an ISA-specific supporting module that defines the universe \( \nu \) of machine state components and their types, specializes the general IL machinery to ISA-specific tasks, introduces proof notations for improved readability, and adds auto-simplification heuristics for easier program analysis. The ISA-specific tactic libraries for the Intel x86 and ARM32 v7 ISAs consist of about 550 and 500 (respectively) lines of Coq definitions, theorems, proofs, and tactics. These two ISAs are the basis for the case-studies presented in the next section.

## 4 CASE STUDIES

As a preliminary evaluation of our approach, we machine-verified three heavily optimized native code subroutines extracted from the GNU standard C libraries for Intel x86 and ARM32 using Picinæ.
Each was lifted to Picinæ IL using BAP with our BIL-to-Picinæ plug-in, and proof development was conducted interactively using Coq v8.8 for Windows with CoqIDE. #### 4.1 ARM String-length Figure 4 lists the assembly code for the binary implementation of strlen. Like most aggressively optimized codes, it has an unintuitive structure. Instead of reading one byte at a time from memory, it reads four bytes at a time and tests each constituent byte of the loaded 32-bit word for nullity before proceeding to the next word. To improve cache alignment, lines 1–4 first round input pointer \( r0 \) down to the nearest word boundary and read four bytes from there into \( r2 \). Lines 5–10 then set all bits within \( r2 \) that precede the start of the string, so that the main loop will disregard them. This works because memory access permission granularity is per-page, and pages are multiples of the word size, ensuring that reading these extra bytes never raises a spurious access exception. The main loop (lines 11–17) uses a chain of conditionally executed instructions to test each byte of the loaded word for nullity without explicit conditional branches. In particular, null detection sets the Z flag as a side-effect, preventing the remaining conditional instructions in the loop body from executing, and thereby exiting the loop. Concluding lines 18–24 use a similar strategy to determine which null byte caused the loop to exit, assign the computed length to \( r0 \), and return to the caller. We proved total correctness of this subroutine in about 230 lines of Coq definitions, theorems, and proofs. More than half of the proof is devoted to proving abstract facts about bit arithmetic; only about 80 lines regard the computational aspects of the code. 
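The word-at-a-time strategy described above can be sketched in portable C. This is a hedged illustration with invented names (`strlen_wordwise`, `word_has_null`), not the ARM code itself: plain C cannot safely replicate the round-down-and-mask alignment trick, so this version scans byte-wise up to the first word boundary and assumes the caller's buffer is padded out to a word boundary (the role played by per-page access granularity in the assembly version).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Classic bit trick: (w - 0x01010101) & ~w & 0x80808080 is nonzero
 * iff some byte of the 32-bit word w is zero. */
static int word_has_null(uint32_t w) {
    return ((w - 0x01010101u) & ~w & 0x80808080u) != 0;
}

/* Word-at-a-time strlen sketch. Caller must ensure the buffer is
 * padded to a 4-byte boundary past the terminator, since up to 3
 * bytes beyond the null may be read. */
size_t strlen_wordwise(const char *s) {
    size_t n = 0;
    /* Scan byte-wise until the pointer is 4-byte aligned. */
    while (((uintptr_t)(s + n) & 3u) != 0) {
        if (s[n] == '\0') return n;
        n++;
    }
    /* Then test one aligned 32-bit word per iteration. */
    for (;;) {
        uint32_t w;
        memcpy(&w, s + n, sizeof w);   /* aligned, aliasing-safe load */
        if (word_has_null(w)) {
            /* A null lies in these 4 bytes; locate it byte-wise. */
            while (s[n] != '\0') n++;
            return n;
        }
        n += 4;
    }
}
```

The `word_has_null` condition is exactly the kind of bit-arithmetic fact whose formal proof dominates the verification effort described below.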
This highlights a recurring theme that we have experienced in many of our experiments: correctness of optimized native code frequently depends upon a host of obscure facts about binary arithmetic and logical operations that are not easily provable from any existing Coq library. Building a more comprehensive library of proved theorems relevant to assembly-level binary arithmetic is therefore important for scaling our approach to larger programs. Our proof assumes as a pre-condition that initial state \( s_0 \) satisfies the architectural calling conventions, and we assign a single invariant to line 11:
\[
\exists k,\; \sigma(r0) = p \oplus (4k - p \bmod 4) \;\land\; \sigma(r1) = p \oplus (4(k+1) - p \bmod 4) \;\land\;
\sigma(r2) = m(p \oplus (4k - p \bmod 4)) \mid (\text{if } k = 0 \text{ then } 2^{8(p \bmod 4)} - 1 \text{ else } 0) \;\land\;
\forall i,\; i < 4k - p \bmod 4 \rightarrow m(p \oplus i) \neq 0
\]
where \( p = s_0(r0) \), \( m \) is the memory of \( s_0 \), and \( \mid \) denotes bitwise-or. This asserts that \( r0 \) points to the most recently read memory-word, \( r1 \) points to the next word, \( r2 \) contains the most recently read word (possibly with low-order bits set if \( k = 0 \)), and all bytes between \( p \) and \( r1 \) are non-null. Post-condition \( Q \) is defined by
\[
m(p \oplus \sigma(r0)) = 0 \;\land\; \forall i,\; i < \sigma(r0) \rightarrow m(p \oplus i) \neq 0
\]
which asserts that \( r0 \) holds the length of string \( p \) on exit. The use of 32-bit modular addition \( \oplus \) in these predicates insulates them against a peculiar corner case: the subroutine’s search for nulls could wrap around the end of the address space, leading to unusual conditions such as \( \sigma(r1) < p \). However, if one considers byte sequences that wrap around the end of the address space to be legal strings, then the subroutine is nevertheless correct (but only because its memory-reads are all word-aligned, and therefore no individual load ever spans the address-space limit).
We discovered this complication late in the validation effort, forcing us to change our proof strategy. The main proof applies Picinæ’s prove_invs tactic (see §3.3) to reduce the correctness theorem to two subgoals—one for the pre-condition and one for the invariant. Applying the symbolic interpreter to the latter yields five sub-goals: four for the possible exit paths from the loop and a fifth that cycles back to the invariant. All but the last of these subgoals are solved by proving a null_terminate lemma showing that finding a null in byte \( j \) of the loaded word equates to finding a null at index \( 4k + j \) of the string. (Verifying this property of bit arithmetic constitutes the majority of the proof logic.) The final subgoal is solved by proving that the state reached by the symbolic interpreter satisfies the loop invariant.

### 4.2 ARM Memset

We next verified a subroutine with even more complex optimizations: memset. The control flow for memset consists of three loops and an additional branch (which skips the first two loops if the buffer has fewer than 8 bytes), comprising 31 instructions in total. The first loop stores individual bytes to reach the first word boundary. This prepares the second loop, which stores two words per store instruction, aligned at word boundaries. The second loop is unrolled four times, with each iteration storing up to 8 words (32 bytes) total. The third loop performs the remainder of the stores, and is also unrolled four times. Like strlen, memset uses conditional execution extensively to avoid branching in the unrolled loops; 20 of the 31 instructions are conditional. All three loops use register \( r2 \) to track the number of remaining bytes. Register \( r1 \) holds the character value to be stored in each byte. The setup for the second loop duplicates the least 8 bits of \( r1 \) to fill the 32-bit register.
Register \( r1 \) is then copied to \( r12 \) to store two words with one instruction, and \( r3 \) holds the address for the next store in memory. Given starting address \( p \), the character \( c \) to be stored, and the length \( n \) of the buffer, all three loop invariants share a common predicate:
\[
\sigma(r3) \oplus \sigma(r2) = p \oplus n \;\land\; \sigma(r1) \bmod 2^8 = c \;\land\; \forall i,\; i < \sigma(r3) \ominus p \rightarrow m(p \oplus i) = c
\] (5)
The second loop’s invariant conjoins this with:
\[
\sigma(r1) = \sigma(r12) = c \mid (c \ll 8) \mid (c \ll 16) \mid (c \ll 24) \;\land\; \sigma(r3) \bmod 4 = 0
\] (6)
The postcondition of the program is:
\[
\forall i,\; i < n \rightarrow m(p \oplus i) = c
\] (7)
Proving that the implementation is correct is straightforward except for two optimizations that raise significant complications: the duplication of \( c \) to obtain a word filled with \( c \), and the second loop’s unusual use of arithmetic underflow to conditionally exit. Both entail proving tricky properties of bit arithmetic. The word containing copies of \( c \) is formed using bitwise-and to cast \( r1 \) to a byte, and then using bitwise-or and shift-left to duplicate it. Validation must therefore prove that extracting any byte of the resulting word yields the original \( c \). To conditionally exit, the second loop subtracts 8 from \( r2 \) to modify an underflow status flag that conditions the execution of the remaining loop instructions. To restore \( r2 \) after the subtraction without corrupting the underflow flag, a bitwise-and clears the least significant 3 bits. Proving that this actually restores \( r2 \) contributes significant complexity to the proof. In total, the memset correctness proof is composed of about 700 lines, of which about 500 are devoted to bit arithmetic properties.
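The byte-duplication obligation can be illustrated concretely in C. This is a hedged sketch: `fill_word` and `byte_of` are invented helper names mirroring the proof obligation, not the subroutine itself.

```c
#include <stdint.h>

/* Duplicate the low byte of c into all four bytes of a 32-bit word,
 * exactly as the ARM setup code does: bitwise-and to cast to a byte,
 * then shift-left and bitwise-or to replicate it. */
uint32_t fill_word(uint32_t c) {
    c &= 0xFFu;                              /* cast r1 to a byte */
    return c | (c << 8) | (c << 16) | (c << 24);
}

/* Extract byte i (0 = least significant) of a 32-bit word. */
uint32_t byte_of(uint32_t w, unsigned i) {
    return (w >> (8u * i)) & 0xFFu;
}
```

The proof obligation in the text is precisely that `byte_of(fill_word(c), i) == (c & 0xFF)` for every `i` in 0..3.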
Thus, as with the strlen experiment, the bulk of the proof’s complexity is dominated by proofs of obscure numerical properties of bit arithmetic, after which the computational aspects (e.g., control-flows) are relatively easy to validate.

### 4.3 Intel x86 String-compare

As a contrast to the previous two experiments, we validated a simpler subroutine on a more complex ISA: Intel x86. Most Intel instructions have many side-effects upon the machine state, potentially complicating proofs about them. However, the implementation of strcmp in the x86 GNU standard libraries is straightforward: its 15 instructions implement a single loop that reads one byte at a time from both strings, comparing each and returning the difference between the first unequal pair. It therefore offers a good test of how proof complexity scales without the intrusion of problematic bit arithmetic. Proving correctness of this subroutine was relatively easy. The proof consists of about 15 lines of definitions that formally specify lexicographical ordering of strings, followed by about 50 lines of proof. About 20 of those lines are boilerplate proof-initialization tactics that are mostly reused between proofs, leaving only 30 lines of genuine proof effort. These were completed by an expert user in about 1 hour. This indicates that the scalability challenges of machine-checked native code validation mainly revolve around the presence of aggressive optimizations in the code being validated. Picinæ’s automation features allow complexities of the ISA to be mostly ignored until they are leveraged by optimized code to implement unusual, highly efficient solutions to programming tasks.

### 5 RELATED WORK

Few prior works have attempted to machine-validate source-free native codes. RockSalt [8] models a reduced subset of x86 to prove safety of the Google Native Client [15] sandbox. XCAP [9] models a different x86 subset to verify OS-level context management subroutines.
Bedrock [2] implements an IL similar to Picinæ’s for validating compiler back-ends. Picinæ differs from these prior efforts in that it establishes a foundation for machine-validating arbitrary codes, including instruction sequences not emitted by any compiler. ### 6 CONCLUSION Validating raw native code without sources is extremely difficult. There are presently few tools for doing so, and most rely on unvalidated software components (e.g., back-ends of compilers for which no machine-checked proof of correctness has ever been developed). Picinæ fills this gap by providing a fully machine-validated framework for reasoning about binary code lifted to an ISA-agnostic IL. Preliminary experiments validating three low-level subroutines from the GNU C standard libraries show that although formal validation remains challenging, Picinæ’s automation facilities nevertheless make it feasible to specify and prove correctness of highly optimized code for both RISC and CISC architectures. Our experiences indicate that the primary roadblock for scalability of the approach is the inadequacy of existing proof libraries about binary arithmetic for reasoning about many bit-arithmetic properties leveraged by native code to realize optimized computations. Future work should therefore seek to identify and prove more comprehensive, automatable theory libraries for binary arithmetic in order to ease low-level code verification tasks. ACKNOWLEDGMENTS The research reported herein was supported in part by ONR Award N0014-17-1-2995, AFRL Award FA8750-15-C-0066 (PA #2019-4224), DARPA Award FA8750-19-C-0006, NSF Award #1513704, and an endowment from the Eugene McDermott family. Any opinions, recommendations, or conclusions presented are those of the authors and not necessarily of the aforementioned supporters. REFERENCES
Threads & Concurrency

Threads - Overview - Multicore Programming - Multithreading Models - Thread Libraries - Implicit Threading - Threading Issues - Operating System Examples

Objectives • To introduce the notion of a thread—a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems • To discuss the APIs for the Pthreads, Windows, and Java thread libraries • To explore several strategies that provide implicit threading • To examine issues related to multithreaded programming • To cover operating system support for threads in Windows and Linux

Process Address Space [figure: a process address space from address 0 up to max, with the program counter (PC) pointing into it]

Process Control Block (PCB) Information associated with each process (also called task control block) - Process state – running, waiting, etc. - Program counter – location of instruction to execute next - CPU registers – contents of all process-centric registers - CPU scheduling information – priorities, scheduling queue pointers - Memory-management information – memory allocated to the process - Accounting information – CPU used, clock time elapsed since start, time limits - I/O status information – I/O devices allocated to process, list of open files

Multiple Processors [figure: two CPUs (A and B), each with its own PC, general-purpose registers, and cache, sharing one address space containing multiple threads]

Motivation • Most modern applications are multithreaded • Threads run within application • Multiple tasks within the application can be implemented by separate threads – Update display – Fetch data – Spell checking – Answer a network request • Process creation is heavy-weight while thread creation is light-weight • Can simplify code, increase efficiency • Kernels are generally multithreaded

Multithreaded Server Architecture 1. Request from client to server. 2. Server creates a new thread to service the request. 3. Thread resumes listening for additional client requests.
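The per-request threading pattern above can be sketched with POSIX threads. This is a hedged illustration (`service_request` and `serve_requests` are invented names; a real server would accept requests from a socket rather than a loop):

```c
#include <pthread.h>

enum { MAX_REQUESTS = 16 };
static int served[MAX_REQUESTS];   /* each thread writes its own slot */

/* One thread services one simulated request. */
static void *service_request(void *arg) {
    int id = *(int *)arg;
    served[id] = 1;                /* simulate handling the request */
    return NULL;
}

/* "Server": create a thread per request, resume accepting, then join.
 * Returns the number of requests successfully serviced. */
int serve_requests(int n) {
    pthread_t tids[MAX_REQUESTS];
    int ids[MAX_REQUESTS];
    if (n > MAX_REQUESTS) n = MAX_REQUESTS;
    for (int i = 0; i < n; i++) {
        ids[i] = i;
        if (pthread_create(&tids[i], NULL, service_request, &ids[i]) != 0)
            return -1;             /* thread creation failed */
    }
    int done = 0;
    for (int i = 0; i < n; i++) {
        pthread_join(tids[i], NULL);  /* join establishes visibility */
        done += served[i];
    }
    return done;
}
```

Because each thread writes a distinct slot and `pthread_join` orders those writes before the parent's reads, no locking is needed in this sketch.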
Benefits • **Responsiveness** – may allow continued execution if part of process is blocked, especially important for user interfaces • **Resource Sharing** – threads share resources of process, easier than shared memory or message passing • **Economy** – cheaper than process creation, thread switching lower overhead than context switching • **Scalability** – process can take advantage of multiprocessor architectures

Multicore Programming • **Multicore** or **multiprocessor** systems putting pressure on programmers, challenges include: – Dividing activities – Balance – Data splitting – Data dependency – Testing and debugging • **Parallelism** implies a system can perform more than one task simultaneously • **Concurrency** supports more than one task making progress – Single processor / core, scheduler providing concurrency

Concurrent execution on single-core system: - T1, T2, T3, T4, T1, T2, T3, T4, T1, ... Parallelism on a multi-core system: - Core 1: T1, T3, T1, T3, T1, ... - Core 2: T2, T4, T2, T4, T2, ...

Copyright 2018 Silberschatz, Galvin & Gagne

Multicore Programming (Cont.)
• Types of parallelism – **Data parallelism** – distributes subsets of the same data across multiple cores, same operation on each – **Task parallelism** – distributing threads across cores, each thread performing unique operation • As # of threads grows, so does architectural support for threading – CPUs have cores as well as *hardware threads* – Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per core

Single and Multithreaded Processes [figure: a single-threaded process versus a multithreaded process sharing code, data, and files]

Amdahl’s Law - Identifies performance gains from adding additional cores to an application that has both serial and parallel components - S is serial portion - N processing cores - speedup ≤ 1 / (S + (1 − S) / N) - That is, if application is 75% parallel / 25% serial, moving from 1 to 2 cores results in speedup of 1.6 times - As N approaches infinity, speedup approaches 1 / S - Serial portion of an application has disproportionate effect on performance gained by adding additional cores - But does the law take into account contemporary multicore systems?
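The speedup bound above, 1 / (S + (1 − S)/N), can be checked numerically. A small hedged sketch (`amdahl_speedup` is an invented helper name):

```c
/* Amdahl's Law: upper bound on speedup from N cores when a fraction
 * S of the work is inherently serial. */
double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}
```

With S = 0.25 and N = 2 this gives 1 / (0.25 + 0.375) = 1.6, matching the slide's example, and as N grows the result approaches the 1/S = 4 ceiling.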
User Threads and Kernel Threads - **User threads** - management done by user-level threads library - Three primary thread libraries: - POSIX Pthreads - Windows threads - Java threads - **Kernel threads** - Supported by the Kernel - Examples – virtually all general purpose operating systems, including: - Windows - Solaris - Linux - Tru64 UNIX - Mac OS X

Multithreading Models • Many-to-One • One-to-One • Many-to-Many

Many-to-One • Many user-level threads mapped to single kernel thread • One thread blocking causes all to block • Multiple threads may not run in parallel on multicore system because only one may be in kernel at a time • Few systems currently use this model • Examples: – Solaris Green Threads – GNU Portable Threads

One-to-One • Each user-level thread maps to kernel thread • Creating a user-level thread creates a kernel thread • More concurrency than many-to-one • Number of threads per process sometimes restricted due to overhead • Examples – Windows – Linux – Solaris 9 and later

Many-to-Many Model • Allows many user level threads to be mapped to many kernel threads • Allows the operating system to create a sufficient number of kernel threads • Solaris prior to version 9 • Windows with the ThreadFiber package

Two-level Model • Similar to M:M, except that it allows a user thread to be **bound** to a kernel thread • Examples – IRIX – HP-UX – Tru64 UNIX – Solaris 8 and earlier

Thread Libraries - **Thread library** provides programmer with API for creating and managing threads - Two primary ways of implementing - Library entirely in user space - Kernel-level library supported by the OS

Pthreads • May be provided either as user-level or kernel-level • A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization • *Specification*, not *implementation* • API specifies behavior of the thread library; implementation is up to the developers of the library • Common in UNIX operating systems (Solaris, Linux, Mac OS X)

Pthreads Example #include <pthread.h>
#include <stdio.h> int sum; /* this data is shared by the thread(s) */ void *runner(void *param); /* threads call this function */ int main(int argc, char *argv[]) { pthread_t tid; /* the thread identifier */ pthread_attr_t attr; /* set of thread attributes */ if (argc != 2) { fprintf(stderr,"usage: a.out <integer value>\n"); return -1; } if (atoi(argv[1]) < 0) { fprintf(stderr,"%d must be >= 0\n",atoi(argv[1])); return -1; }

Pthreads Example (Cont.)

```c
/* get the default attributes */
pthread_attr_init(&attr);
/* create the thread */
pthread_create(&tid,&attr,runner,argv[1]);
/* wait for the thread to exit */
pthread_join(tid,NULL);
printf("sum = %d\n",sum);
}

/* The thread will begin control in this function */
void *runner(void *param)
{
  int i, upper = atoi(param);
  sum = 0;
  for (i = 1; i <= upper; i++)
    sum += i;
  pthread_exit(0);
}
```

Pthreads Code for Joining 10 Threads

```c
#define NUM_THREADS 10
/* an array of threads to be joined upon */
pthread_t workers[NUM_THREADS];

for (int i = 0; i < NUM_THREADS; i++)
  pthread_join(workers[i], NULL);
```

Windows Threads Example

#include <windows.h> #include <stdio.h> DWORD Sum; /* data is shared by the thread(s) */ /* the thread runs in this separate function */ DWORD WINAPI Summation(LPVOID Param) { DWORD Upper = *(DWORD*)Param; for (DWORD i = 0; i <= Upper; i++) Sum += i; return 0; } int main(int argc, char *argv[]) { DWORD ThreadId; HANDLE ThreadHandle; int Param; if (argc != 2) { fprintf(stderr,"An integer parameter is required\n"); return -1; } Param = atoi(argv[1]); if (Param < 0) { fprintf(stderr,"An integer >= 0 is required\n"); return -1; } /* create the thread */ ThreadHandle = CreateThread( NULL, /* default security attributes */ 0, /* default stack size */ Summation, /* thread function */ &Param, /* parameter to thread function */ 0, /* default creation flags */ &ThreadId); /* returns the thread identifier */ if (ThreadHandle != NULL) { /* now wait for the thread to finish */ WaitForSingleObject(ThreadHandle, INFINITE); /* close the
thread handle */ CloseHandle(ThreadHandle); printf("sum = %d\n", Sum); } } Java Threads - Java threads are managed by the JVM - Typically implemented using the threads model provided by underlying OS - Java threads may be created by: ```java public interface Runnable { public abstract void run(); } ``` - Extending Thread class - Implementing the Runnable interface Java Multithreaded Program class Sum { private int sum; public int getSum() { return sum; } public void setSum(int sum) { this.sum = sum; } } class Summation implements Runnable { private int upper; private Sum sumValue; public Summation(int upper, Sum sumValue) { this.upper = upper; this.sumValue = sumValue; } public void run() { int sum = 0; for (int i = 0; i <= upper; i++) sum += i; sumValue.setSum(sum); } } public class Driver { public static void main(String[] args) { if (args.length > 0) { if (Integer.parseInt(args[0]) < 0) System.err.println(args[0] + " must be >= 0."); else { Sum sumObject = new Sum(); int upper = Integer.parseInt(args[0]); Thread thrd = new Thread(new Summation(upper, sumObject)); thrd.start(); try { thrd.join(); System.out.println ("The sum of " + upper + " is " + sumObject.getSum()); } catch (InterruptedException ie) { } } } else System.err.println("Usage: Summation <integer value>"); } } Implicit Threading • Growing in popularity as numbers of threads increase, program correctness more difficult with explicit threads • Creation and management of threads done by compilers and run-time libraries rather than programmers • Three methods explored – Thread Pools – OpenMP – Grand Central Dispatch • Other methods include Microsoft Threading Building Blocks (TBB), java.util.concurrent package Thread Pools • Create a number of threads in a pool where they await work • Advantages: – Usually slightly faster to service a request with an existing thread than create a new thread – Allows the number of threads in the application(s) to be bound to the size of the pool – Separating task to be 
performed from mechanics of creating task allows different strategies for running task • i.e. Tasks could be scheduled to run periodically • Windows API supports thread pools:

```c
DWORD WINAPI PoolFunction(LPVOID Param) {
  /*
   * this function runs as a separate thread.
   */
}
```

OpenMP - Set of compiler directives and an API for C, C++, FORTRAN - Provides support for parallel programming in shared-memory environments - Identifies parallel regions – blocks of code that can run in parallel

```c
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  /* sequential code */
  #pragma omp parallel
  {
    /* parallel code */
    printf("I am a parallel region.");
  }
  /* sequential code */
  return 0;
}
```

Grand Central Dispatch • Apple technology for Mac OS X and iOS operating systems • Extensions to C, C++ languages, API, and run-time library • Allows identification of parallel sections • Manages most of the details of threading • Block is in “^{ }” - ^{ printf("I am a block"); } • Blocks placed in dispatch queue – Assigned to available thread in thread pool when removed from queue

Grand Central Dispatch • Two types of dispatch queues: – serial – blocks removed in FIFO order, queue is per process, called main queue • Programmers can create additional serial queues within program – concurrent – removed in FIFO order but several may be removed at a time • Three system wide queues with priorities low, default, high

```c
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{ printf("I am a block."); });
```

Threading Issues • Semantics of `fork()` and `exec()` system calls • Signal handling – Synchronous and asynchronous • Thread cancellation of target thread – Asynchronous or deferred • Thread-local storage • Scheduler Activations

Semantics of fork() and exec() • Does `fork()` duplicate only the calling thread or all threads?
– Some UNIXes have two versions of fork • `exec()` usually works as normal – replace the running process including all threads Signal Handling - **Signals** are used in UNIX systems to notify a process that a particular event has occurred. - **A signal handler** is used to process signals - Signal is generated by particular event - Signal is delivered to a process - Signal is handled by one of two signal handlers: - default - user-defined - Every signal has **default handler** that kernel runs when handling signal - **User-defined signal handler** can override default - For single-threaded, signal delivered to process Signal Handling (Cont.) - Where should a signal be delivered for multi-threaded? - Deliver the signal to the thread to which the signal applies - Deliver the signal to every thread in the process - Deliver the signal to certain threads in the process - Assign a specific thread to receive all signals for the process Thread Cancellation • Terminating a thread before it has finished • Thread to be canceled is **target thread** • Two general approaches: – **Asynchronous cancellation** terminates the target thread immediately – **Deferred cancellation** allows the target thread to periodically check if it should be cancelled • Pthread code to create and cancel a thread: ```c pthread_t tid; /* create the thread */ pthread_create(&tid, 0, worker, NULL); /* cancel the thread */ pthread_cancel(tid); ``` Thread Cancellation (Cont.) 
- Invoking thread cancellation requests cancellation, but actual cancellation depends on thread state <table> <thead> <tr> <th>Mode</th> <th>State</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>Off</td> <td>Disabled</td> <td>–</td> </tr> <tr> <td>Deferred</td> <td>Enabled</td> <td>Deferred</td> </tr> <tr> <td>Asynchronous</td> <td>Enabled</td> <td>Asynchronous</td> </tr> </tbody> </table> - If thread has cancellation disabled, cancellation remains pending until thread enables it - Default type is deferred - Cancellation only occurs when thread reaches cancellation point - i.e. `pthread_testcancel()` - Then cleanup handler is invoked - On Linux systems, thread cancellation is handled through signals Thread-Local Storage - **Thread-local storage** (TLS) allows each thread to have its own copy of data. - Useful when you do not have control over the thread creation process (i.e., when using a thread pool). - Different from local variables: - Local variables visible only during single function invocation. - TLS visible across function invocations. - Similar to **static** data: - TLS is unique to each thread. Scheduler Activations • Both M:M and Two-level models require communication to maintain the appropriate number of kernel threads allocated to the application • Typically use an intermediate data structure between user and kernel threads – **lightweight process (LWP)** – Appears to be a virtual processor on which process can schedule user thread to run – Each LWP attached to kernel thread – How many LWPs to create? 
• Scheduler activations provide **upcalls** - a communication mechanism from the kernel to the **upcall handler** in the thread library • This communication allows an application to maintain the correct number of kernel threads

Operating System Examples • Windows Threads • Linux Threads

Windows Threads • Windows implements the Windows API – primary API for Win 98, Win NT, Win 2000, Win XP, and Win 7 • Implements the one-to-one mapping, kernel-level • Each thread contains – A thread id – Register set representing state of processor – Separate user and kernel stacks for when thread runs in user mode or kernel mode – Private data storage area used by run-time libraries and dynamic link libraries (DLLs) • The register set, stacks, and private storage area are known as the context of the thread

Windows Threads (Cont.) • The primary data structures of a thread include: – ETHREAD (executive thread block) – includes pointer to process to which thread belongs and to KTHREAD, in kernel space – KTHREAD (kernel thread block) – scheduling and synchronization info, kernel-mode stack, pointer to TEB, in kernel space – TEB (thread environment block) – thread id, user-mode stack, thread-local storage, in user space

Windows Threads Data Structures [figure: ETHREAD (thread start address, pointer to parent process) and KTHREAD (scheduling and synchronization information, kernel stack) in kernel space; TEB (thread identifier, user stack, thread-local storage) in user space]

Linux Threads - Linux refers to them as *tasks* rather than *threads* - Thread creation is done through `clone()` system call - `clone()` allows a child task to share the address space of the parent task (process) - Flags control behavior

| flag | meaning |
|---------------|--------------------------------------|
| CLONE_FS | File-system information is shared. |
| CLONE_VM | The same memory space is shared. |
| CLONE_SIGHAND | Signal handlers are shared. |
| CLONE_FILES | The set of open files is shared. |

- `struct task_struct` points to process data structures (shared or unique)
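The flag semantics in the table can be demonstrated with a minimal Linux-only sketch using glibc's `clone()` wrapper. This is a hedged illustration: `child_fn` and `run_clone_demo` are invented names, and the 64 KiB stack size is an arbitrary choice. Because `CLONE_VM` shares the address space, the child's write to `counter` is visible to the parent, exactly as a thread's would be.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

static int counter = 0;

static int child_fn(void *arg) {
    (void)arg;
    counter = 42;   /* visible to parent only because of CLONE_VM */
    return 0;
}

/* Create a child task sharing memory, filesystem info, and open
 * files with the parent; returns counter as seen after the child
 * finishes, or -1 on failure. */
int run_clone_demo(void) {
    const int stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack) return -1;
    /* The stack grows downward on x86/ARM: pass the top of the buffer.
     * SIGCHLD makes the child reapable with waitpid(). */
    pid_t pid = clone(child_fn, stack + stack_size,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (pid < 0) { free(stack); return -1; }
    waitpid(pid, NULL, 0);
    free(stack);
    return counter;
}
```

Dropping `CLONE_VM` from the flags would make this behave like `fork()`: the child would write to its own copy of `counter` and the parent would still see 0.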
Practicing Domain-Specific Languages: From Code to Models

Laure Gonnord, Sébastien Mosser

HAL Id: hal-01865448
https://hal.archives-ouvertes.fr/hal-01865448
Submitted on 31 Aug 2018

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Laure Gonnord
Univ Lyon, Université Claude Bernard Lyon 1
LIP, CNRS, ENS de Lyon, Inria
F-69342, LYON Cedex 07, France
Laure.Gonnord@ens-lyon.fr

Sébastien Mosser
Université Côte d’Azur, CNRS, I3S
Sophia Antipolis, France
mosser@i3s.unice.fr

ABSTRACT
This paper describes our experience in constructing a new graduate-level Domain-Specific Language course whose objective is to reconcile concepts coming from the Language Design and Modeling domains. We illustrate the course using the reactive systems application domain, which keeps us from falling into the toy-example pitfall. This paper describes the nine stages used to guide students through a journey that starts at low-level C code and ends with the usage of a language design workbench. The course was given as a graduate course at Université Côte d’Azur (8 weeks, engineering-oriented) and École Normale Supérieure de Lyon (13 weeks, research-oriented).
CCS CONCEPTS
• Software and its engineering → Compilers; Designing software; Development frameworks and environments;

KEYWORDS
Software Modeling, Domain-Specific Languages, Code Generation, Code Abstraction, Reactive Systems

1 INTRODUCTION
The foundation of the course we propose is the observation that the numerous concepts behind “Language Design” and “Software Modeling” are difficult for students to apprehend. From the point of view of educators, numerous difficulties arise whenever we build a new instance of either of these two courses. On the one hand, teaching language design (at the graduate or undergraduate level) is a necessary but painful task. Students struggle to understand why they need to know how to design languages: writing a compiler (or an interpreter) is a tedious task, and the large number of existing programming languages ensures that at least one will implement the feature needed for a given purpose. In addition, these kinds of courses are often implemented under the name “Compilation”, usually focus on lexical analysis, attributed grammars or symbol resolution [1, 8], and forget the most relevant parts, namely abstractions and semantics. It is then hard to focus on the part related to Language Design when students are ensnared in a difficult (from a theoretical point of view) and technical (from a practical point of view) context. However, we argue that in addition to the underlying foundations of this field, it is important for students to understand how to design a language. It will help them create their own when relevant, but also help them classify existing ones and support their choices. On the other hand, teaching modeling is also a necessary but painful task [5, 6]. We consider it necessary because the essence of modeling is abstraction, i.e., the ability to remove unnecessary details from a complex situation. But finding the right way to teach modeling is complicated [2].
Students might struggle with complicated technological stacks and syntactical issues in the UML [7], and have difficulty understanding the differences between models and meta-models when applied to simple toy examples. We argue that software developers must be confronted with modeling during their studies to identify the strength of abstraction-driven approaches. Clearly, a “modeling for modeling” approach does not work, and the infamous “UML to Java” example [4] cannot reasonably be used in 2018 to support model-based courses. During a conference dedicated to software engineering and programming languages hosted in Besançon in 2016, the two authors met and exchanged the views described in the two previous paragraphs. During this discussion, it became clear that the main issue was considering Software Language Engineering (SLE) and Model-Driven Engineering (MDE) as two disjoint sets. We decided to leverage our experience in teaching languages and models to create a common syllabus for a course shared by École Normale Supérieure de Lyon (ENSL) and Université Côte d’Azur (UCA). This course uses the point of view of Domain-Specific Languages (DSLs) to support the teaching of language design and abstraction, using a practical approach. We used as foundations a case study dedicated to embedded devices from a language point of view [3], coupled with our experience in teaching embedded systems from a reactive-programming point of view. The goal of this paper is not to describe the syllabus of the course¹ but to share its rationale, with the description of a lab session that goes through multiple levels of abstraction and different technological stacks. The paper is organized as follows: in Section 2 we depict the objectives we identified for this course, and the lab examples it is built on.
In Section 3, we develop the objectives of each of the first steps of the “Minimal and Viable example”, which illustrate different levels of abstraction, from code to models (of code). Then, in Section 4, we show alternative approaches working at the language design level to illustrate modeling concepts. Finally, the last section (Section 5) gives more information about how the course was implemented in both universities.

¹ Available online: https://github.com/mosser/sec-labs

2 COURSE OVERVIEW
The keystone of the pedagogical approach we follow is to use the predisposition of students to work with code to catch their curiosity and make them work on the concepts that drive this course: identifying abstractions to design languages. Considering DSLs as the object under study, the course we propose then has the following objectives:
- O1 Illustrate how to abstract code into models;
- O2 Identify how to operationalize models according to different targets (e.g., ease of development, intended users);
- O3 Study the relationships that exist between concepts and tools;
- O4 Acquire experience in modeling through hands-on lab sessions.
The course is implemented in a two-phase fashion. The first phase relies on a minimal and viable lab example, developed in Sections 3 and 4: the rationale of this phase is to discover and experiment with the main DSL concepts in a guided way. The second phase consists in the creation of a complete language for a new domain in an unguided way. The approach we propose is entirely open source, with lightweight technology and low-cost embedded devices as lab material.
Thanks to this approach, the students progressively acquire the following knowledge and skills:
- The definition and practical use of the following concepts: model, meta-model (and their synonyms in language theory: languages, grammars), object-orientation and reflexivity;
- A methodology to design a new language for a specific application domain: identify what is reusable and what needs extensions, make rational implementation choices, test;
- Experience in designing a real-life DSL, targeting a business-driven case study.

Minimal & Viable Lab Example
The lab example we propose is based on the Arduino open-source technology and the use of some sensors and actuators: one 7-segment display, a button and a LED per platform. The platform can be built around a breadboard as in Fig. 1 or with a pre-built Arduino shield (Fig. 2). An Arduino Uno micro-controller costs 20€. To build the breadboard version, one must buy small electronic hardware (a breadboard, a LED, a button and a display) for approximately 10€. The shield version is more expensive, as the shield can cost up to the price of the Arduino board itself, depending on the vendor. A platform can be used by one group of up to four students, where two is the right team size based on our experience. In this lab, we propose a sequence of “stages”, each stage being built in the same two-step way:
1. Students are given a minimal working example (switching the LED on and off) of the language/technology used in the stage. They experiment with it and begin to criticize the solution in terms of performance, readability, usage, …
2. They then modify the example in a non-trivial way to turn it into a viable example, representative of the domain. We propose to use a 7-segment display to count time and reset. This example is non-trivial since it requires introducing memory state.
The two applications need to be composed on the very same board: pushing the button changes the LED state and also resets the counter to 0.
In the next two sections, we focus on the description of the nine stages identified in this minimal and viable example. The idea is to describe what we give to the students to kick off the work for each stage, and the questions we use to drive the associated “step back” discussions and guide their report writing.

3 FROM CODE TO MODELS
To avoid falling back into the toy-example pitfall, we illustrate the course using the reactive systems application domain. This domain is pertinent since there is a long tradition of designing specialized languages and development processes for this kind of software, especially in the area of critical embedded systems. Here we choose a less ambitious subdomain, namely programming a micro-controller reacting to tiny sensors and operating on actuators. However, the modeling and code issues that arise from this simplified case are representative and a good abstraction of the real one. This section illustrates both objectives O1 and O2 of the course. Indeed, the first three stages of the lab illustrate different levels of abstraction one can use while programming an Arduino-based reactive system. From these stages, we start discussions about tools and methods to gain abstraction in a piece of code (O1), and also the pros and cons of the different approaches according to different operational targets (O2).

Listing 1: Minimal example: Plain C code
```c
#include <avr/io.h>
#include <util/delay.h>   /* for _delay_ms */

int main(void) {
  DDRB |= 0b00100000;     /* pin 13 (PB5) configured as output */
  while (1) {
    PORTB ^= 0b00100000;  /* toggle the LED */
    _delay_ms(1000);
  }
  return 0;
}
```

3.1 Plain Code (C)
The first stage uses C code working at the level of the micro-controller registers. We provide a running piece of code (Lst. 1), as well as the environment to compile it (with avr-gcc) and to upload the compiled image to the micro-controller (with avrdude) via a Makefile. The given code makes a LED plugged on pin 13 blink forever at a frequency of 1 Hz.
At this low level of abstraction, accessing sensors and actuators consists in injecting an electrical current into the micro-controller's physical pins. At the code level, this is done through parallel writes to ad-hoc registers called PORTx (x being B, C or D), which are configured with the help of the corresponding DDRx registers (input: 0, output: 1). For instance, in Lst. 1 we set up the LED plugged to pin 13 as an actuator by setting to 1 the 5th bit of the DDRB register (the first 8 pins being handled by DDRD, $13 = 5 + 8$). Then, thanks to an infinite loop, we switch it on and off with a xor applied to the very same bit of the PORTB register. Considering this piece of code, students are asked to answer the following questions, and invited to elaborate and argue their answers on paper:
- What can we say about the readability of this code? What are the skills required to write such a program?
- Regarding the application domain, could you characterize the expressivity? The configurability of the code to change pins or behavior? Its debugging capabilities?
- Regarding the performance of the output code, what kind of parallelism is expressed by the use of the DDRx registers?
- What if we add additional tasks to the micro-controller code, with the same frequency? With a different frequency?

3.2 Using the Arduino Library (C)
The second stage uses the Arduino library, a C++ library provided by the Arduino designers. This library provides higher-level access to each pin individually, for example to configure it in write or read mode with a function called pinMode, or to control the electrical current sent to a given pin with a call to digitalWrite. The code given to students is depicted in Lst. 2. This code is an iso-functional version of the one depicted in Listing 1, using the Arduino library.
Based on the given example and the realization of the 7-segment counter, we ask students to discuss the following questions in their report:
- Is this language extensible enough to support new features?
- What is the price for the developer?
- Who is the public targeted by this "language"?
- What kind of parallelism can still be expressed?

3.3 Programming a Finite State Machine (C)
The two applications to be developed for the micro-controller help students identify the need for abstraction when targeting a specialized domain. They easily identify that the LED and 7-segment functionalities can be modeled as a Finite State Machine (FSM). Before moving to a model-driven approach, we use at this stage a convention-based approach to reify abstractions at the code level. Considering a system where one can express transitions between states, it is possible to implement such an FSM using functions as states, and conditional instructions coupled with terminal function calls as transitions (see Lst. 3). Using this abstraction, we raise the following questions to help students understand the importance of abstraction at the code level:
- Does introducing a convention solve the readability issue?
- How to extend an app with a new feature? Does the approach prevent one from performing invasive changes in the existing behavior to introduce a new one?
- How to extend the code to support new features, e.g., memory-less tasks, state-full tasks, different frequencies?

3.4 Modeling an ArduinoApp (UML & Java)
We use this stage to leverage the insights gained in the previous one, and to emphasize the importance of working at the model level when dealing with abstractions. The idea here is to show, firstly, that working with models frees the user from the syntax, and secondly, that code generation mechanisms can be used to reach the previously defined operational target.
Using a model-driven approach, it becomes clear to the students that the user is now restricted to the vocabulary available in the meta-model, and cannot deviate from it. This helps to position the meta-model as the abstract syntax of the language, defining which concepts need to be exposed to the user. With respect to code generation, we show how the concepts of the meta-model are visited to systematically generate code for the app (Lst. 4).

Listing 3: Minimal example: functional FSM
```c
#include <avr/io.h>
#include <util/delay.h>
#include <Arduino.h>

int led = 13;

void state_off(void);  /* forward declaration */

void state_on(void) {
  digitalWrite(led, HIGH);
  _delay_ms(1000);
  state_off();
}

void state_off(void) {
  digitalWrite(led, LOW);
  _delay_ms(1000);
  state_on();
}

int main(void) {
  pinMode(led, OUTPUT);
  state_on();
  return 0;
}
```

Listing 4: Simple FSM Visitor implementation
```java
public class ToC extends Visitor<StringBuffer> {
  @Override
  public void visit(App app) {
    c("#include <avr/io.h>");
    c("#include <util/delay.h>");
    c("#include <Arduino.h>");
    c("");
    c("void setup() {");
    for (Actuator a : app.getActuators()) {
      a.accept(this);
    }
    c("}");
    for (State state : app.getStates()) {
      h(String.format("void state_%s();", state.getName()));
      state.accept(this);
    }
    if (app.getInitial() != null) {
      c("int main(void) {");
      // ...
    }
  }

  @Override public void visit(Actuator actuator) { /* ... */ }
  @Override public void visit(State state)       { /* ... */ }
  @Override public void visit(Action action)     { /* ... */ }

  private void c(String s) { this.code.append(String.format("%s\n", s)); }
}
```

- From the user's point of view, what does this change? Is the approach usable for large apps?
- Consider the LED app and the counter as two separate models. Is it possible to automate the creation of the final app based on these two models?
- What about the readability of the generated code compared to the previous one, written by hand? Its debugging capabilities? Its extensibility?
- Explain the interest of modeling in terms of genericity and functional property verification.
3.5 Remodeling an ArduinoApp (UML & Java)
During the previous stage, students quickly understand that working with the meta-model defined in Fig. 3 is not suitable for large applications: the final FSM is the cartesian product of the two apps (LED and counter). We offer them two choices here:
- Creating a composition operator to support the combination of elementary applications into complex ones;
- Switching to another kind of abstraction that provides better support for end-users.

3.6 Conclusions
In this section we depicted how, from low-level Arduino code, where a single programming model is promoted, we abstracted the domain-specific features by modeling them in different ways. We also showed that the choice of the modeling paradigm has a substantial impact on the expressivity and extensibility of further developments. Thanks to a code-first approach and a journey through abstraction levels, we manage to lead the students to a point where they recognize the value of models and see the benefits of using such artifacts. From now on, we will no longer change the abstraction level, but tour different domain-specific language and meta-modeling paradigms that permit generating domain-specific code.

4 FROM MODELS TO DSLS
Thanks to the previous stages, we are now working at the model level. The following stages explore how models relate to tools (O3), and how such models and tools can target different users (O4). Contrary to the previous stages, which were sequential, these stages are independent, as they address different paradigms in a hands-on fashion.

4.1 Integrating an Existing DSL (LUSTRE)
In Section 3 we reached the conclusion that a reactive-system representation was a suitable way to model the domain in a scalable way, avoiding costly code generation without sacrificing expressivity or end-user usability.
As a consequence, now that we have come to this new paradigm for modeling, why not search for an existing (possibly domain-specific) language implementing reactive systems, so as to reuse it for our particular purpose? The LUSTRE synchronous language was intended for the design of critical real-time embedded systems, but some education-driven experiments have been conducted in real-time programming courses. This stage takes inspiration from them. The minimal code depicted in Lst. 7 illustrates the key feature of this DSL: only the functionality of the infinite loop is described, avoiding implementation details as well as non-logical time. Here, the node describes the actuator "led on" as a boolean output whose value is false during the first period, then, forever, the negation of its value at the preceding period (pre(led_on)). The infinite loop depicted in Listing 8 (where ctx denotes the context, i.e., the current state) is compiled from this description (and the desired frequency). The user must also write the glue code in a separate file. The compilation chain and its relationship with the application domain being depicted in Figure 5, the students are invited to argue their answers to the following questions:
- Who is the intended user of such a language?
- What is the cost, in terms of code, of reusing this existing DSL for the developer?
- What is the cost of adding a new task of our domain?
- What is the cost of adding a new hardware target?

¹ http://www-verimag.imag.fr/The-Lustre-Programming-Language-and

Figure 4: Minimal reactive meta-model
Figure 5: Lustre compilation chain for Arduino

Listing 8: Generated C code for Lustre, and glue code
```c
// cpt.c - Generated
void cpt_step(cpt_ctx* ctx) {
  //...
  cpt_O_led_on(...);
}

// main.c - Generated
int main() {
  //...
  while (1) {
    cpt_step(ctx);
    _delay_ms(1000);
  }
  return 1;
}

// Glue code (Arduino target) - Hand written
void cpt_O_led_on(void* cdata, _boolean _V) {
  if (_V == 1) digitalWrite(led, HIGH);
  else digitalWrite(led, LOW);
}
```

- The Lustre language imposes memory to be bounded by construction. Is this a limitation for our (sub)domain?
- The Lustre language comes with its own ecosystem (test, formal verification); what generic properties of our domain can we imagine proving?

Listing 9: Minimal example: Reactive code using ANTLR
```
App: Blinking
  led is an actuator bound to pin 13
  producer: quartz
    emit "tick" at 1Hz
  consumer: blinker
    bool state initialized as true
    state : led is HIGH
    !state: led is LOW
    state is !state
  blinker listens to quartz
```

4.2 Designing an External DSL (ANTLR)
The Lustre stage illustrates how to find and reuse an existing DSL that might fit a given purpose. In this stage, we ask the students to define a dedicated external language, reifying the domain concepts associated with their choice (FSM & composition, or reactive system) directly in a dedicated syntax. We give the students a kick-off implementation of an external grammar (using ANTLR), and an evaluation of the Abstract Syntax Tree that produces an instance of the previously defined meta-model (as a Java object). We also provide a program conforming to the defined syntax (Lst. 9), and the command-line script to call the compiler and produce the reactive code associated with such a program. Based on their extension of the grammar to support the counter application, students are asked to discuss the following points:
- Who is the intended user? What about the tooling associated with the language?

http://www.antlr.org/
4.3 Designing an Embedded DSL (e.g., Groovy)
Considering the cost of defining an external language, we explore here how to embed abstractions in an existing language instead of creating a new one from scratch. At this stage, we leave the students free to choose their technological stack, as we only provide a link to the embedded directory of the ArduinoML zoo of syntaxes⁷. The idea of the ArduinoML zoo is to provide alternative syntaxes (11 embedded ones and 4 external ones, provided by 9 contributors) for the FSM meta-model described in the previous section. Students can then pick one familiar language example from the zoo and implement the counter application using their favorite language. When a language is not present, students are encouraged to publish a pull request on the GitHub repository to update the zoo. Often, students choose the Groovy language to support their work at this stage, considering the large amount of documentation available (and maybe biased by their previous knowledge of Java). Based on their work to adapt the ArduinoML example to the counter application, students are asked to discuss the following questions:
- How to choose between embedded and external?
- What is the impact of the host language choice?
- What about the maintainability of the concrete syntax?
- Who is targeted as the audience of this class of languages?

4.4 Using a Language Workbench (e.g., MPS)
Considering the cost of designing an external language from scratch, and the intrinsic limitations of the embedded approach, we propose here to explore how dedicated workbenches can be used to model languages. The key point is to make students understand that language design is "simply" another domain, and that domain-specific tooling can be defined to support it, following the very same approach they just used to support Arduino application designers.
We chose the Meta Programming System⁸ (MPS) to support this stage, and also provide a link to the Xtext⁹ version of the ArduinoML syntax for interested students. We give the students a reference structure and the associated projection to reach a concrete syntax (Fig. 6). Students can immediately use the generated environment and experiment with code completion, syntax coloring and type constraints, which come at no additional cost. When the implementation of the counter app is finished, we ask the students the following set of questions:
- What is the cost/benefit ratio of using a workbench?
- What are the limitations of such an approach?
- What about vendor lock-in?

4.5 Conclusions
In this section, we described four versions of the same language, using alternative modeling approaches to support its implementation. First, reusing a dedicated language helped us discuss the concepts of domain scope and model integration (through glue code). Then, we explored three different ways to create new languages capturing domain abstractions. These different ways helped us discuss, among other things, domain evolution, meta-modeling principles, and user relevance.

5 LOGISTICS & EVALUATION
This course follows up on a five-year-old course about domain-specific languages taught at UCA. The new version is part of two different curricula: the "Fundamental Computer Science" Master of Science at ENSL and the "Software Architecture" Master of Engineering at UCA. In Lyon, the format is 24 supervised hours, including closely related lectures and labs (13 weeks, 4 credits). The course was attended by a small number of students who had never attended any software engineering course and are inexperienced in language design; however, they have a broad knowledge of semantics and program abstractions. At UCA, the course is classically attended by a large number of students (35), and lasts 8 weeks for 2 credits.
The evaluation differs, as UCA values an engineering approach (thus evaluating a project) while ENSL is a research-oriented environment (half of the evaluation is based on a bibliographic study about DSLs, models and languages). We consider basic notions of software development and modeling as prerequisites.

5.1 Case study examples for Phase #2
The nine different implementations of the reactive language are used in the course to support the first phase, where students explore in a guided way how to work with abstractions on a given domain. As educators, we guided their journey by providing reference code and step-back questions. The second phase of the course relies on the capture of a new domain, based on the experience gained during the first phase. We briefly describe here several case studies used in the past to support the teaching of DSLs and meta-modeling.
- Sensor simulation: create a language supporting the modeling of sensors to support load testing of data collection middleware. Students have to model sensors based on polynomial interpolations, Markov chains and replay from legacy datasets, and define an execution environment to send the simulated data to a time-series database (e.g., InfluxDB).
- Application deployment: create a language to support the deployment of services in a distributed environment. Students have to capture what a service is and how services relate to each other, create a deployment plan, and upload the different codes to the modeled topology in order to set up a running ecosystem.
- Scientific workflow: create a language to support the modeling of scientific workflows (e.g., grid-computing data processing, machine-learning workflows). Students have to capture concepts like data sources, sinks, processors, and data links to transfer data among processors.

⁷ https://github.com/mosser/ArduinoML-kernel/tree/master/embedded
⁸ https://www.jetbrains.com/mps/
⁹ https://www.eclipse.org/Xtext/
They must also deliver an execution context that respects the expected semantics for data flows.

6 PERSPECTIVES

The course will be offered again at UCA and ENSL next academic year. Discussions have started to implement it at Université du Québec à Montréal at the graduate level in the upcoming years. We plan to consolidate the available material published on GitHub, which is for now scattered among several repositories (one instance per course plus the ArduinoML zoo), into a single one. We also plan to start communicating about this course in the model-driven engineering and software engineering communities to gather feedback from researchers and improve the lab contents. An in-depth evaluation of the course outcome is ongoing work, as we plan to better evaluate this point in the new instances of the course.

Acknowledgments

The authors want to thank the GdR GPL for supporting collaboration between researchers in France at the national level, which allowed the creation of such an inter-university initiative. We also want to thank Benoît Combemale for the fruitful discussions that helped us classify the different stages of the labs.

REFERENCES
Finalization report: homogeneous PVM/PARIX
Overeinder, B.J.; Sloot, P.M.A.; Petersen, J.

Commission of the European Communities

************************
ESPRIT III PROJECT NB 6756
************************
CAMAS
COMPUTER AIDED MIGRATION OF APPLICATIONS SYSTEM
************************
CAMAS-TR-2.3.4 Finalization Report Homogeneous PVM/PARIX
************************

Date: October 1994

ACE - U. of Amsterdam - ESI SA - ESI GmbH - FEGS - Parsytec GmbH - U. of Southampton.

Technical Report CAMAS-TR-2.3.4
Finalization Report Homogeneous PVM/PARIX
B. J. Overeinder and P. M. A. Sloot, University of Amsterdam
J. Petersen, Parsytec GmbH
October 1994

Abstract

This document reports on the design and implementation considerations of PVM/PARIX, homogeneous version 1.0. This version is for use with PARIX 1.2 only. Further, it contains information on how to use Homogeneous PVM/PARIX, and the appendix contains the installation notes.

1 What Is Homogeneous PVM/PARIX?

The popularity of PVM nowadays can be partly explained by the inherent portability of PVM programs over a large number of parallel systems.
The spectrum of parallel systems ranges from loosely coupled networks with remote shell capabilities, via parallel clusters, to MPP architectures. The use of PVM as a parallel programming environment for MPP architectures raises the following observation: the heterogeneity which made PVM so popular in the beginning now seems a drawback in the application of PVM to MPP architectures, due to the large overhead needed to handle this heterogeneity. This is especially true for the communication primitives; e.g., communication latencies between processes inside the MPP can become quite large.

This has motivated the development of a so-called homogeneous version of PVM for the PARIX parallel operating system. This PVM/PARIX version carries the qualifier "homogeneous", which essentially means that all PVM processes are to be run on the MPP system only. Thus, the parallel virtual machine can only consist of the nodes in the Parsytec MPP, and PVM inter-process communication with the front-end system is not supported (of course, I/O with the front-end is possible; this is handled by PARIX). It is therefore not possible to send messages to PVM tasks outside the MPP. The large advantage, however, is that the communication latency and throughput are improved drastically, resulting in a communication performance which is almost as fast as that of the underlying PARIX layer.

The user interface of this Homogeneous PVM/PARIX version is somewhat different from that of standard PVM. There is no console present, for example to start a virtual machine, and Homogeneous PVM/PARIX programs are issued as normal PARIX programs, e.g., with the `run` or `hpvmrun` script (see also Section 4). The homogeneous version of PVM for PARIX is an implementation of PVM 3.2.6 [1] on top of PARIX 1.2 [2]. All PVM functionality applicable to MPP systems has been incorporated, with the exception of \texttt{pvm\_recvf()}.
2 Design Considerations

2.1 General Overview of the Design

The Homogeneous PVM/PARIX version is designed and implemented on top of an asynchronous communication layer, called the Communication Kernel. The Communication Kernel, in its turn, is implemented on top of PARIX, the operating system for Parsytec's parallel architectures (see also Fig. 1).

Figure 1: The Homogeneous PVM/PARIX design overview.

The advantage of this layered design is that the functionality gap between PVM and PARIX is bridged by the Communication Kernel. This strategy results in an implementation where the PVM intrinsics are clearly separated from the characteristics that depend on the MPP's operating system. This improves maintainability and portability to newer PVM versions.

2.2 Design Considerations of the Communication Kernel

The Communication Kernel's primary raison d'être was the need for typed asynchronous buffered communication, which is PVM's basic message passing model. The implementation of typed asynchronous buffered communication in a separate layer is motivated by the generic character of this message passing model. During design and implementation, other functionality needed for a PVM implementation was considered generic enough to be added to the Communication Kernel. As an extension to the point-to-point communication, a multicast communication primitive has been integrated into the Communication Kernel. Other functionality added to the Communication Kernel is dynamic remote context creation and a run server capable of load balancing according to a round-robin strategy. Together, these two components efficiently support the `pvm_spawn` call.

2.3 Design Considerations of Homogeneous PVM/PARIX

With the effective support of the Communication Kernel, the PVM implementation has become quite straightforward. Many of the PVM/PARIX calls are implemented without complications on top of the Communication Kernel.
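To make the Communication Kernel's role concrete, the following sketch illustrates the idea of typed asynchronous buffered communication in plain C: a send buffers a copy of the payload and returns at once, while a receive consumes the first buffered message whose tag matches. All names (`ck_send`, `ck_recv`) are invented for illustration; this is not the real PARIX-based kernel, which also handles remote delivery and multicast.

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* One buffered message: a tag (its "type") plus a private copy of the data. */
typedef struct msg {
    int tag;
    size_t len;
    char *data;
    struct msg *next;
} msg;

static msg *queue_head = NULL, *queue_tail = NULL;

/* Asynchronous send: copy the payload into the queue and return immediately. */
void ck_send(int tag, const void *data, size_t len) {
    msg *m = malloc(sizeof *m);
    m->tag = tag;
    m->len = len;
    m->data = malloc(len);
    memcpy(m->data, data, len);
    m->next = NULL;
    if (queue_tail) queue_tail->next = m; else queue_head = m;
    queue_tail = m;
}

/* Typed receive: consume the first queued message with a matching tag
 * (tag == -1 matches any). Returns bytes copied, or -1 if none buffered. */
long ck_recv(int tag, void *buf, size_t maxlen) {
    msg **pp = &queue_head;
    for (; *pp; pp = &(*pp)->next) {
        if (tag == -1 || (*pp)->tag == tag) {
            msg *m = *pp;
            size_t n = m->len < maxlen ? m->len : maxlen;
            memcpy(buf, m->data, n);
            *pp = m->next;                      /* unlink */
            if (m == queue_tail) {              /* recompute tail */
                queue_tail = NULL;
                for (msg *q = queue_head; q; q = q->next) queue_tail = q;
            }
            free(m->data);
            free(m);
            return (long)n;
        }
    }
    return -1;
}
```

Decoupling the tag-matching buffer logic from the transport, as above, is what lets the PVM intrinsics sit cleanly on top of an MPP-specific layer.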
The most prominent implementation efforts for PVM/PARIX were the multiple message buffering scheme and the group server. The implementation of the PVM multiple message buffering scheme is as flexible as can be: there is no restriction on the number of message buffers, and for each buffer there is no limit on its size.

Group communication and synchronization (`pvm_bcast` and `pvm_barrier`, respectively) are administered and coordinated by the group server. The group server is an independent thread/context running on processor number 0, but it could be placed on any other processor (processor number 0 is always present). In standard PVM, the group server is built on top of the PVM routines. In our design, the group server is implemented on top of the Communication Kernel, and thus resides in the same layer as PVM. This not only improves performance by circumventing an extra layer, but also has the advantage that the group server can make direct use of the multicast provided by the Communication Kernel.

It is worth mentioning that the Homogeneous PVM/PARIX implementation supports multiple (dynamic) PVM tasks per MPP node. Many other MPP-specific PVM implementations support only one task per node.

3 Programming with PVM/PARIX

The Homogeneous PVM/PARIX version is software compatible with other implementations of PVM 3.2.6 as described in [1], with the exceptions discussed in Section 5.

**Common Programming Practices**

With the Homogeneous PVM/PARIX version, PVM programs become normal PARIX executables, just like any PARIX program you would write yourself. However, the programming practice of writing PVM programs has not changed. Each PVM program has to include `pvm3.h` (C) or `fpvm3.h` (FORTRAN), found in the `include` directory of the distribution. The first PVM call in any program has to be `pvm_mytid()` (or `pvmfmytid` in FORTRAN programs). This routine initializes the PVM layer and enrolls the task in PVM.
In order to terminate the PVM program, `pvm_exit()` **has to be** called. Without calling this function, each node will hang forever, waiting for the other nodes to call `pvm_exit()`.

**Compiling PVM Programs**

Compiling PVM programs is like compiling PARIX programs, with the exception of the library that is to be linked with the application. Depending on whether you are compiling a program for Transputer or PowerPC systems, you prefix the compiler call with `px` for Transputer architectures, and `ppx` for PowerPC architectures. The command line to compile a program for a Transputer system would look like:

```
$ px cc.px -Ipvmdir/include file.c -o file -Lpvmdir/lib -lpvm3
```

or, using FORTRAN:

```
$ px f77.px -Ipvmdir/include file.f -o file \
    -Lpvmdir/lib -lfpvm3 -lpvm3
```

When you want to compile a program that came with a Makefile, the simplest way to build the executable is:

```
$ aimk <target>
```

**Group Communication Library**

The group communication code has been integrated into the standard PVM library. It is therefore not necessary to link with a separate group library, in contrast to the standard PVM implementation. To prevent "old" makefiles from generating all sorts of errors because of a missing group communication library, PVM/PARIX comes with a dummy group library that contains nothing, but keeps the linker happy.

4 Running Programs with PVM/PARIX

**Starting Programs**

Since PVM/PARIX programs are regular PARIX programs, they can be run using the standard PARIX `run` utility. However, since many PVM makefiles automatically install executables in `$HOME/pvm3/bin/<ARCH>`, using `run` may be inconvenient. Therefore, PVM/PARIX comes with a simple front-end to `run`, called `hpvmrun`, that looks in different places for the binary to execute. The flags and arguments needed to run PARIX jobs should still be given to either `run` or `hpvmrun` (note: `hpvmrun` accepts the same arguments as `run`).
Executing a PVM/PARIX job on a four-processor partition becomes

```
$ hpvmrun -a p4 file
```

where `nrm` allocates the four nodes; or, with a pre-allocated partition,

```
$ hpvmrun -f0 2 2 file
```

Since a number of different run scripts exist, it is impossible to give detailed information on how to run PARIX jobs on your particular system. Please refer to run(1) for more information.

**NOTE:** Do not forget to increase the maximum number of virtual links if one of the following errors is reported: `PX_AllocLLink error` or `PX_AllocVLink error` for PowerPC, and `AllocLLink error` or `AllocVLink error` for Transputer. See also the "Release Notes PARIX 1.2-PPC", Section "Frequently Asked Questions".

**PVM Console**

There is no PVM console for the Homogeneous PVM/PARIX implementation, since all tasks can be managed from the command line. As a result, the output of each of the parallel tasks is sent to the terminal where `run` or `hpvmrun` was issued.

5 Notes on the Implementation

**Unimplemented Calls**

This PVM implementation adheres to the definitions in the PVM 3.2.6 manual, with a few exceptions (see also the manual pages [3]). Most notably, the functionality that is not applicable to a homogeneous MPP implementation is not supported (e.g., \texttt{pvm\_addhost}). Apart from this, several functions not suitable for MPP systems in general are not implemented, such as the signal capabilities. This affects the functions in Table 1.

\begin{table}[h]
\centering
\begin{tabular}{lll}
\texttt{pvm\_addhost} & \texttt{pvm\_delhost} & \texttt{pvm\_kill} \\
\texttt{pvm\_notify} & \texttt{pvm\_recvf+} & \texttt{pvm\_sendsig} \\
\texttt{pvm\_start\_pvmd} & \texttt{pvm\_tickle} \\
\end{tabular}
\caption{The functions indicated with a `+' will be implemented in a future release of PVM/PARIX; the other functions are not applicable to an MPP system and will therefore not be implemented.}
\end{table}

All non-implemented functions return \texttt{PvmNotImpl}.
**Process Grids**

Since PVM/PARIX is built on top of PARIX, it also has the same limitations with respect to the grids that can be built, which implies that one can only run PVM programs on an $m \times n$-grid.

**Different Libraries**

The Homogeneous PVM/PARIX distribution comes with a standard PVM3 library, and one compiled with debug information (\texttt{libpvm3g.a}). The latter might be useful when you encounter a bug in PVM/PARIX and want to fill in a bug-report form.

6 Performance Results

A ping-pong experiment was performed to measure the communication performance of the Homogeneous PVM implementation. The typical ping-pong experiment used on Parsytec parallel architectures is the SendLink/RecvLink benchmark, which sends packages of size 1, 4, 16, 64, 256, 1K, 4K, 16K, and 64K bytes. Each measurement for a package size is repeated thirty-two times, resulting in a mean value. The time measurements in the ping-pong experiment are performed in the following way:

\begin{verbatim}
t1 = time();
send();
receive();
t2 = time();
elapsed_time = (t2 - t1) / 2;
\end{verbatim}

The latency measured in Table 2 is the time to send one byte to, and receive it back from, the nearest neighbor in the process grid. The throughput is measured by sending a message of 64K bytes back and forth, and dividing the number of bytes by the elapsed time.

Figure 2: Communication performance of PVM versus PARIX on Parsytec GCel.
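The halved round-trip arithmetic above can be captured in a short sketch. The helper names and the sample numbers in the comments are invented for illustration; they are not re-measurements of the benchmark.

```c
/* One-way time: the ping-pong measures send + receive, so the
 * round-trip is halved, exactly as in the elapsed_time formula above. */
double one_way_usec(double t1_usec, double t2_usec) {
    return (t2_usec - t1_usec) / 2.0;
}

/* Throughput in Kb/sec, as reported in Table 2: message size in bytes
 * divided by the one-way time for the largest (64K) message. */
double throughput_kb_per_sec(double bytes, double usec) {
    double seconds = usec / 1e6;
    return (bytes / 1024.0) / seconds;
}
```

For example, a hypothetical 630 µs round trip yields a 315 µs one-way latency, and a 64K-byte message with a one-way time of roughly 59 ms yields a throughput near the 1082 Kb/sec reported for PVM on the GCel.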
\begin{table}[h]
\centering
\begin{tabular}{lll}
Machine/OS & Latency & Throughput \\
GCel/PARIX & 46 µsec & 1088 Kb/sec \\
GCel/PVM & 315 µsec & 1082 Kb/sec \\
Xplorer/PARIX & 89 µsec & 1032 Kb/sec \\
Xplorer/PVM & 152 µsec & 1027 Kb/sec \\
\end{tabular}
\caption{Communication latency and throughput.}
\end{table}

A Installing Homogeneous PVM/PARIX

**Where to Put PVM**

PVM can be installed either in a user's home directory, or in a system-wide accessible directory to ease sharing between different users. In any case, there must be a directory named `pvm3` somewhere on the file system. The complete path to this directory, including `pvm3`, will be denoted by `pvmdir` in this manual. Apart from this distribution tree, each PVM user has to create a `pvm3` directory in his home directory. This directory will contain his own PVM executables.

**The Distribution**

The Homogeneous PVM/PARIX distribution contains two files:

- `install`
- `HPvmParix-1.0.tar.Z`

The tar file is organized as follows:

- `pvm3/bin` Directory containing several example programs
- `pvm3/doc` Some documentation on this PVM implementation
- `pvm3/examples` Sources of standard PVM example programs
- `pvm3/include` Include files for C and FORTRAN programs
- `pvm3/lib` Libraries for PVM/PARIX, together with supporting tools
- `pvm3/man` Manual pages for Homogeneous PVM/PARIX

One megabyte of disk space is required for the complete Homogeneous PVM/PARIX installation.

**The Installation**

First, place the `install` script and the Homogeneous PVM/PARIX distribution somewhere on your system. The `install` script with its arguments takes care of the installation in the proper directory. To actually install the Homogeneous PVM/PARIX distribution, execute the install script:

```
$ ./install <pvmdir-path>
```

where `<pvmdir-path>` is the path where the `pvmdir` should be created.
For example, if you want PVM to be installed in `/usr/local`, you should run

```
$ ./install /usr/local
```

which creates `/usr/local/pvm3` and unpacks the distribution into it. If a different PVM version is already present, the install script will rename some of the existing tools in order to retain compatibility with the other version. You will not notice any difference when using either PVM version: every change is made completely transparent.

**NOTE:** If you plan to (re-)install a PVM version other than the heterogeneous PVM/PARIX in the same directory where Homogeneous PVM/PARIX resides, take care of the following.

- Before (re-)installation, save the files `pvmgetarch` and `aimk` from `pvmdir/lib`.
- After (re-)installation of the new PVM version, rename the files `pvmgetarch` and `aimk` that come with the new distribution to `pvmgetarch.org` and `aimk.org`.
- Finally, restore the saved (Homogeneous PVM/PARIX) versions of these programs.

This is necessary because the Homogeneous PVM/PARIX versions of these tools are *wrappers* around the original tools, and thus rely on the originals being present, renamed to `<name>.org`.

**Per User Installation**

In order for a user to use PVM, he must create a `pvm3` directory in his home directory to contain his own PVM binaries. To create this directory, execute the following commands:

```
$ mkdir $HOME/pvm3
$ mkdir $HOME/pvm3/bin
$ mkdir $HOME/pvm3/bin/<ARCHITECTURE>
```

where `<ARCHITECTURE>` is either `PARIXPPC_H` for the Homogeneous PVM/PARIX for PowerPC systems, or `PARIXT8_H` for the Homogeneous PVM/PARIX for Transputer T800 systems. Last, but not least, the path containing the PVM tools (`pvmdir/lib`) should be added to the user's `PATH` (command path) environment variable, and the environment variable `PVM_ROOT` should be set to `pvmdir`.

**Different PVM Versions**

As with standard PVM, `pvmgetarch` is supplied to determine on what system PVM/PARIX is running.
This script is a wrapper around the original `pvmgetarch`, which, if present, is renamed to `pvmgetarch.org` for this version. If a particular computer system (front-end) shares file systems with other front-end computers, it can become impossible to determine which PVM version has to be chosen: should it be Homogeneous PVM/PARIX on a PPC-based system, Homogeneous PVM/PARIX on a T800-based system, or the heterogeneous version? To solve this problem, you can set the environment variable `PVMPARIX` to `PARIXPPC_H`, `PARIXT8_H`, or `NO`. If `PVMPARIX` is set to `PARIXPPC_H` or `PARIXT8_H`, the `pvmgetarch` script selects the appropriate PVM version for PPC or T800 systems, respectively. If `PVMPARIX` equals `NO`, control is immediately given to the original `pvmgetarch` script to determine which PVM version you need.

References
GUEST EDITORS’ INTRODUCTION: EVOLVING CRITICAL SYSTEMS Lorcan Coyle, Mike Hinchey, and Bashar Nuseibeh, Lero—the Irish Software Engineering Research Centre José Luiz Fiadeiro, University of Leicester This special issue brings together key software engineering researchers and practitioners who both influence their organizations and evaluate the emerging practice of developing these new systems. We believe that the software engineering community must concentrate efforts on the techniques, methodologies, and tools needed to design, implement, and maintain critical software systems that evolve successfully. This special issue summarizes many of the topics discussed and embodies what we believe to be some of the most important research challenges for evolving critical software systems—without incurring prohibitive costs. Several widespread changes in software engineering highlight the importance of evolving critical systems (ECS). We’ve identified the following five game changers: - **Software ubiquity.** More software is being deployed in more consumer devices, which means failures are more likely to affect ordinary people. - **Software criticality.** As software embeds itself deeper into the fabric of society, single software failures have greater potential to affect more people. This increases the potential for software to be considered critical even when it isn’t complex. - **People-in-the-loop.** As software is deployed to control systems in which human actors participate, the issue of human interactions with software becomes more important. - **Entanglement.** Software dependencies have become more complex, and much real-world software is entangled with software developed by third-party providers. - **Increased evolution tempo.** The tempo of evolution will continue increasing as users expect more from software. The software market is often unforgiving when even small changes can’t be done cheaply and quickly. 
Taken together, these changes characterize evolving critical systems and frame the research agenda in this emerging area (www.lero.ie/ecs/whitepaper). **ECS MANIFESTO** ECS research challenges add to the broad software engineering research agenda, with more of a focus on predictability, quality, and the ability to change. Table 1 characterizes the criticality of software engineering research challenges for ECS. The fundamental research question underlying ECS research is this: how do we design, implement, and maintain critical software systems that are highly reliable while retaining this reliability as they evolve, without incurring prohibitive costs? Several demands must be met before ECS’s ideals can be realized. The changing development environment, for example, proves that we must maintain the quality of critical software despite constant change in its teams, processes, methods, and toolkits. Likewise, we must improve our existing software design methodologies so that they facilitate the support and maintenance of ECS—how can we use agile development methodologies to evolve critical software? We must also specify what we want to achieve during an evolution cycle and confirm that we’ve achieved the intended result (verification) and only the result intended (validation). We must thus elicit and represent requirements for change such that we ensure the changes take place correctly. Furthermore, we must develop techniques for better estimating specific evolution activities a priori, only attempting software change when we know for certain that evolution will be successful and that benefits will outweigh costs. For example, some systems shouldn’t evolve at all because the cost and risk of performing evolution successfully will exceed the system’s value by orders of magnitude. To prevent cost and time overruns, we must work toward developing objective criteria to help us decide whether a given system is in this class. 
All these requirements demand strategies to make model-driven, automatic evolution a better alternative to manual change. In cases where it isn’t appropriate to mechanize change, we must develop heuristics for determining when such an approach is viable. When humans must perform the change, we need to develop support tools that make this a less risky enterprise. We also need improved tools for traceability that keep various software artifacts—such as documentation and source code—in sync throughout the evolution cycle. Where regulatory compliance is required, these tools must ensure that evolution results in compliant software. Finally, during runtime evolution, we must ensure that developers adhere to runtime policies. We do so by developing techniques that can monitor and model changing requirements in dynamic environments, especially autonomic and adaptive software environments. We must also develop strategies for evolution that tolerate uncertainty in the operational environment, which changes deterministically, nondeterministically, or stochastically. We must then ensure that software never evolves into a state of unstable behavior. Given the tensions between the need for software change and the danger implicit in changing critical software, the most pressing question for practitioners is which processes, techniques, and tools can most cost-effectively evolve critical systems. We believe a concerted focus from the research community to overcome these challenges will be needed for ECS to have an impact on software engineering. IN THIS ISSUE This special issue contains contributions from leading participants in the field. In “Evolving Embedded Systems,” Gabor Karsai and coauthors discuss the importance of considering the interplay among requirements, processes, deployments, and tests’ evolution when developing embedded systems. 
They also address challenges relating to the different evolutionary time scales—whether the system is evolved at design time, load time, or runtime. In addition, the authors examine the importance of process and system co-evolution, especially with regard to embedded systems, discussing how verification can be used to check the correctness of load-time evolution. They conclude by addressing the difficulties inherent in testing software that evolves at load time or runtime, while also relating it to the use of online testing, built-in testing for load-time evolution, and the need to evolve the tests themselves, especially for runtime evolution. In “Evolving Software Architecture Descriptions of Critical Systems,” Tom Mens and coauthors discuss the use of architectural descriptions to describe software-intensive systems and how they can be used to handle increasing complexity to mitigate the risks incurred in constructing and evolving these systems. They also assess the use of model-transformation approaches to evolve models of software architectures and the co-evolution of architecture descriptions, software design, and implementation. They call for architectural change to be considered a first-class construct.

Consider the problem of assessing criticality in automotive systems. One important aspect of most such systems is functional safety. This is addressed in ISO 26262, a forthcoming standard for functional safety of electrical and electronic (E/E) systems in road vehicles. Currently a draft international standard (DIS), it's expected to be approved by the International Organization for Standardization by mid-2011. To reduce unpredictable product liability risks, all road vehicles brought to market after publication of the final standard must conform to ISO 26262. This means that E/E system developers must consider all development-process and product-property requirements mandated by the standard from the beginning.
To specify the criticality of E/E system malfunctions, ISO 26262 defines the Automotive Safety Integrity Level (ASIL), which ranges from A (lowest) to D (highest); a QM value indicates that a malfunction isn’t safety-related. As Figure A shows, developers must estimate three parameters to determine ASIL: - **Exposure (E).** This factor defines the probability of a system being in an operational situation that can be hazardous if coincident with a failure mode; rated on a scale from E0 (incredible) to E4 (high probability). - **Controllability (C).** This factor assesses the potential of avoiding specific harm or damage through the timely reactions of the persons involved; rated on a scale from C0 (controllable in general) to C3 (difficult to control or uncontrollable). - **Severity (S).** This parameter measures the potential extent of harm to an individual in a specific situation; rated on a scale from S0 (no injuries) to S3 (life-threatening or fatal injuries). Various interpretations of the categories' meaning have been defined for E, C, and S. To avoid different ASIL classifications for the same malfunction, the automotive industry must establish common criteria. An appropriate system design makes it possible to reduce the ASIL classifications of some elements, a process known as ASIL decomposition. In this case, the elements' independence after decomposition must be assured—“lower-quality” parts mustn’t affect the operation of “critical” parts that assure safety. For example, software malfunctions within the electronic actuator pedal (EGAS) system may lead to ASIL B, as the top of Figure B shows. This is decomposed into QM for the function level and B for the monitoring level (as the bottom of the figure shows), which assures safety by switching off the power stages if the function level malfunctions. In the case of E/E systems, automotive engineers are thus guided by a well-established set of different safety-integrity levels rather than a general notion of criticality.
However, technical challenges arise from the problem of coping with different ASILs within one application. And this situation will probably occur more frequently given the growing trend to build functionally cooperating networks of originally separate systems. More challenges arise if safety requirements conflict with other types of criticality, such as system reliability and availability (quality of service). For example, a fail-safe solution like switching off the power stages in the EGAS might lead to customer dissatisfaction if it occurs too often. Dieter Lienert is the head of a group for the engineering of software-intensive systems at Robert Bosch GmbH. Contact him at dieter.lienert@de.bosch.com. Stefan Kriso is a senior project manager at Robert Bosch GmbH. Contact him at stefan.kriso@de.bosch.com. **Figure A.** Automotive Safety Integrity Level determination. Electrical/electronic system developers must estimate three parameters—exposure (E), controllability (C), and severity (S)—to determine a malfunction’s ASIL. (Source: ISO/DIS 26262-3, Table 4.) **Figure B.** Example of ASIL decomposition for the electronic actuator pedal (EGAS) system. Top: Software malfunctions in the microcontroller (µC) are classified as ASIL B. Bottom: The monitoring level is classified as ASIL B, while the function level can be classified as QM if freedom from interference between the levels can be justified. Boutheina Chetali, Security Labs—Gemalto Smart-card certification’s evolution can’t be tackled as a whole, but only as an update evolving into a new service or capability. Software updates take place in the context of product surveillance and maintenance, two processes clearly defined by the certification scheme. To update a certified product and maintain the certificate, the developer must provide evidence that this update has no security impact.
Open smart cards Requirements such as these could burden a business model, where software updates must be frequent to keep pace with the rapid evolution of specifications. However, the most interesting evolution deals with new services first, then with new code applications. For that, the trend is the certification of “open” smart cards. A smart card that has been certified as “open” can load any kind of application as binary code in the field while keeping its certificate intact. An open card has no applications onboard and essentially becomes an “operating system”—such as a Java Card platform. This sophisticated application manages card resources securely by, for example, loading, installing, deleting, and delivering updates. The openness relies on the loading mechanism and its isolation properties, and these security mechanisms have been evaluated during certification to provide the necessary guarantees that the product can load any code and that two different applications can remain secure and protected from each other. Certification rules To maintain stability, certification includes rules called hypotheses that applications must respect before being loaded onto the card. These rules rely on blocking attack paths. One well-known rule advises that code be checked by a bytecode verifier, which means the application must be bytecode-verified before loading if onboard verification is not a feature of the card. The rule set represents the card issuer’s policy, but if the applications originate elsewhere, in different market sectors that don’t have the same historic level of security requirements, such as banking and mobile communication, reaching a common agreement on policy rules can be a complex task. Therefore, if the product is to be protected against any kind of application that could be loaded onto the card, the underlying software must include a large set of protective countermeasures.
For a resource-constrained device such as a smart card, adding ever more software countermeasures or onboard verification leads to performance issues during execution and also consumes card-memory space. Both are crucial for the end user’s satisfaction: the first determines the response time to a request, the second the number of applications that can be loaded onto the product. Essentially, then, the challenge resides in performance, and a trade-off must be made between security and speed. So the notion of evolution is not the same for each sector: the lifetime of a banking card differs from that of a SIM card. This is why taking certification requirements and evolution requirements for cards into account when hosting applications from different market sectors poses a daunting challenge for the smart-card industry. Boutheina Chetali manages the Formal Methods Group at Security Labs—Gemalto. Contact her at boutheina.chetali@gemalto.com. In “Evolution in Relation to Risk and Trust Management,” Mass Soldal Lund and coauthors argue that risk and trust management methodologies in general, and assessment in particular, aren’t well-equipped to deal with evolution. They go on to explore risk assessment from three perspectives: maintenance, in which old risks must be updated to take into account new risks introduced by changes; before and after, in which the assessor must be aware of current risks, future risks, and risks introduced by the change process itself; and continuous evolution, which involves identifying and assessing how risks evolve. The authors also explore using each of these perspectives from the specific viewpoint of the risk that trust relations impose on a system. They assert that the evolution of trust is much more challenging, given the highly dynamic nature of trust relations.
Finally, in “Why Critical Systems Need Help to Evolve,” Bernie Cohen and Philip Boxer analyze the difficulties in evolving complex sociotechnical systems by using the provision of orthotic services in the UK as an exemplar. They use three cuts to address the risks of ECS. The Heisenberg cut presents a mismatch between the model that defines a system and the reality of its interaction with stakeholders; the Heisenberg cut presents a mismatch between what behaviors can and can’t be predicted by its users, independently of their use of it; and the Endo-exo cut reflects the difference between what can and can’t be directly known by clients about their own needs. By enabling the members of and stakeholders in a sociotechnical system to analyze and project the experience of their own participation, the authors gain an understanding of how the orthotic service makes the three cuts and identify changes that could improve it. Significantly, these changes were ultimately rejected by the UK’s National Health Service. The authors suggest that this stemmed from a failure to understand the ecosystem in which the service was embedded and the wider implications of these changes beyond the orthotics services. In addition to this introduction, we’ve included three shorter practitioner contributions from Robert Bosch GmbH, Security Labs—Gemalto, and the Directorate General for Informatics in the European Commission. 
• “Citizens and businesses are empowered by eGovernment services designed around users’ needs and developed in collaboration with third parties, as well as increased access to public information, strengthened transparency and effective means for involvement of stakeholders in the policy process”; • “Mobility in the Single Market is reinforced by seamless eGovernment services for the setting up and running of a business and for studying, working, residing, and retiring anywhere in the European Union”; and • “Efficiency and effectiveness is enabled by a constant effort to use eGovernment to reduce the administrative burden, improve organizational processes and promote a sustainable low-carbon economy.” Aligned with this declaration, the European Commission is preparing to modernize its portfolio of mission-critical information systems to deliver and operate smart e-government services that are innovative and built from a user-centric viewpoint, enabling their participation in the underlying processes (empowerment); and streamline administrative processes in a learning organization to improve effectiveness, efficiency, and transparency, and to share and value intellectual assets through appropriate knowledge-management approaches. This new generation of critical systems will be built on three principles: harmonization and convergence of business processes, reusability and interoperability of information systems or systems components, and sharing services at the infrastructure level. These three layers of the EC’s IT Enterprise Architecture Framework will be enabled by an organization possessing the necessary IT governance arrangements and project management methodologies supporting top-caliber staff whose skills and knowledge will be continuously improved through collaboration in multidisciplinary teams.
The new mission-critical systems will be developed under the auspices of the EU’s Interoperability Solutions for European Public Administrations (ISA) program, which came into being on 1 January 2010. ISA’s focus is on back-office solutions to support the interaction between European public administrations and the implementation of EU policies and activities. It underlines the key role that standards and interoperability at all levels—legal, organizational, semantic, and technical—will play in ensuring these new systems contribute to European integration. Developments will conform to the ISA’s European Interoperability Strategy, will be based on its European Interoperability Framework, and will respect the related architectural guidelines. The program thus defines the architecture for the next generation of evolving critical information systems upon which the EU will rely to implement the Europe 2020 vision articulated by EC President José Manuel Barroso. E-government is now mainstream; in the EU and elsewhere it will be the catalyst in transforming public administrations over the next decade. The challenge for today’s public-sector CIOs is to build evolving critical information systems that offer more online public services, streamline administrative procedures and cut red tape, and implement innovative service delivery mechanisms. The emerging evolving critical systems research domain will contribute to meeting this challenge. Declan Deasy is Director of Information Systems at the European Commission in Brussels. Contact him at declan.deasy@ec.europa.eu. Franck Noel is Deputy Head of Unit in the European Commission in Brussels. Contact him at franck.noel@ec.europa.eu. MAY 2010 Mike Hinchey is scientific director of Lero and a professor of software engineering at the University of Limerick. His research interests include self-managing software and formal methods for system development. Hinchey received a PhD in computer science from the University of Cambridge. 
He is a senior member of IEEE and currently chairs the IFIP Technical Assembly. Contact him at mike.hinchey@lero.ie. Bashar Nuseibeh is a professor of software engineering and chief scientist for Lero. He is also a professor of computing at The Open University, UK, and a visiting professor at Imperial College London and the National Institute of Informatics, Japan. His research interests include requirements engineering and design, security and privacy, and technology transfer. Nuseibeh holds a PhD in software engineering from Imperial College London and is editor in chief of IEEE Trans. Software Eng. and editor emeritus of the Automated Software Eng. J. He is also a fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology, and an Automated Software Engineering Fellow. Contact him at bashar.nuseibeh@lero.ie. José Luiz Fiadeiro is a professor of software science and engineering in the Department of Computer Science at the University of Leicester. His research interests lie in the mathematical foundations of software system modeling, including software architecture, coordination models and languages, parallel and distributed system design, and service-oriented computing. Fiadeiro received a PhD in mathematics from the Technical University of Lisbon. He is a fellow of the BCS. Contact him at jose@mcs.le.ac.uk. Dieter Lienert and Stefan Kriso describe the emerging functional safety standard for electrical and electronic automotive systems (ISO 26262) and discuss the challenges in assessing criticality in automotive systems. Boutheina Chetali points out that the smart-card industry faces a significant challenge in managing both certification and evolution requirements. Finally, Franck Noël and Declan Deasy discuss how European integration within the EU has, in many cases, been enabled and accelerated by the development and evolution of their technical infrastructure. 
Acknowledgments This special issue resulted from a workshop held in Schloss Dagstuhl, Germany, in December 2009. The gathering brought together key software engineering researchers and practitioners in positions to influence their organizations’ research direction and discuss the emerging theme of ECS. This work was supported in part by Science Foundation Ireland grant 03/CE2/I303_1 to Lero—the Irish Software Engineering Research Centre (www.lero.ie). We thank Schloss Dagstuhl for hosting the perspectives workshop, and all the workshop participants whose discussions led to this special issue. Lorcan Coyle is a research fellow at Lero in the University of Limerick. His research interests include autonomic computing, context awareness, pervasive computing, and machine learning. Coyle received a PhD in computer science from Trinity College Dublin. He is a member of the Institution of Engineers of Ireland. Contact him at lorcan.coyle@lero.ie.
CS 267 Unified Parallel C (UPC) Kathy Yelick http://upc.lbl.gov Slides adapted from some by Tarek El-Ghazawi (GWU) UPC Outline 1. Background 2. UPC Execution Model 3. Basic Memory Model: Shared vs. Private Scalars 4. Synchronization 5. Collectives 6. Data and Pointers 7. Dynamic Memory Management 8. Programming Examples 9. Performance Tuning and Early Results 10. Concluding Remarks Context • Most parallel programs are written using either: • Message passing with a SPMD model • Usually for scientific applications with C++/Fortran • Scales easily • Shared memory with threads in OpenMP, Threads+C/C++/F or Java • Usually for non-scientific applications • Easier to program, but less scalable performance • Global Address Space (GAS) Languages take the best of both • global address space like threads (programmability) • SPMD parallelism like MPI (performance) • local/global distinction, i.e., layout matters (performance) **Partitioned Global Address Space Languages** - Explicitly-parallel programming model with SPMD parallelism - Fixed at program start-up, typically 1 thread per processor - Global address space model of memory - Allows programmer to directly represent distributed data structures - Address space is logically partitioned - Local vs.
remote memory (two-level hierarchy) - Programmer control over performance critical decisions - Data layout and communication - Performance transparency and tunability are goals - Initial implementation can use fine-grained shared memory - Multiple PGAS languages: UPC (C), CAF (Fortran), Titanium (Java) Global Address Space Eases Programming - The languages share the global address space abstraction - Shared memory is logically partitioned by processors - Remote memory may stay remote: no automatic caching implied - One-sided communication: reads/writes of shared variables - Both individual and bulk memory copies - Languages differ on details - Some models have a separate private memory area - Distributed array generality and how they are constructed Current Implementations of PGAS Languages - A successful language/library must run everywhere - UPC - Commercial compilers available on Cray, SGI, HP machines - Open source compiler from LBNL/UCB (source-to-source) - Open source gcc-based compiler from Intrepid - CAF - Commercial compiler available on Cray machines - Open source compiler available from Rice - Titanium - Open source compiler from UCB runs on most machines - Common tools - Open64 open source research compiler infrastructure - ARMCI, GASNet for distributed memory implementations - Pthreads, System V shared memory UPC Overview and Design Philosophy • Unified Parallel C (UPC) is: • An explicit parallel extension of ANSI C • A partitioned global address space language • Sometimes called a GAS language • Similar to the C language philosophy • Programmers are clever and careful, and may need to get close to hardware • to get performance, but • can get in trouble • Concise and efficient syntax • Common and familiar syntax and semantics for parallel C with simple extensions to ANSI C • Based on ideas in Split-C, AC, and PCP UPC Execution Model UPC Execution Model - A number of threads working independently in a SPMD fashion - Number of threads specified at 
compile-time or run-time; available as program variable `THREADS` - `MYTHREAD` specifies thread index (0..THREADS-1) - `upc_barrier` is a global synchronization: all wait - There is a form of parallel loop that we will see later - There are two compilation modes - Static Threads mode: - `THREADS` is specified at compile time by the user - The program may use `THREADS` as a compile-time constant - Dynamic threads mode: - Compiled code may be run with varying numbers of threads Hello World in UPC • Any legal C program is also a legal UPC program • If you compile and run it as UPC with P threads, it will run P copies of the program. • Using this fact, plus the identifiers from the previous slides, we can parallelize hello world: ```c #include <upc.h> /* needed for UPC extensions */ #include <stdio.h> main() { printf("Thread %d of %d: hello UPC world\n", MYTHREAD, THREADS); } ``` Example: Monte Carlo Pi Calculation • Estimate Pi by throwing darts at a unit square • Calculate percentage that fall in the unit circle • Area of square = \( r^2 = 1 \) • Area of circle quadrant = \( \frac{1}{4} \pi r^2 = \pi/4 \) • Randomly throw darts at \( x,y \) positions • If \( x^2 + y^2 < 1 \), then point is inside circle • Compute ratio: • # points inside / # points total • \( \pi = 4 \times \text{ratio} \) Pi in UPC • Independent estimates of pi: ```c main(int argc, char **argv) { int i, hits = 0, trials = 0; double pi; if (argc != 2) trials = 1000000; else trials = atoi(argv[1]); srand(MYTHREAD*17); for (i=0; i < trials; i++) hits += hit(); pi = 4.0*hits/trials; printf("PI estimated to %f.\n", pi); } ``` Each thread gets its own copy of these variables Each thread can use input arguments Initialize random in math library Each thread calls “hit” separately Helper Code for Pi in UPC • Required includes: ``` #include <stdio.h> #include <math.h> #include <upc.h> ``` • Function to throw dart and calculate where it hits: ``` int hit() { int const
rand_max = 0xFFFFFFFF; double x = ((double) rand()) / RAND_MAX; double y = ((double) rand()) / RAND_MAX; if ((x*x + y*y) <= 1.0) { return(1); } else { return(0); } } ``` Shared vs. Private Variables Private vs. Shared Variables in UPC - Normal C variables and objects are allocated in the private memory space for each thread. - Shared variables are allocated only once, with thread 0 ```cpp shared int ours; // use sparingly: performance int mine; ``` - Shared variables may not have dynamic lifetime: may not occur in a function definition, except as static. Why? **Pi in UPC: Shared Memory Style** - Parallel computing of pi, but with a bug ```c shared int hits; main(int argc, char **argv) { int i, my_trials = 0; int trials = atoi(argv[1]); my_trials = (trials + THREADS - 1)/THREADS; srand(MYTHREAD*17); for (i=0; i < my_trials; i++) hits += hit(); upc_barrier; if (MYTHREAD == 0) { printf("PI estimated to %f.", 4.0*hits/trials); } } ``` shared variable to record hits divide work up evenly accumulate hits **What is the problem with this program?** Shared Arrays Are Cyclic By Default • Shared scalars always live in thread 0 • Shared arrays are spread over the threads • Shared array elements are spread across the threads shared int x[THREADS] /* 1 element per thread */ shared int y[3][THREADS] /* 3 elements per thread */ shared int z[3][3] /* 2 or 3 elements per thread */ • In the pictures below, assume THREADS = 4 • Red elts have affinity to thread 0 Think of linearized C array, then map in round-robin As a 2D array, y is logically blocked by columns z is not Pi in UPC: Shared Array Version • Alternative fix to the race condition • Have each thread update a separate counter: • But do it in a shared array • Have one thread compute sum ```c shared int all_hits [THREADS]; main(int argc, char **argv) { ... 
declarations and initialization code omitted for (i=0; i < my_trials; i++) all_hits[MYTHREAD] += hit(); upc_barrier; if (MYTHREAD == 0) { for (i=0; i < THREADS; i++) hits += all_hits[i]; printf("PI estimated to %f.", 4.0*hits/trials); } } ``` all_hits is shared by all processors, just as hits was update element with local affinity UPC Synchronization UPC Global Synchronization - UPC has two basic forms of barriers: - **Barrier**: block until all other threads arrive ```c upc_barrier ``` - **Split-phase barriers** ```c upc_notify; // this thread is ready for barrier do computation unrelated to barrier upc_wait; // wait for others to be ready ``` - Optional labels allow for debugging ```c #define MERGE_BARRIER 12 if (MYTHREAD%2 == 0) { ... upc_barrier MERGE_BARRIER; } else { ... upc_barrier MERGE_BARRIER; } ``` Synchronization - Locks - Locks in UPC are represented by an opaque type: ```c upc_lock_t ``` - Locks must be allocated before use: ```c upc_lock_t *upc_all_lock_alloc(void); ``` allocates 1 lock, pointer to all threads ```c upc_lock_t *upc_global_lock_alloc(void); ``` allocates 1 lock, pointer to one thread - To use a lock: ```c void upc_lock(upc_lock_t *l) void upc_unlock(upc_lock_t *l) ``` use at start and end of critical region - Locks can be freed when not in use ```c void upc_lock_free(upc_lock_t *ptr); ``` Pi in UPC: Shared Memory Style • Parallel computing of pi, without the bug ```c shared int hits; main(int argc, char **argv) { int i, my_hits = 0, my_trials = 0; upc_lock_t *hit_lock = upc_all_lock_alloc(); int trials = atoi(argv[1]); my_trials = (trials + THREADS - 1)/THREADS; srand(MYTHREAD*17); for (i=0; i < my_trials; i++) my_hits += hit(); upc_lock(hit_lock); hits += my_hits; upc_unlock(hit_lock); upc_barrier; if (MYTHREAD == 0) printf("PI: %f", 4.0*hits/trials); } ``` UPC Collectives UPC Collectives in General - The UPC collectives interface is available from: - http://www.gwu.edu/~upc/docs/ - It contains typical functions: - Data movement: broadcast,
    scatter, gather, …
  - Computational: reduce, prefix, …
- Interface has synchronization modes:
  - Avoid over-synchronizing (barrier before/after is simplest semantics, but may be unnecessary)
  - Data being collected may be read/written by any thread simultaneously

Pi in UPC: Data Parallel Style
• The previous version of Pi works, but is not scalable:
• On a large # of threads, the locked region will be a bottleneck
• Use a reduction for better scalability
```c
#include <bupc_collectivev.h>
// shared int hits;  -- no longer needed
main(int argc, char **argv) {
    ...
    for (i=0; i < my_trials; i++)
        my_hits += hit();
    my_hits = bupc_allv_reduce(int, my_hits, 0, UPC_ADD);
    // upc_barrier;  -- implied by the collective
    if (MYTHREAD == 0)
        printf("PI: %f", 4.0*my_hits/trials);
}
```
These are Berkeley-specific collectives; no shared variables are needed, and a barrier is implied by the collective.

Recap: Private vs. Shared Variables in UPC
- We saw several kinds of variables in the pi example
  - Private scalars (`my_hits`)
  - Shared scalars (`hits`)
  - Shared arrays (`all_hits`)
  - Shared locks (`hit_lock`)
![Diagram showing global address space with private and shared variables for different threads]

Work Distribution Using upc_forall

Example: Vector Addition
- Questions about parallel vector addition:
  - How to lay out the data (here it is cyclic)
  - Which processor does what (here it is "owner computes")
```c
/* vadd.c */
#include <upc_relaxed.h>
#define N 100*THREADS
shared int v1[N], v2[N], sum[N];
void main() {
    int i;
    for(i=0; i<N; i++)
        if (MYTHREAD == i%THREADS)
            sum[i]=v1[i]+v2[i];
}
```
This uses a cyclic layout, and each owner computes its own elements.

Work Sharing with upc_forall()
- The idiom in the previous slide is very common
  - Loop over all; work on those owned by this proc
- UPC adds a special type of loop
```
upc_forall(init; test; loop; affinity)
    statement;
```
- Programmer indicates the iterations are independent
  - Undefined if there are dependencies across threads
- Affinity expression indicates which iterations to run on each thread.
  It may have one of two types:
  - Integer: `affinity%THREADS` is MYTHREAD
  - Pointer: `upc_threadof(affinity)` is MYTHREAD
- Syntactic sugar for the loop on the previous slide
  - Some compilers may do better than this, e.g.,
```
for(i=MYTHREAD; i<N; i+=THREADS)
```
  - Rather than having all threads iterate N times:
```
for(i=0; i<N; i++)
    if (MYTHREAD == i%THREADS)
```

Vector Addition with upc_forall
- The `vadd` example can be rewritten as follows
- Equivalent code could use "&sum[i]" for affinity
- The code would be correct but slow if the affinity expression were `i+1` rather than `i`.
```c
#define N 100*THREADS
shared int v1[N], v2[N], sum[N];
void main() {
    int i;
    upc_forall(i=0; i<N; i++; i)
        sum[i]=v1[i]+v2[i];
}
```
The cyclic data distribution may perform poorly on some machines.

Distributed Arrays in UPC

Blocked Layouts in UPC
- The cyclic layout is typically stored in one of two ways
  - Distributed memory: each processor has a chunk of memory
    - Thread 0 would have: 0, THREADS, THREADS*2, … in a chunk
  - Shared memory machine: each thread has a logical chunk
    - Shared memory would have: 0, 1, 2, … THREADS, THREADS+1, …
- What performance problem is there with the latter?
  - What if this code were instead doing nearest-neighbor averaging?
- The vector addition example can be rewritten as follows with a blocked layout
```c
#define N 100*THREADS
shared int [*] v1[N], v2[N], sum[N];
void main() {
    int i;
    upc_forall(i=0; i<N; i++; &sum[i])
        sum[i]=v1[i]+v2[i];
}
```

Layouts in General
• All non-array objects have affinity with thread zero.
• Array layouts are controlled by layout specifiers:
  • Empty (cyclic layout)
  • [*] (blocked layout)
  • [0] or [] (indefinite layout, all on 1 thread)
  • [b] or [b1][b2]…[bn] = [b1*b2*…bn] (fixed block size)
• The affinity of an array element is defined in terms of:
  • block size, a compile-time constant,
  • and THREADS.
• Element i has affinity with thread \((i \div \text{block\_size}) \bmod \text{THREADS}\)
• In 2D and higher, linearize the elements as in a C representation, and then use the above mapping

2D Array Layouts in UPC
• Array a1 has a row layout and array a2 has a block row layout.
  shared [m] int a1 [n][m];
  shared [k*m] int a2 [n][m];
• If \((k + m) \,\%\, \text{THREADS} == 0\) then a3 has a row layout
  shared int a3 [n][m+k];
• To get more general HPF and ScaLAPACK style 2D blocked layouts, one needs to add dimensions.
• Assume \(r \cdot c = \text{THREADS}\):
  shared [b1][b2] int a5 [m][n][r][c][b1][b2];
• or equivalently
  shared [b1*b2] int a5 [m][n][r][c][b1][b2];

UPC Matrix Vector Multiplication Code
- Matrix-vector multiplication with the matrix stored by rows
- (Contrived example: problem size is P×P)
```c
shared [THREADS] int a[THREADS][THREADS];
shared int b[THREADS], c[THREADS];
void main (void) {
    int i, j, l;
    upc_forall( i = 0 ; i < THREADS ; i++; i) {
        c[i] = 0;
        for ( l = 0 ; l < THREADS ; l++)
            c[i] += a[i][l]*b[l];
    }
}
```

```c
/* mat_mult_1.c */
#include <upc_relaxed.h>
#define N 4
#define P 4
#define M 4
shared [N*P/THREADS] int a[N][P], c[N][M];
// a and c are row-wise blocked shared matrices
shared [M/THREADS] int b[P][M]; // column-wise blocking
void main (void) {
    int i, j, l; // private variables
    upc_forall(i = 0 ; i < N ; i++; &c[i][0]) {
        for (j = 0 ; j < M ; j++) {
            c[i][j] = 0;
            for (l = 0 ; l < P ; l++)
                c[i][j] += a[i][l]*b[l][j];
        }
    }
}
```

Notes on the Matrix Multiplication Example
• The UPC code for the matrix multiplication is almost the same size as the sequential code
• Shared variable declarations include the keyword `shared`
• Making a private copy of matrix B in each thread might result in better performance since many remote memory operations can be avoided
• Can be done with the help of `upc_memget`

Domain Decomposition for UPC
- Exploits locality in matrix multiplication
- \( A (N \times P) \) is decomposed row-wise into blocks of size \((N \times P) / \text{THREADS}\) as shown
below:
- \( B (P \times M) \) is decomposed column-wise into \( M / \text{THREADS}\) blocks as shown below:
- **Note:** \( N \) and \( M \) are assumed to be multiples of \( \text{THREADS} \)

\[
\begin{align*}
\text{Thread 0} & : 0 \,..\, (N \cdot P / \text{THREADS}) - 1 \\
\text{Thread 1} & : (N \cdot P / \text{THREADS}) \,..\, (2 \cdot N \cdot P / \text{THREADS}) - 1 \\
\text{Thread THREADS-1} & : ((\text{THREADS}-1) \cdot N \cdot P / \text{THREADS}) \,..\, (\text{THREADS} \cdot N \cdot P / \text{THREADS}) - 1
\end{align*}
\]

Pointers to Shared vs. Arrays
- In the C tradition, arrays can be accessed through pointers.
- Here is the vector addition example using pointers.
```c
#define N 100*THREADS
shared int v1[N], v2[N], sum[N];
void main() {
    int i;
    shared int *p1, *p2;
    p1=v1; p2=v2;
    for (i=0; i<N; i++, p1++, p2++)
        if (i % THREADS == MYTHREAD)
            sum[i] = *p1 + *p2;
}
```

UPC Pointers
Where does the pointer point?

<table>
<thead>
<tr>
<th>Where does the pointer reside?</th>
<th>Local</th>
<th>Shared</th>
</tr>
</thead>
<tbody>
<tr>
<td>Private</td>
<td>PP (p1)</td>
<td>PS (p2)</td>
</tr>
<tr>
<td>Shared</td>
<td>SP (p3)</td>
<td>SS (p4)</td>
</tr>
</tbody>
</table>

```c
int *p1;               /* private pointer to local memory */
shared int *p2;        /* private pointer to shared space */
int *shared p3;        /* shared pointer to local memory */
shared int *shared p4; /* shared pointer to shared space */
```
Shared pointers to local memory (p3) are not recommended.

Pointers to shared often require more storage and are more costly to dereference; they may refer to local or remote memory.
**Common Uses for UPC Pointer Types**
- **int *p1;**
  - These pointers are fast (just like C pointers)
  - Use to access local data in part of the code performing local work
  - Often cast a pointer-to-shared to one of these to get faster access to shared data that is local
- **shared int *p2;**
  - Use to refer to remote data
  - Larger and slower due to test-for-local + possible communication
- **int *shared p3;**
  - Not recommended
- **shared int *shared p4;**
  - Use to build shared linked structures, e.g., a linked list

UPC Pointers
- In UPC, pointers to shared objects have three fields:
  - thread number
  - local address of block
  - phase (specifies position in the block)
- Example: the Cray T3E implementation packs the fields into 64 bits:

<table>
<thead>
<tr>
<th>Phase (bits 63–49)</th>
<th>Thread (bits 48–38)</th>
<th>Virtual Address (bits 37–0)</th>
</tr>
</thead>
</table>

UPC Pointers
- Pointer arithmetic supports blocked and non-blocked array distributions
- Casting of shared to private pointers is allowed, but not vice versa!
- When casting a pointer-to-shared to a pointer-to-local, the thread number of the pointer-to-shared may be lost
- Casting of shared to local is well defined only if the object pointed to by the pointer-to-shared has affinity with the thread performing the cast

Special Functions
- `size_t upc_threadof(shared void *ptr);` returns the thread number that has affinity to the pointer-to-shared
- `size_t upc_phaseof(shared void *ptr);` returns the index (position within the block) field of the pointer-to-shared
- `shared void *upc_resetphase(shared void *ptr);` resets the phase to zero

Dynamic Memory Allocation in UPC
- Dynamic allocation of shared memory is available in UPC
- Functions can be collective or not
- A collective function has to be called by every thread and will return the same value to all of them

Global Memory Allocation
```c
shared void *upc_global_alloc(size_t nblocks, size_t nbytes);
```
- nblocks: number of blocks
- nbytes: block size
- Non-collective: called by one thread
- The calling thread allocates a contiguous memory region in the shared space
- If called by more than one thread, multiple regions are allocated, and each calling thread gets a different pointer
- Space allocated per calling thread is equivalent to:
  shared [nbytes] char[nblocks * nbytes]

Collective Global Memory Allocation
```c
shared void *upc_all_alloc(size_t nblocks, size_t nbytes);
```
- nblocks: number of blocks
- nbytes: block size
• This function has the same result as upc_global_alloc.
  But this is a collective function, which is expected to be called by all threads
• All the threads will get the same pointer
• Equivalent to:
  shared [nbytes] char[nblocks * nbytes]

Memory Freeing
```c
void upc_free(shared void *ptr);
```
• The upc_free function frees the dynamically allocated shared memory pointed to by ptr
• upc_free is not collective

Distributed Arrays Directory Style
• Some high-performance UPC programmers avoid the UPC-style arrays
• Instead, build directories of distributed objects
• Also more general
```c
typedef shared [] double *sdblptr;
shared sdblptr directory[THREADS];
directory[MYTHREAD]=upc_alloc(local_size*sizeof(double));
upc_barrier;
```

Memory Consistency in UPC
• The consistency model defines the order in which one thread may see another thread's accesses to memory
• If you write a program with unsynchronized accesses, what happens?
• Does this work?
```c
/* Thread 1 */          /* Thread 2 */
data = ...;             while (!flag) { }
flag = 1;               ... = data; // use the data
```
• UPC has two types of accesses:
  • Strict: will always appear in order
  • Relaxed: may appear out of order to other threads
• There are several ways of designating the type, commonly:
  • Use the include file:
    #include <upc_relaxed.h>
    which makes all accesses in the file relaxed by default
  • Use strict on variables that are used for synchronization (flag)

Synchronization - Fence
- UPC provides a fence construct
- Equivalent to a null strict reference, with the syntax
  upc_fence;
- UPC ensures that all shared references issued before the upc_fence are complete

PGAS Languages have Performance Advantages
Strategy for acceptance of a new language
- Make it run faster than anything else

Keys to high performance
- Parallelism:
  - Scaling the number of processors
- Maximize single-node performance
  - Generate friendly code or use tuned libraries (BLAS, FFTW, etc.)
- Avoid (unnecessary) communication cost
  - Latency, bandwidth, overhead
  - Berkeley UPC and Titanium use the GASNet communication layer
- Avoid unnecessary delays due to dependencies
  - Load balance; pipeline algorithmic dependencies

One-Sided vs Two-Sided
A one-sided put/get message can be handled directly by a network interface with RDMA support:
- Avoids interrupting the CPU or storing data from the CPU (preposts)
A two-sided message needs to be matched with a receive to identify the memory address to put the data:
- Offloaded to the network interface in networks like Quadrics
- Need to download match tables to the interface (from the host)
- Ordering requirements on messages can also hinder bandwidth

Performance Advantage of One-Sided Communication
• Opteron/InfiniBand (Jacquard at NERSC):
• GASNet's vapi-conduit and OSU MPI 0.9.5 MVAPICH
• This is a very good MPI implementation – it's limited by semantics of message matching, ordering, etc.
• Half power point \( (N_{1/2}) \) differs by *one order of magnitude*
Joint work with Paul Hargrove and Dan Bonachea

GASNet: Portability and High-Performance

<table>
<thead>
<tr>
<th>System</th>
<th>MPI ping-pong</th>
<th>GASNet put+sync</th>
</tr>
</thead>
<tbody>
<tr>
<td>Elan3/Alpha</td>
<td>14.6</td>
<td>6.6</td>
</tr>
<tr>
<td>Elan4/IA64</td>
<td>6.6</td>
<td>4.5</td>
</tr>
<tr>
<td>Myrinet/x86</td>
<td>24.2</td>
<td>17.8</td>
</tr>
<tr>
<td>IB/G5</td>
<td>22.1</td>
<td>13.5</td>
</tr>
<tr>
<td>IB/Opteron</td>
<td>9.6</td>
<td>8.3</td>
</tr>
<tr>
<td>SP/Fed</td>
<td>18.5</td>
<td>9.5</td>
</tr>
</tbody>
</table>

GASNet is better for latency across machines

GASNet: Portability and High-Performance
GASNet is at least as high (comparable) for large messages
[Chart: flood bandwidth for 2 MB messages, up is good: Elan3/Alpha 244; Elan4/IA64 857, 255; Myrinet/x86 858, 225, 228; IB/G5 795; IB/Opteron 799, 630; SP/Fed 1504, 1490]

GASNet: Portability and High-Performance
GASNet excels at mid-range sizes: important for overlap
Joint work with UPC Group; GASNet
design by Dan Bonachea

Case Study 2: NAS FT
- Performance of Exchange (Alltoall) is critical
  - 1D FFTs in each dimension, 3 phases
  - Transpose after first 2 for locality
  - Bisection bandwidth-limited
    - Problem as # procs grows
- Three approaches:
  - **Exchange:**
    - wait for 2nd-dim FFTs to finish, send 1 message per processor pair
  - **Slab:**
    - wait for chunk of rows destined for 1 proc, send when ready
  - **Pencil:**
    - send each row as it completes
Joint work with Chris Bell, Rajesh Nishtala, Dan Bonachea

Overlapping Communication
- Goal: make use of "all the wires all the time"
  - Schedule communication to avoid network backup
- Trade-off: overhead vs. overlap
  - Exchange has fewest messages, less message overhead
  - Slabs and pencils have more overlap; pencils the most
- Example: Class D problem on 256 processors

<table>
<thead>
<tr>
<th>Method</th>
<th>Message size</th>
</tr>
</thead>
<tbody>
<tr>
<td>Exchange (all data at once)</td>
<td>512 Kbytes</td>
</tr>
<tr>
<td>Slabs (contiguous rows that go to 1 processor)</td>
<td>64 Kbytes</td>
</tr>
<tr>
<td>Pencils (single row)</td>
<td>16 Kbytes</td>
</tr>
</tbody>
</table>

NAS FT Variants Performance Summary
- Slab is always best for MPI; small-message cost too high
- Pencil is always best for UPC; more overlap
[Chart: MFlops per thread for best NAS Fortran/MPI, best MPI (always Slabs), and best UPC (always Pencils)]
Joint work with Chris Bell, Rajesh Nishtala, Dan Bonachea

Case Study 2: LU Factorization
- Direct methods have complicated dependencies
  - Especially with pivoting (unpredictable communication)
  - Especially for sparse matrices (dependence graph with holes)
- LU Factorization in UPC
  - Use overlap ideas and multithreading to mask latency
  - Multithreaded: UPC threads + user threads + threaded BLAS
    - Panel factorization: including pivoting
    - Update to a block of U
    - Trailing submatrix updates
- Status:
  - Dense LU done: HPL-compliant
  - Sparse version underway

UPC HPL Performance
- Comparison to ScaLAPACK on an Altix, a 2 x 4 process
grid
  - ScaLAPACK (block size 64): 25.25 GFlop/s (tried several block sizes)
  - UPC LU (block size 256): 33.60 GFlop/s; (block size 64): 26.47 GFlop/s
- n = 32000 on a 4 x 4 process grid
  - ScaLAPACK: 43.34 GFlop/s (block size = 64)
  - UPC: 70.26 GFlop/s (block size = 200)
- MPI HPL numbers from the HPCC database
- Large scaling:
  - 2.2 TFlops on 512p
  - 4.4 TFlops on 1024p (Thunder)
Joint work with Parry Husbands

Summary
• UPC is designed to be consistent with C
• Some low-level details, such as memory layout, are exposed
• Ability to use pointers and arrays interchangeably
• Designed for high performance
• Memory consistency explicit
• Small implementation
• Berkeley compiler (used for next homework)
  http://upc.lbl.gov
• Language specification and other documents
  http://upc.gwu.edu
Motivating Examples

Here are three standard algorithms — two searches and one sort — which should be analyzed to determine their computational efficiency.

**Sequential Search:**

```c++
// Sequentially search an array of n elements to determine if a given
// value is there. If so, set loc to be the first array location
// containing it and return true. Otherwise, return false.
bool SeqSearch(float arr[], int n, float value, int & loc) {
    loc = 0;
    while (loc < n && arr[loc] != value) {
        ++loc;
    }
    return loc < n;
}
```

**Insertion Sort:**

```c++
// Sort an array of n elements using insertion sort.
void InsertSort(float arr[], int n) {
    for (int i = 1; i < n; i++) {
        float temp = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > temp) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = temp;
    }
}
```

**Binary Search:**

```c++
// Use binary search to determine if a given value is somewhere in an
// ordered array of n elements. If so, set loc to be the first array
// location containing it and return true. Otherwise, set loc to be the
// array location where it should be inserted and return false.
//
// The array ordering is assumed to be what's called "non-decreasing"
// order, which means that
//     arr[0] <= arr[1] <= ... <= arr[n-1]
// or, more precisely,
//     for 0 <= i < n-1, arr[i] <= arr[i+1]
bool BinSearch(float arr[], int n, float value, int & loc) {
    int low = 0, high = n - 1, mid;
    // Before each iteration of the loop, the following conditions hold:
    //     0 <= low <= high < n,
    //     for each j, 0 <= j < low, arr[j] < value
    //     for each j, high < j < n, value <= arr[j]
    while (low < high) {
        mid = (low + high) / 2;
        if (value <= arr[mid]) high = mid;
        else low = mid + 1;
    }
    loc = low;
    if (arr[loc] == value) return true;
    else {
        if (loc == n - 1 && arr[n - 1] < value) loc = n;
        return false;
    }
}
```

**Exercise**

How many operations, as a function of the array size n, are required by SeqSearch? If you finish this, try to answer the same question for InsertSort. What issues arose in your discussion?
Algorithm Analysis Rules

- The goal is to determine the worst-case or average-case time required by an algorithm, generally as a function of the "size" of the data. (Sometimes even the "best case" is considered!)
- Assumptions: sequential execution, simple statements cost 1 unit of time, infinite memory, integers and reals represented in a fixed amount of memory.
- Generally, statements are counted to form a function $f(n)$, where $n$ is the size of the data. Sometimes only special operations such as comparisons or exchanges are counted.
- We will discuss in class rules for counting when algorithms include:
  - Consecutive statements.
  - If-then-else.
  - Loops and nested loops.

Order Notation

Order notation is a mathematical formalism used to summarize the computation time required by an algorithm, simplifying the function derived to count the number of operations. On the positive side, this avoids quibbling over the number of operations (and cost) involved in simple algorithmic steps. On the negative side, this does result in some loss of precision.

- $T(n) = O(f(n))$ if there are constants $c$ and $n_0$ such that $T(n) \leq cf(n)$ for all $n \geq n_0$.
- $T(n) = \Theta(f(n))$ if and only if $T(n) = O(f(n))$ and $f(n) = O(T(n))$.
- Limits may be used to simplify this. Suppose
  \[ \lim_{n \to \infty} \frac{f(n)}{g(n)} = L. \]
  Then
  - If $L = 0$, $f(n) = O(g(n))$.
  - If $0 < L < \infty$, $f(n) = \Theta(g(n))$.

  If $L$ doesn't exist, nothing can be concluded. L'Hopital's rule may be used to analyze the limit. This requires converting $f$ and $g$ from functions of integers to functions of real numbers, which is usually trivial.

Order Notation — Rules for Manipulation

- If $T_1(n) = O(f(n))$ and $T_2(n) = O(g(n))$, then
  - $T_1(n) + T_2(n) = O(\max(f(n), g(n)))$, and
  - $T_1(n) \cdot T_2(n) = O(f(n) \cdot g(n))$.

  The same rules hold when $\Theta$ is used throughout.
- If $T(n)$ is a polynomial of degree $k$ then $T(n) = O(n^k)$ (in fact, $T(n) = \Theta(n^k)$).
- $(\log n)^k = O(n)$ for any constant $k > 0$, but $(\log n)^k \neq \Theta(n)$. More generally, if $a > 0$ is any fixed constant, then $(\log n)^k = O(n^a)$, but $(\log n)^k \neq \Theta(n^a)$.
- "$O$" estimates for summations are done in two ways.
  - Evaluate the summation using techniques from Chapter 1 and then determine an "$O$" (or "$\Theta$") estimate from the resulting function.
  - Place upper bounds on terms in the summation to simplify it and eliminate the summation.

Order Notation Exercises

1. Show that $5n^2 + 6n = O(n^2)$ using the original definition of "$O$" and then using limits.

2. For each pair of functions, $T(n)$ and $f(n)$, determine which of the following hold: $T(n) = O(f(n))$, $T(n) = \Theta(f(n))$. Justify your answer. (Assume $k$, $a$ and $b$ are unspecified constants greater than 1 and $a > b$.)

   (a) $T(n) = n^2 \log n + 5n$, $f(n) = n^3$
   (b) $T(n) = \log(n^k)$, $f(n) = (\log n)^k$
   (c) $T(n) = \log_a n$, $f(n) = \log_b n$
   (d) $T(n) = 2^n$, $f(n) = 2^{2n}$

3. Give the best possible $O$ estimate for $T(n)$.

   (a) $T(n) = (n^3 + 10n^2) \cdot (n^3 \log n + 20n^4)$

   Answer:
   \[ T(n) = (n^3 + 10n^2) \cdot (n^3 \log n + 20n^4) \]
   \[ = n^6 \log n + 10n^5 \log n + 20n^7 + 200n^6 \]
   \[ = O(n^7) \]

   (b) \( T(n) = n3^n + n^{10} + 1500n^3 \log n \)

   Answer:
   \[ T(n) = n3^n + n^{10} + 1500n^3 \log n = O(n3^n) \]

   (c) \( T(n) = \sum_{i=1}^{n} 5i(i - 1) \)

   Answer:
   \[ T(n) = \sum_{i=1}^{n} 5i(i - 1) \]
   \[ = 5 \sum_{i=1}^{n} i^2 - 5 \sum_{i=1}^{n} i \]
   \[ = \frac{5}{6}n(n + 1)(2n + 1) - \frac{5}{2}n(n + 1) \]
   \[ = \frac{5}{3}n^3 + \text{lower order terms} \]
   \[ = O(n^3) \]

   Or, even more simply:
   \[ T(n) = \sum_{i=1}^{n} 5i(i - 1) = \sum_{i=1}^{n} O(i^2) = O(n^3) \]

4. Derive an "$O$" estimate for the worst case of InsertSort based on the function we derived in class.

   Answer: We actually got a variety of answers for the number of operations, depending on what we counted.
   If we count the two simple statements in the body of the outer loop and the two simple statements in the body of the inner loop, and assume the inner loop makes the maximum number of iterations, we get
   \[ T(n) = \sum_{i=1}^{n-1} \left( 2 + \sum_{j=0}^{i-1} 2 \right) \]
   Try evaluating this yourself.

   First evaluating the inner sum and then the outer one,
   \[ T(n) = \sum_{i=1}^{n-1} (2 + 2i) = 2(n - 1) + (n - 1)n = n^2 + n - 2 \]
   What's this in O notation?
   \[ T(n) = O(n^2) \]
   We could count other operations, such as incrementing i and j, and we would get different constants in the exact formula for \(T(n)\), but constants don't matter in the \(O\) notation.

Algorithm Analysis Exercises

1. Count the number of operations in each of the following two code fragments as a function of \(n\), the length of the array. Each should yield a summation. Then, analyze each summation to give the best possible "\(O\)" estimate for each fragment.

   (a)
```c++
// assume arr is an array containing n integers
int k = 5;
for (int i=0; i<=n-k; i++) {
    sum = 0;
    for (int j=i; j<i+k; j++) {
        sum += arr[j];
    }
    cout << "Sum of elements " << i << " through " << i+k-1
         << " is " << sum << "\n";
}
```

   Answer: Counting each assignment and output statement as one operation, we have
   \[ T(n) = 1 + \sum_{i=0}^{n-5} \left( 2 + \sum_{j=i}^{i+4} 1 \right) \]
   Try evaluating this yourself.
   \[ T(n) = 1 + \sum_{i=0}^{n-5} (2 + 5) = 1 + 7(n - 4) = 7n - 27 = O(n) \]

   Note: Often when there are nested loops, the number of operations is quadratic. In this case, though, the inner loop only iterates a constant number of times (5), so the total time is linear, not quadratic.
   (b)
```c++
// assume arr is an array containing n integers
int k = n/2;
for (int i=0; i<=n-k; i++) {
    sum = 0;
    for (int j=i; j<i+k; j++) {
        sum += arr[j];
    }
    cout << "Sum of elements " << i << " through " << i+k-1
         << " is " << sum << "\n";
}
```

   Answer: Again counting each assignment and output statement as one operation, we have
   \[ T(n) = 1 + \sum_{i=1}^{\lfloor n/2 \rfloor} \left( 2 + \sum_{j=i}^{i+n/2-1} 1 \right) \]
   \[ = 1 + \sum_{i=1}^{\lfloor n/2 \rfloor} (2 + n/2) \]
   \[ = 1 + 2 \sum_{i=1}^{\lfloor n/2 \rfloor} 1 + \frac{n}{2} \sum_{i=1}^{\lfloor n/2 \rfloor} 1 \]
   \[ = 1 + n + \frac{n^2}{4} \]
   \[ = O(n^2) \]

2. Rewrite the second code fragment to make it as efficient as possible. Start by thinking carefully about what it actually does! What is the complexity of your new code fragment? Hint: It computes a series of sums, but they are closely related to each other, so it's not necessary to compute each sum from scratch.

3. In the analysis of InsertSort we assumed that the worst case would occur at all times. What must be the state of the array for the absolute maximum number of operations to occur? Repeat the analysis of InsertSort to derive average-case and best-case estimates. What state of the array causes the best case to occur?

More advanced analysis:

- In analyzing recursive algorithms, usually a recursive equation (also called a "recurrence relation") is derived modeling the number of steps required, which is then solved to yield a non-recursive formula. We will examine this using the factorial function and then, later, a solution to the max subsequence sum problem.
- Logarithmic times in analysis usually arise from algorithms, such as Binary Search and Merge Sort, that break an array or data set in half and then consider one or both halves separately.

Max Subsequence Sum

- The simplest solution: algorithm 1 and its analysis.
- An easy refinement: algorithm 2 and its analysis.
- Divide-and-conquer: algorithm 3 and its analysis.
- A simple, fast solution: algorithm 4 and its analysis.
- We will confirm the analysis results experimentally.

Exercises:

1. Derive a recursive equation to analyze MergeSort and then solve this equation. Assume the array is of size \( n = 2^k \) for integer \( k \geq 0 \). For completeness, here is the algorithm (combining material from Ch 1 and the Ch 1 review):

```c++
template <class T>
void MergeSort(T * pts, int n) {
  MergeSort(pts, 0, n-1);
}

template <class T>
void MergeSort(T * pts, int low, int high) {
  if (low >= high) return;
  int mid = (low + high) / 2;
  MergeSort(pts, low, mid);
  MergeSort(pts, mid+1, high);

  // At this point the lower and upper halves
  // of "pts" are sorted. All that remains is
  // to merge them into a single sorted list.
  T* temp = new T[high-low+1];  // scratch array for merging
  int i=low, j=mid+1, loc=0;

  // while neither the left nor the right half is exhausted,
  // take the next smallest value into the temp array
  while (i<=mid && j<=high) {
    if (pts[i] < pts[j]) temp[loc++] = pts[i++];
    else temp[loc++] = pts[j++];
  }

  // copy the remaining values --- only one of
  // these loops will iterate
  for (; i<=mid; i++, loc++) temp[loc] = pts[i];
  for (; j<=high; j++, loc++) temp[loc] = pts[j];

  // copy back from the temp array
  for (loc=0, i=low; i<=high; loc++, i++) pts[i] = temp[loc];
  delete [] temp;
}
```

2. Find an efficient algorithm (along with a running time analysis) to find the *minimum* subsequence sum.

**Review Problems**

Here are a few review problems which have appeared on homeworks or tests in previous semesters. Practice writing solutions carefully and then compare to solutions provided on-line. If you can solve these problems and the problems we worked on in class then you are ready for the chapter quiz!

1. Show that \( \sum_{i=1}^{n} 2i^3 = O(n^4) \).

2. For each of the following, find \( f(n) \) such that \( t(n) = O(f(n)) \). Make \( f(n) \) as small and simple as possible, i.e. don't write \( t(n) = O(n^4) \) when \( t(n) = O(n^3) \). Justify your answers.
(a) \( t(n) = 13n^2 + 2^n \)

(b) \( t(n) = 5(n + 3\log n)(n\log n + 13)\log n + 13n^2 \)

(c) \( t(n) = \sum_{i=3}^{n} \sum_{j=i}^{n} i(n-j) \)

3. Exercise 2.6a from the text. Try to derive summations first. Note program fragment (6) is quite difficult.

4. Derive a summation to count, as a function of \( n \), the number of times `Hello` is output by *each* of the following code fragments. Obtain an accurate "\( O \)" estimate from the summation.

(a)

```c++
for (i=1; i<=n; i++)
  for (j=1; j<i; j++)
    for (k=j+1; k<=n; k++)
      cout << "Hello\n";
```

(b) For this part, assume \( n = 2^k \) and that `2^i` in the code denotes the power \( 2^i \) (not C++'s bitwise XOR).

```c++
for (i=0; i<k; i++)
  for (j=2^i+1; j<=n; j++)
    cout << "Hello\n";
```

5. Exercise 2.11 of the text.

6. Write an algorithm that takes an unsorted list (provided as an array) of \( n \) floating point values and returns the smallest difference between any two values in the list. For example, for the list

2.9, 3.5, 1.1, 6.1, 2.3, 1.8, 8.7, 3.0, 2.4

the algorithm should return 0.1, which is the difference between 3.0 and 2.9. Make your algorithm as efficient as you can and give the worst-case running time of your algorithm as a function of \( n \), briefly justifying your answer. Hint: you may change or reorganize the contents of the array.
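Returning to Exercise 2 above (the rewrite of fragment (b)): here is one possible sketch of the sliding-window idea the hint points at. The function name and the choice to return the sums in a vector are my own, not from the notes; the point is that each \(k\)-element sum differs from the previous one by a single element entering and a single element leaving the window.

```cpp
#include <vector>

// Sliding-window rewrite of fragment 1(b): compute each k-element sum
// (k = n/2) from the previous one in O(1), for O(n) total work.
std::vector<int> windowSums(const std::vector<int>& arr) {
    int n = static_cast<int>(arr.size());
    int k = n / 2;
    std::vector<int> sums;
    if (k == 0) return sums;
    int sum = 0;
    for (int j = 0; j < k; j++)        // first window: elements 0 .. k-1
        sum += arr[j];
    for (int i = 0; i + k <= n; i++) {
        sums.push_back(sum);           // sum of elements i .. i+k-1
        if (i + k < n)                 // slide: drop arr[i], add arr[i+k]
            sum += arr[i + k] - arr[i];
    }
    return sums;
}
```

The initial window costs \(O(k)\) and each later window costs \(O(1)\), so the total is \(O(n)\), versus \(O(n^2)\) for the original fragment.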
Segregated Dynamic Linking with ELF

by Vivek Das Mohapatra, August 10, 2018

Copyright © 2017, 2018 Collabora Ltd

Introduction

In this article I'm going to discuss a project I've been working on at my employer (Collabora Ltd) for one of our customers (Valve Software): Segregated Dynamic Linking for ELF binaries.

Contents

- Terminology
- What is Segregated Dynamic Linking?
- Why is Segregated Dynamic Linking?
- How do we achieve segregated linking?
  - Calling foreign functions from a DSO
  - How the linker finds DSOs
  - Introducing dlmopen()
- Putting the pieces together
  - Supplying a shim DSO
  - Loading and exposing the real symbols
  - Monkeypatching a symbol into a DSO
  - ELF DSO layout in memory
  - Performing a manual symbol relocation
- Lies, TODOs, and FIXMEs...
- Postscript: Putting it all into practice

## Terminology

### calling convention
The rules explaining how arguments and return value(s) should be passed between calling code and called function.

### DSO
Dynamic Shared Object: A program, a library or a loadable module used to make a working dynamically linked application.

### ELF
Executable and Linkable Format: A specific DSO format used by many unix-ish platforms.

### foreign function
A function invoked by one DSO that is defined in another.

### GOT
Global Offset Table: A table of Relocation Records (RRs), one for each external symbol.

### linker
Specifically, the run-time dynamic linker - the part of the system responsible for taking a set of DSOs and assembling them into a working, running program.

### PLT
Procedure Linkage Table.

### RR
Relocation Record.

### stack
Area of memory where arguments for [and return values from] functions are stored during function calls.

### symbol
A function or variable (a named thing) provided by or used by a DSO.

What is Segregated Dynamic Linking?
In normal dynamic linking there is one copy of every DSO mapped into memory, and each DSO can see the symbols of every DSO that it asked to see.

Example: If my program needs two DSOs (libAA and libBA) and both the program and libBA need a third DSO (libBB) then there will be one copy of libBB, and both the main DSO and libBA will see libBB's symbols, but libAA will not.

In Segregated Dynamic Linking, we still link in a DSO as usual but we hide some of the symbols (functions, variables, etc) that would normally be visible as a result: This allows us to limit the visibility of symbols that would otherwise cause conflicts.

Why is Segregated Dynamic Linking?

It would be perfectly reasonable, at this point, to ask why we don't 'just' update the offending library to use a compatible version of its dependency: In an ideal world that is exactly what we'd do - unfortunately that is not the world in which we live.

In this particular case the scratch that needs itching is the ability of a game in a runtime (to oversimplify: a fixed set of libraries guaranteed to be at specific versions) to work with a Mesa (3D graphics library) pulled from outside the runtime.

Why is this necessary? It falls out like this: The runtime is effectively a promise made to the game developers that certain versions of a given set of libraries will always be available, no matter which version of the host operating system the game finds itself on. But Mesa needs to talk to specific hardware - and hardware changes rapidly. So we can't drop a frozen-in-time version of Mesa into the runtime and use that: Even though the Mesa interface might not change, the old Mesa probably wouldn't be able to talk to shiny new hardware - which would mean your game simply wouldn't run on hardware made any significant amount of time after its release.

Ok, you might reasonably ask, why not 'just' pull in the Mesa from outside the runtime and be done with it?
It turns out this is where our troubles begin:

- Mesa (or rather its hardware drivers) depend on libstdc++
- The runtime provides a specified libstdc++ version
- Our game loads the runtime libstdc++ DSO
- Our game loads the Mesa DSOs
- The linker notes that it already has a libstdc++ DSO
- The linker uses this DSO to link the Mesa DSOs
- If the versions aren't compatible... we crash 😞

This is not, as you can imagine, a great sales pitch for your runtime: You can work around it by carefully controlling both the host operating system and the runtime - but that limits your portability.

How do we achieve segregated linking?

To answer that question we're going to need to pay attention to some low-level details of how function calls between DSOs work:

Calling foreign functions from a DSO

When code in a DSO wishes to call a foreign function it cannot do so directly: It has no way of knowing where in memory the linker has put the other DSO. This is fixed (as with all problems) by adding a layer of indirection (actually two, because it makes things tidier). Here's how it works:

Some definitions:

**PLT - Procedure Linkage Table**

An array of stub functions, one for each foreign function. The Procedure Linkage Table (PLT) resides in the Read-Only TEXT section of the DSO, along with the rest of the executable code, and is shared by all copies of the DSO in use. Every DSO has its own PLT.

**GOT - Global Offset Table**

An array of RRs, each giving the actual location of an item mentioned in the PLT. The GOT resides in the read-write DATA section of the DSO; there is a private copy for each copy of the DSO in use. Every DSO has its own GOT.
This is how the first call to a foreign function (whose address we do not yet know) looks:

- Calling code puts foreign function arguments on the stack
- Execution jumps to a fixed offset in the PLT (specific to this function)
- The PLT stub looks up the corresponding address in the GOT and jumps to it
- The fixup code pointed at by the GOT asks the linker for the real address
- The linker searches the calling DSO's dependencies for the symbol
- The fixup code writes the address into the GOT slot
- The fixup code jumps to the address in the GOT slot
- The code in the foreign DSO pulls the arguments off the stack
- The return value is pushed onto the stack
- Execution jumps back to the original DSO, just after the original call

Subsequent calls skip the one-time fixup stages (the middle four steps), as the GOT slot will already contain the real symbol's address.

[Figure: a more detailed diagram of the same call. The program TEXT (read-only) pushes funcX's arguments and jumps to funcX@PLT; the funcX stub in the PLT (read-only) reads the funcX RR slot in the GOT (read-write) and jumps to whatever address it holds. Initially every RR slot holds the address of fixup@PLT, which obtains the real address <0xfuncX> from the linker, writes it into the funcX RR slot, and jumps to it.]

Some things worth noting here (they'll be important later):

- Only the calling code in the DSO and the called code in the foreign DSO know anything about the arguments and return value (signature) of the called function. None of the intervening steps know or care about these things.
- The PLT is read-only, but the GOT is writable.
- The address stored in the GOT slot is the only thing we need to change to divert a function call. - If we scribble an address into the GOT slot before the fixup code is ever called we can completely short circuit the linker’s normal symbol search algorithms. How the linker finds DSOs The observant reader may have noticed that we have glibly skimmed over some details above: “The linker searches the DSOs that the calling DSO depends on” Let’s unpack some of those details: When the linker opens a DSO for the first time it scans it for items in the ‘Dynamic Section’. There are many types of entry here, but for now we’re interested in the DT_NEEDED entries. These entries tell the linker which DSOs are needed by the initial DSO. The other piece of infrastructure we need to know about is the ‘link map’ list. A link map list is a [doubly] linked list of currently loaded DSOs - when a DSO is linked its unresolved symbols are searched for in the link map list, but only DSOs mentioned explicitly in its DT_NEEDED entries, plus any DSOs which have been declared as globally available, are used for this search: Others are skipped over. 
A simplified diagram showing a 3-element link map list:

```
NULL <- [0x…] <-> [0x…] <-> [0x…] -> NULL
        /usr/.../libA.so.X  .../libB.so.X  .../libX.so.X
        libA ELF data       libB ELF data  libX ELF data
```

Here's the rough process:

- Open the target DSO
- Scan it for DT_NEEDED entries
- For each dependency:
  - If the DSO is already mapped, add its link map entry to the link map list for our target
  - Otherwise:
    - open the new DSO
    - map it into memory
    - add the new link map entry to the list for our target
    - recurse into this algorithm for the new DSO
- Add any declared-global DSOs to the target's link map list

The key will be controlling which DSOs appear in our link map: We can prevent a DSO from being loaded by pre-seeding a DSO with the same base name in the current link map.

NOTE: the DT_NEEDED entry is generally the bare DSO name - something like `libfoo.so.1` - and when checking the link map for a DSO it is this last part of the path that is compared. The exact path does not normally matter.

When the linker searches for a symbol for a given DSO, it searches only in the link map for the requesting DSO. (A given DSO can appear in multiple link maps; the same copy is re-used.)

Now we understand how the linker searches for symbols - we've learned some important things:

- We have the foundations of restricted symbol visibility

This looks pretty promising - we have all (or nearly all) the pieces of the puzzle. We might, in fact, have all the pieces if we're willing to get our hands very dirty indeed and hack up the link maps entirely by hand... But it turns out the glibc developers at Red Hat have already done some of the heavy lifting for us in the form of the `dlmopen()` call.

Introducing dlmopen()

First some background: The `dlopen()` C library call opens a DSO, in a similar way to the linker at program start-up time: It opens the target and its DT_NEEDED dependencies (in the same way as the linker does, described above) and adds them to the link map for the new DSO.
This is close to what we want, but not quite there for two main reasons: - `dlopen()` will re-use existing link map entries which match the target DSO’s DT_NEEDED, which might pull in the ‘wrong’ version of a DSO. - If a new DSO is later opened and flagged as global, and our DSO (or any of its DT_NEEDED entries) are also required by that new DSO, they will cease to be hidden and start being used by the linker (and we want to control DSO and symbol visibility). This is where dlmopen() comes in: It is like `dlopen()`, but puts the new DSO’s link map entries in a specific link map list (that you request) or creates a new list and puts the entries there. If the new DSO mapping is not added to the default link map list the linker won’t encounter it while resolving symbols for DSOs in the default namespace - this means no symbols from the new DSO will be exposed or used during normal linking. Likewise since the new DSO gets its own private link map list, it won’t (by default) have any of its symbols resolved from our main set of DSOs. Now that we have all\(^1\) the pieces of the puzzle, we can put together a solution to our problem. \(^1\)Spoiler: We’re going to need more pieces. Putting the pieces together Let's start by stating the exact problem we're going to try and solve: - We have a runtime in which we wish to start a program - The runtime contains some, but not all of the DSOs we need - The runtime appears to the program as a filesystem starting at / - The real filesystem is bound into the runtime, and appears as /host - In other words, /lib from the OS appears as /host/lib, and so on. - If a DSO comes from /host, we want to use only other DSOs from /host - Only the explicitly required /host DSO should be used by the linker - ie anything else pulled in from /host should remain invisible to the program and other runtime DSOs - The program should run completely unaltered: No rebuilds, no patching of the binary. 
Supplying a shim DSO

The first step is allowing the program to find a DSO that satisfies its DT_NEEDED entry. We can't simply add the /host tree to the linker's search path (that wouldn't allow /host vs runtime segregation): We can, however, provide a fake library with the same name as our target DSO and let the linker find that.

This shim library is made easier to generate by the fact that we don't (as you may recall from earlier) need to know the signature of each function: We're never going to call the stubroutines it contains so we only need them to have a compatible entry in the symbol table of our shim DSO.

Loading and exposing the real symbols

Now that we've generated our fake DSO, we need to have it find the real symbols and patch up the GOTs of any DSOs that need them. This requires a few steps:

- The list of symbols to export from the dlmopen() namespace is built into the library (it turns out this is the most reliable way to do this - and there's no reason not to other than code/binary size)
- The shim DSO harvests the DT_NEEDED entries recursively from the target DSO from /host and its dependencies, keeping track of which needs what
- To find the relevant DSOs, our shim must read and interpret the linker cache at /host/etc/ld.so.cache, as well as searching LD_LIBRARY_PATH, if set, and also searching the default locations at /lib and /usr/lib
- In addition to the path mapping we also have to be careful to ignore DSOs with the 'wrong' type - on x86* there are three common types: x86-64, i386 and x32. We only want to pick DSOs which are the same as the original DSO (the program)
- While it does so it takes care to remap all paths to have a /host prepended to them, including resolving any symlinks manually with the extra /host element where necessary
- Some DSOs should not be pulled from /host - notably the linker (ld.so) itself.
It also turns out that in order for the program to work reliably the libc cluster (the core C library, libpthread and so on) must also be from the same tree as one another (and ∴ not from /host).

It then dlmopen()s the target DSOs in reverse dependency order: That is, any DSOs with no dependencies are opened first, then the DSOs which only needed those first DSOs, and so on and so forth.

- The dlmopen() calls are made with the full /host/... path to the DSO in question, to make sure we pick up the /host version
- The linker, when checking for DT_NEEDED items in the link map, only checks the base name of the DSO (the final libfoo.X part) so our /host DSOs only get other /host DSOs used to satisfy their requirements.

Finally, once the full /host DSO set has been loaded, our shim DSO monkeypatches the runtime DSOs to make them use the right symbols (this is complicated enough that I'm going to break it out into its own section [see below]).

Monkeypatching a symbol into a DSO

ELF DSO layout in memory

This is where we need to dive into the details of how an ELF DSO is laid out in memory (as opposed to on disc) in even greater detail.

The key to all of this is the glibc library function dl_iterate_phdr(): it allows us to invoke a callback function of our choice on each DSO in the default namespace (it does not touch private dlmopen() namespaces). Our callback function will know the base address in memory of the DSO, the name of the DSO, and the start of the array of ELF Program Headers.

There are many types of program header (see /usr/include/elf.h) but luckily for us we're only interested in PT_DYNAMIC sections. These are, unsurprisingly, the part(s) of the ELF DSO related to dynamic linking. There could technically be more than one dynamic section: We're going to iterate over them all and treat them all the same so this doesn't matter much.

In each PT_DYNAMIC section we find an array of ElfW(Dyn) entries, each with a DT_...
type, finishing with an entry of type DT_NULL (see elf.h again). These dynamic entries are not guaranteed to be in any particular order (although the DT_*TAB entries will appear at the start), and some of them can only be interpreted usefully by referring to the contents of other DT_... entries. Here are the ones we turn out to be interested in: **DT_STRTAB** The string table. This points to an area (within the current PT_DYNAMIC section) that contains NULL terminated strings. Any DT_... item that needs to refer to a printable string will point to a location in this entry (pointer) **DT_SYMTAB** The symbol table. Contains some metadata about each symbol such as its visibility and type. Since we're working from a pre-compiled list we can assume most of these - we're mainly interested as a way of finding the name of the symbol in the DT_STRTAB (pointer) **DT_PLTREL** This gives the type of the relocation table - DT_REL or DT_RELA (integer) **DT_PLTRELSZ** The total size of the array of relocation records in the relocation table. Used for bounds-checking (integer) **DT_RELASZ/DT_RELSZ** The size of the DT_REL[A] relocation table. At most one may be present (and must match the table type). (integer) **DT_JMPREL** The address of the relocation records. We need DT_PLTREL & DT_PLTRELSZ to process this. (pointer) **DT_REL/DT_RELA** In addition to the relocation table pointed at by DT_JMPREL, there may also be additional relocation tables existing independently. They're typically not interesting as all function-related relocations should be in the DT_JMPREL table, but we process them anyway as this layout rule is 'just' a convention. 
(pointer, not seen in testing so far)

### Performing a manual symbol relocation

Having found the symtab, strtab and relocation array, we can process the entries there: Each relocation record contains the following information:

- The address (of the GOT entry, not the symbol). That is, the address of the address used by the PLT to call the function.
- The relocation type. There are many relocation types (about 40 for x86-64 alone). Fortunately to relocate basic function calls we only need to handle a few per architecture: R_X86_64_JUMP_SLOT & R_386_JMP_SLOT being the most important - and occasionally R_X86_64_GLOB_DAT, R_X86_64_64 and their R_386... equivalents. [ Every relocation type has its own rules for how to calculate the correct jump. ]
- The addend (an additional offset to the address). This occurs if the item being relocated is an offset from another named object (eg if a variable is defined as a fixed offset into an array or struct, and the compiler knows this). [ Only DT_RELA records have addends. ]
- The index (0-based) of this symbol in the symtab.

Finally, having gathered all this information, we can perform a relocation (remember that we have to do this for every DSO from the runtime as each one will have its own GOT):

- Check the name of the symbol (relocation record → symtab → strtab) against the hard-coded relocation list in our shim DSO. If it's not there we're leaving this symbol alone.
- Get the real address of the symbol from the dlmopen() private namespace with dlsym().
- Calculate the address of the GOT slot which we wish to overwrite. To start with, it will contain the address of the fixup code in the PLT.
- Write the real address into the GOT slot.
- Proceed to beer island.

We have monkey-patched one function symbol in one DSO! What's left is repeating this process for every symbol in our hard-coded export list, for every DSO from the runtime.
Strictly speaking we should only monkeypatch symbols in DSOs which require the real library via a DT_NEEDED entry: If we are a shim for libfoo.so.1 then we should only try to replace symbols in other DSOs that explicitly depend on libfoo.so.1. In practice it has not been necessary to be careful about this (at least so far).

It's worth noting that we're using a very stupid, brute-force approach to matching each symbol: We don't use any of the clever hashing that the linker uses to speed things up. This is a deliberate ploy to keep the number of moving parts down to a minimum.

Lies, TODOs, and FIXMEs...

That's it, we're done. Except... for the lies. I have, for the sake of simplicity(!), told some lies and left out some details along the way. Here they are:

- **Lie:** The **GOT** slot contains the fixup address to start with.
- **Lie:** The **GOT** slot is (always) writable.

These are sort of true. Mostly. Except that if the library is RELRO linked, the linker will preemptively fix up the addresses before any code is invoked by the **DSO** and then **mprotect**() the memory region containing the **GOT** to render it unwritable. The workaround is to read the **mprotect**()ed region data from /proc/self/maps, toggle the write bits on, do all our monkeypatching and then put them back the way they were.

- **TODO:** The symbol lookup has been simplified: I've left out all the details to do with symbol versioning. The initial goal of the project doesn't require symbol versioning support, so our shim **DSOs** currently don't handle it. At all.

- **Lie:** **dlopen()** will work in the private namespace

Currently **dlopen()** cannot be called from any code that resides in a private **dlmopen()** namespace. There are two reasons for this: The project-related one is that our target library (libGL) uses **dlopen()** to open its drivers... but it doesn't know it's been re-mapped to /host and must get all its dependencies and modules from there.
The workaround here is to monkeypatch all the **DSOs** inside the private namespace to divert **dlopen** to a wrapper that does all of the **dlmopen()** reverse-dependency-remap-to-host magic. Fortunately most of the pieces we need for this are ones we've already had to implement for monkeypatching **DSOs** from the runtime, with one glaring exception: **dl_iterate_phdr** doesn't deliver private namespace entries to us. However we're in luck - we can use **dlinfo()** with **RTLD_DI_LINKMAP** to get a link map entry instead, and this turns out to have almost all the information that **dl_iterate_phdr** gives us (and all the info that we care about). [The other problem is that **dlopen()** currently segfaults if called from inside a private **dlmopen** namespace - this is a glibc limitation]

- **Lie:** Monkeypatching only needs to happen at start-up

If the program ever **dlopen()**s a **DSO** after start up, and it has our target **DSO** in its **DT_NEEDED** entries, it will get the fake addresses from our shim **DSO**. The fix is to also wrap **dlopen()** in the runtime libraries, replacing it with a wrapper that calls the real **dlopen()** and then fixes up any new **DSOs** we haven't seen before.

- **Lie:** Monkeypatching is the only address-redirection we need to do

If any of the runtime **DSOs** use **dlsym()** to find a symbol patched up by our shim library, they'll find the fake function instead. We wrap **dlsym()** in the same way as **dlopen()** above to handle this.

- **FIXME:** **dlmopen()** and **RTLD_GLOBAL** don't work together

**dlopen()** supports **RTLD_GLOBAL**. **dlmopen()** does not. If the /host **DSOs** need **RTLD_GLOBAL** **dlopen()** calls to work then currently they... won't. This is another glibc limitation.

- **TODO:** We only need to worry about a few relocation types

There are more relocation types we may need to support for this project to be generically useful. Variables (which don't go through the **PLT**) should be supported.
Likewise TLS (Thread Local Storage) relocations will require more relocation type support. This is another item that isn't (I believe) particularly difficult, but it hasn't been necessary for the initial goals of our project, so onto the TODO list it goes.

- **Lie:** The DSO namespaces can be completely segregated.

It turns out that for boring implementation detail reasons, some DSOs must not only be the same version but the same instance in both the main and secondary namespaces. In particular the C library cluster cannot function reliably with more than one copy, particularly if threading is activated at any time. This is not possible in any released glibc version, but patches implementing an RTLD_SHARED dlopen flag have been tried (successfully) and sent upstream for review.

This, then, is what our solution ends up looking like. There you have it: Everything you never wanted to know about subverting the ELF dynamic linker.

Postscript: Putting it all into practice

To prove that all of this can actually be done the work-in-progress repo, which has been successfully used to proxy Mesa's libGL, can be found here: libcapsule.git
Building Secure Software

A Difficult But Critical Step in Protecting Your Business

Gary McGraw, Ph.D.

The Internet continues to change the role that software plays in the business world, fundamentally and radically. Software no longer simply supports back offices and home entertainment; instead, software has become the lifeblood of our businesses and is deeply entwined in our lives. The invisible hand of Internet software enables e-business, automates supply chains, and provides instant, worldwide access to information. At the same time, Internet software is moving into our cars, our televisions, our home security systems, and even our toasters. All of this software introduces security risk.

Software is the biggest problem in computer security today. A majority of organizations invest in security by buying and maintaining a firewall, but go on to let anybody access multiple Internet-enabled applications through the firewall. These applications are often remotely exploitable, rendering the firewall impotent (not to mention the fact that the firewall is often a piece of fallible software itself). Contrary to popular belief, cryptography alone cannot solve the software security problem. *Trust in Cyberspace* reports, for example, that 85% of CERT security advisories (through 1998) could not have been prevented with cryptography.

Internet-enabled applications present the largest category of security risk today. Real attackers compromise software. The ultimate answer to the computer security problem lies in making software behave. Current approaches, based on fixing things only after they have been exploited in fielded systems, address only symptoms, ignoring the cause of the problem. This paper discusses an approach to security that goes beyond firewalls and cryptography to treat software and software development as essential aspects of a mature risk management approach to security.
I present a unified view that emphasizes the understanding and management of both architecture and implementation risks.

**Software Security Woes**

**A Growing Problem**

Simply put, security holes in software are common. The December 2000 issue of the Industry Standard featured an article titled "Asleep at the Wheel" that emphasizes the magnitude of today's security problem. Using numbers derived from Bugtraq (a mailing list dedicated to reporting security vulnerabilities), author David Lake showed the number of new vulnerabilities reported monthly from the start of 1998 until the end of September 2000 (see Figure 1). According to this data, the number of software holes being reported is growing. These figures suggest that approximately 20 new vulnerabilities in software are made public each week. Note that "tried-and-true" software may not be as safe as one might think; many vulnerabilities that have been discovered in software existed for months, years, and even decades before discovery.

The consequences of security flaws vary. Consider that the goal of most malicious hackers is to take control of a networked computer. Attacks tend to be either remote or local. In a remote attack, a malicious attacker can break into a machine that is connected to the same network, usually through some flaw in software. If the software is available through a firewall, then the firewall will be powerless to protect the computer. In a local attack, a malicious user can gain additional privileges on a machine—usually administrative privileges. Once an attacker has a foothold on a target machine, it is difficult to keep him or her from getting administrative access. Operating systems and the privileged applications that are found on such machines constitute such a large and complex body of code that the presence of some as-yet-undiscovered security hole is always likely.
**Trends Affecting Software Security**

Most modern computing systems are susceptible to software security problems, so why is software security a bigger problem now than in the past? I argue that a small number of trends have a large influence on the growth and evolution of the problem.

**Networks Are Everywhere**

The growing connectivity of computers through the Internet has increased both the number of attack vectors and the ease with which an attack can be made. This puts software at greater risk. More and more computers, ranging from home PCs to systems that control critical infrastructures (e.g., the power grid), are being connected to enterprise networks and to the Internet. Furthermore, people, businesses, and governments are increasingly dependent upon network-enabled communication such as e-mail or Web pages provided by information systems. Unfortunately, as these systems are connected to the Internet, they become vulnerable to software-based attacks from distant sources. An attacker no longer needs physical access to a system to exploit vulnerable software. Because access through a network does not require human intervention, launching automated attacks is relatively easy. The ubiquity of networking means that there are more software systems to attack, more attacks, and greater risks from poor software security practice than in the past.

**Systems are Easily Extensible**

A second trend negatively affecting software security is the degree to which systems have become extensible. An extensible host accepts updates or extensions, sometimes referred to as mobile code, so that the functionality of the system can be evolved in an incremental fashion. For example, the plug-in architecture of Web browsers makes it easy to install viewer extensions for new document types as needed. Today's operating systems support extensibility through dynamically loadable device drivers and modules.
Today's applications, such as word processors, e-mail clients, spreadsheets, and Web browsers, support extensibility through scripting, controls, components, and applets. From an economic standpoint, extensible systems are attractive because they provide flexible interfaces that can be adapted through new components. In today's marketplace, it is crucial that software be deployed as rapidly as possible in order to gain market share. Yet the marketplace also demands that applications provide new features with each release. An extensible architecture makes it easy to satisfy both demands by allowing the base application code to be shipped early and feature extensions to be shipped later as needed. Unfortunately, the very nature of extensible systems makes it hard to prevent software vulnerabilities from slipping in as an unwanted extension. Advanced languages and platforms, including Sun Microsystems' Java and Microsoft's .NET Framework, are making extensibility commonplace.

**System Complexity is Rising**

A third trend impacting software security is the unbridled growth in the size and complexity of modern information systems, especially software systems. A desktop system running Windows NT and associated applications depends upon the proper functioning of the kernel as well as the applications to ensure that vulnerabilities cannot compromise the system. However, NT itself consists of at least 20 million lines of code, and end user applications are becoming equally, if not more, complex. When systems become this large, bugs cannot be avoided. Figure 2 shows how the complexity of Windows (measured in lines of code) has grown over the years.

Interestingly, these three general trends are also responsible for the alarming rise of malicious code [McGraw and Morrisett, 2000]. The complexity problem is exacerbated by the use of unsafe programming languages (e.g., C or C++) that do not protect against simple kinds of attacks, such as buffer overflows.
In theory, we could analyze and prove that a small program was free of problems, but this task is impossible for even the simplest desktop systems today, much less the enterprise-wide systems used by businesses or governments.

**Penetrate and Patch is Bad**

Many software vendors fail to understand that security is not an add-on feature. They continue to design and create products while paying little attention to security. They start to worry about security only after their product has been publicly (and often spectacularly) broken by someone. Then they rush out a patch instead of coming to the realization that designing security in from the start might be a better idea. My goal is to minimize the unfortunately pervasive "penetrate-and-patch" approach to security and thus prevent the problem of desperately having to come up with a fix to a problem that is being actively exploited by attackers. In simple economic terms, it is orders of magnitude cheaper and more effective to find and remove bugs in a software system before its release than trying to fix systems after release. Designing a system for security, carefully implementing the system, and testing the system extensively before release present a much better alternative.

The fact that the existing penetrate-and-patch framework is so poorly implemented is yet another reason why the approach needs to be changed. Researchers Bill Arbaugh, Bill Fithen, and John McHugh discuss a life-cycle model for system vulnerabilities that emphasizes how big the problem is. Data from their study shows that intrusions increase once a vulnerability is discovered, the rate continues to increase even though the vendor releases a patch, and exploits continue to occur even after the patch is issued – sometimes years after (see Figure 3). It takes a long time before most users and administrators upgrade to patched software versions.
Figure 3: Average Curve of Number of Intrusions for a Security Bug Over Time, as Reported by Arbaugh, Fithen and McHugh, 2000.

**Software Quality Management For Security**

There is no substitute for working software security as deeply into the software development process as possible, taking advantage of the engineering lessons software practitioners have learned over the years. Which particular software process is followed is not as important as the act of thinking about security as software is designed and built. Software engineering discipline and security standards such as the Common Criteria both provide many useful tools that good software security can leverage. The key to building secure software is treating software security as risk management and applying the tools in a manner that is consistent with the purpose of the software itself.

**Bricks, Walls, And Software Security Risk Analysis**

The aphorism "keep your friends close and your enemies closer" applies quite aptly to software security. Software security is really risk management and thus includes risk identification and assessment as a critical step. The key to an effective risk assessment is expert knowledge of security. Being able to recognize situations where common attacks can be applied is half the battle. Software security risks come in two main varieties: architectural problems and implementation errors. Most software security material focuses on implementation errors leading to problems such as buffer overflows, race conditions, randomness problems, and a handful of other common mistakes. These issues are important, but focusing solely on the implementation level will not solve the software security problem. Building secure software is like building a house. I liken correct use of security-critical system and library calls (such as `strncpy()` versus `strcpy()`) to using solid bricks as opposed to using bricks made of sawdust.
The kinds of bricks you use are important to the integrity of your house, but even more important (if you want to keep bad things out) is having a solid foundation, four walls and a roof in the design. The same thing goes for software: what system primitives you use and how you use them is important, but overall design properties often count for more. The following two sections are devoted to bricks and walls, respectively.

### Implementation Risks (Brick Problems)

Software security is often approached as an exercise in addressing a clear set of known problems found in software implementations. Though this is not a complete solution, implementation risks are an important class of problems that deserve attention. The problems highlighted in this section are the most common ways of exploiting software at the lowest levels. (Note that there are many other implementation-level software security problems; the seven outlined here are the most commonly encountered.)

**Seven Common Problems**

1. Buffer overflows

Buffer overflows have been causing serious security problems for decades. Buffer overflows accounted for more than 50% of all major security bugs resulting in CERT/CC advisories in 1999, and the data show that the problem is growing instead of shrinking. The root cause behind buffer overflow problems is that C is inherently unsafe, as is C++. There are no bounds checks on array and pointer references, and there are many unsafe string operations in the standard C library. For these reasons, it is imperative that C and C++ programmers writing security-critical code learn about the buffer overflow problem. The most dangerous buffer overflows result in stack-smashing attacks, which target a specific programming fault: careless use of data buffers allocated on the program's runtime stack (which contains local variables and function arguments). The results of a successful stack-smashing attack can be very serious. A creative attacker taking
advantage of a buffer overflow vulnerability through stack smashing can usually run arbitrary code. The idea is pretty straightforward: place some attack code somewhere (for example, code that invokes a shell) and overwrite the stack in such a way that control gets passed to the attack code.

`strcpy` is one of the most notorious of all dangerous library functions. The following code is a dangerous use of `strcpy` that can be leveraged into a stack-smashing exploit:

```c
#include <string.h>

int main(int argc, char **argv) {
    char program_name[256];
    /* No bounds check: an argv[0] longer than 255 bytes overruns
       program_name and smashes the stack. */
    strcpy(program_name, argv[0]);
    return 0;
}
```

ITS4 and Stackguard are both extremely useful in combating buffer overflows.

2. Race conditions

Race conditions are possible only in environments where there are multiple threads or processes occurring at once that may potentially interact (or some other form of asynchronous processing). The term "race condition" implies a race going on between the attacker and the assumptions coded into the program by a developer. In fact, the attacker must "race" to invalidate assumptions about the system that the programmer may have made in the interval between operations, say between the time a parameter is checked and when it is subsequently used. A successful attack involves a quick and dirty change to the situation in a way that has not been anticipated. Problems with file access control and assumptions about the condition of the filesystem have been studied extensively. Programmers who have experience with multithreaded programming have almost certainly encountered race conditions whether or not they know the term. Race conditions are an insidious problem, because a program that seems to work fine may still harbor them. They are very hard to detect, especially if you're not looking for them. They are often difficult to fix, even when you are aware of their existence.
In a world where multithreading, multiprocessing, and distributed computing are becoming more and more prevalent, race conditions will continue to become a bigger and bigger problem. ITS4 is an effective tool against classical race conditions.

3. Access control problems

Once users have successfully authenticated themselves to a system, the system needs to determine what resources each user should be able to access. There are many different access control models for answering that question. Some of the most complicated are used in distributed computing architectures and mobile code systems, such as CORBA and Java's EJB models. Often, access control systems are based on complex mathematical models that might be hard to use in practice. Misuse of complex access control systems is a common source of security problems in software.

4. Randomness problems

Random numbers are important in security for generating cryptographic keys and many other things. Many developers assume that C's `random()` and similar functions produce unpredictable results. Unfortunately, that's a flawed assumption. A call to `random()` is really a call to a traditional "pseudo-random" number generator (PRNG) that happens to be quite predictable. Even some developers who are aware of the potential problems convince themselves that an attack is too difficult to be practical. Attacks on random number generators may seem hard, but they are usually reasonably easy to carry out.

5. Misuse of cryptography

Cryptography is something most developers know a little about – usually just enough to be dangerous. There is one sweeping recommendation that applies to every use of cryptography: *Never "roll your own" cryptography!* The next most commonly encountered crypto mistakes include failing to apply cryptography when it's really called for and incorrect application of cryptography even when the need has been properly identified.
The usual result of misuse or nonuse of cryptography is that software applications end up vulnerable to a smorgasbord of network-based attacks.

6. Input validation mistakes

Humans have a tendency to make poor assumptions about whom and what they can trust. Even developers have this tendency. Trust isn't something that should be extended lightly. Sound security practice dictates the assumption that everything be untrusted by default, and trust should only be extended out of necessity. Much of the time, developers do not even realize when they are making decisions about trust. This leads to situations in which a developer writes code that seems correct but turns out to be wrong. There are plenty of tricky problems that can slip by programmers. One common problem that has been around for years but was first exploited only in 2000 is the "format string attack" in C. Never allow untrusted input to influence a format string.

7. Password problems

Almost everyone who has used a computer in the last 20 years knows what a password is. For better or worse, username/password pairs are the most popular method for authenticating users. The underlying premise of passwords is that the user and an authenticating agent (e.g., the user's machine) share a secret—the password. When it comes time to authenticate, the user types the password, and if it matches the one stored by the authenticating agent, the connection is approved. Like many security technologies, the idea is simple and elegant, but getting everything exactly right is much harder than it first appears. Two areas of particular vulnerability include password storage and user authentication with passwords.

**Auditing Code with ITS4**

Trying to understand and analyze an entire program implementation is more work than most people are willing to undertake. While a thorough review is possible, most people are willing to settle for a "good enough" source code audit focused on finding common problems.
With that in mind, a basic strategy for auditing source code is to: - **Identify all points in the source code where the program might take input from a user** (remote or local). Similarly, look for any places where the program might take input from another program or any other potentially untrusted source. - **Treat any internal API that gathers input as if it were a standard set of input calls.** - **Look for symptoms of problems.** Experience in spotting problems (both statically and dynamically) is important. Much of what to look for consists of function calls to standard libraries that are frequently misused. Use a tool to find such problems. - **Hand-analyze code to determine whether there is a vulnerability.** Doing this can be a challenge. (Sometimes it turns out to be better to rewrite any code that shows symptoms of being vulnerable, whether it is or not. This is because it is rare to be able to determine with absolute certainty that a vulnerability exists just from looking at the source code.) One of the worst things about architectural analysis is that there are no tools available to automate the process or to encode some of the necessary expertise. Fortunately, when auditing C and C++ code, there is some help in the area of implementation analysis—the tool ITS4. With languages other than C and C++, code review remains a manual process. However, code auditing tends to be easier in other languages (such as Java), because such languages tend not to allow the programmer to add security flaws in the implementation to the same extent that C and C++ do. ITS4 is a tool distributed by Cigital for statically scanning C and C++ source code for function calls that are known to be “bad.” It searches for functions that are commonly involved in implementation flaws. For example, it will look for uses of the gets() function, which is often misused in such a way as to enable buffer overflows. 
The goal of ITS4 is to focus the person performing an implementation analysis. Instead of having an analyst search through an entire program, ITS4 provides an analyst with a list of potential trouble spots. Something similar can be done with grep. However, with grep, you need to remember what to look for every single time. ITS4 encodes knowledge about what to look for. In its current form, ITS4 searches for almost 150 vulnerabilities. ITS4 also performs some basic analysis to try to rule out conditions that are obviously not problems. For example, though sprintf() is a frequently misused function, if the format string is constant and contains no "%s", then it probably isn't worth looking at. ITS4 knows this and thus discounts such calls.

By default, ITS4 ranks its findings by risk: No Risk, Low Risk, Some Risk, Risky, Very Risky, Urgent. Also by default, the tool doesn't report things of low risk, because such items aren't real problems often enough to be worth looking into. Plus, ITS4 tends to give a lot of output; more than most analysts are willing to wade through!

ITS4 points the analyst to a potentially vulnerable function. It's the responsibility of the auditor to determine whether or not that function is used properly. It would be nice if ITS4 could automate the semantic analysis of source code that a programmer must do. Such tools do exist in the research lab. Here are some initial conclusions based on experiences using ITS4:

- ITS4 still requires a significant level of expert knowledge. While ITS4 encodes knowledge on vulnerabilities (such that they no longer must be kept in the analyst's head), an expert still does a much better job than a novice at manually performing the static analysis necessary to determine whether a real exploit is possible.
- ITS4 eliminates only one-quarter to one-third of the time it takes to perform a source code analysis, because the manual analysis part is so time consuming.
Code inspection takes a long time, even for an expert.

- ITS4 helps significantly with fighting the "get done, go home" effect, because it prioritizes one instance of a problematic function call over another.
- ITS4 has been used to find security problems in real applications.

### Architecture Risks (Wall Problems)

Architectural analysis is more important to building secure software than implementation analysis. It is also much harder and much less understood.

This brief section introduces issues that primarily apply at design time, including use of international standards (such as the Common Criteria) and following general principles for developing secure software systems. Proper software security at an architectural level requires careful integration into a mature software engineering process.

**Common Criteria**

Some standardized methodologies do exist for building secure systems. Over the last handful of years, several governments around the world, including the US government, in concert with major users' groups, have been working on a standardized system for design and evaluation of security-critical systems. They have produced a system known as the Common Criteria. Theoretically, the Common Criteria can be applied just as easily to software systems as to any other computing system. The compelling idea behind the Common Criteria is to create a security assurance system that can be systematically applied to diverse security-critical systems.

An example can help clarify how the Common Criteria work. Suppose that Wall Street gets together and decides to produce a Protection Profile for the firewalls used to protect the machines holding customers' financial information. Wall Street produces the Protection Profile, and the Protection Profile gets evaluated according to the Common Evaluation Methodology to make sure that it is consistent and complete. Once this effort is complete, the Protection Profile is published widely.
A vendor produces its version of a firewall that is designed to meet Wall Street's Protection Profile. Once the product is ready to go, it becomes a Target of Evaluation and is sent to an accredited lab for evaluation. The Protection Profile defines the Security Target for which the Target of Evaluation was designed. The lab applies the Common Evaluation Methodology using the Common Criteria to determine if the Target of Evaluation meets the Security Target. If the Target of Evaluation meets the Security Target, every bit of the lab testing documentation is sent to the National Voluntary Laboratory Accreditation Program (NVLAP) for validation. Depending on the relationship between NVLAP and the lab, this could be a rubber stamp or it could involve a careful audit of the evaluation performed by the lab. If the Target of Evaluation is validated by NVLAP, then the product gets a certificate and is listed alongside other evaluated products. It is then up to the individual investment houses on Wall Street to decide which brands of firewall they want to purchase from the list of certified Targets of Evaluation that meet the Security Target defined by their chosen Protection Profile.

Though the Common Criteria is certainly a good idea whose time has come, security evaluation is unfortunately not as simple as applying a standard Protection Profile to a given Target of Evaluation. The problem with the Common Criteria is evident right in the name. That is, "common" is often not good enough when it comes to security. One critical concern is that software quality management decisions, including security decisions, are sensitive to their context. There is no such thing as "secure" against all levels of threat, yet the Common Criteria set out to create a Target of Evaluation based on a standardized Protection Profile. The very real danger is that the Protection Profile will amount to a least-common-denominator approach to security.
All standards appear to suffer from a similar least-common-denominator problem. Standards focus on the "what" while underspecifying the "how." The problem is that the "how" tends to require deep expertise, which is a rare commodity. Standards tend to provide a rather low bar (one that some vendors have trouble hurdling nonetheless). An important question to ponder when thinking about standards is how much real security will be demanded of the system. All in all, the move toward the Common Criteria is probably a good thing. Any approach that can be used to separate the wheat from the chaff in today's over-promising and under-delivering security world is likely to be useful. Just don't count on the Common Criteria to manage all software security risks. Often, much more expertise and insight are required than has been captured in commonly adopted Protection Profiles. Risk management is about more than hurdling the lowest bar.

**Checklists and Guidelines**

The biggest open research issue in software security is that there is currently no good standard language of discourse for software design. Lacking the ability to specify an application formally, tools and technologies for automated analysis of software security at the architectural level lag significantly behind implementation tools such as ITS4. Until the research community makes more progress on this issue, architectural risk analysis will remain a high-expertise practice. One way of capturing software security design advice is to create general guidelines. John Viega and I present a list of ten such guidelines in *Building Secure Software*. I include the guidelines here without commentary. Note that the goal of these principles is to identify and highlight the most important objectives software security practitioners should keep in mind when designing and building a secure system.
Following these principles should help developers and architects who are not necessarily versed in security to avoid a number of common security problems.

- Secure the weakest link.
- Practice defense in depth.
- Fail securely.
- Follow the principle of least privilege.
- Compartmentalize.
- Keep it simple.
- Promote privacy.
- Remember that hiding secrets is hard.
- Be reluctant to trust.
- Use your community resources.

Some caveats are in order. No list of principles like the ones above is ever perfect. There is no guarantee that following these principles will result in secure software. Not only do the principles present an incomplete picture, but they also sometimes conflict with each other. As with any complex set of principles, there are often subtle tradeoffs involved, and application of these 10 principles must be sensitive to context. A mature software risk management approach provides the sort of data required to apply the principles intelligently.

**A Call To Arms**

Computer security is a vast topic that is becoming more important because the world is becoming highly interconnected, with networks being used to carry out critical transactions. The environment in which machines must survive has changed radically since the popularization of the Internet. The root of most security problems is software that fails in unexpected ways. Though software security as a field has much maturing to do, it has much to offer to those practitioners interested in striking at the heart of security problems. This paper merely scratches the surface of the problem. Good software security practices can help ensure that software behaves properly. Safety-critical and high-assurance system designers have always taken great pains to analyze and track software behavior. Security-critical system designers must follow suit. We can avoid the band-aid–like penetrate-and-patch approach to security only by considering security as a crucial system property.
This requires integrating software security into the software engineering process. Software practitioners are only now becoming aware of software security as an issue. Plenty of work remains to be done. The most pressing current need involves understanding architectural-level risks and flaws. Today’s apprenticeship-based approach does not scale well, especially considering the growth of the software security problem.

Gary McGraw, Ph.D., is the Chief Technology Officer at Cigital, Inc. He can be reached at gem@cigital.com. This paper was drawn from *Building Secure Software* (Addison-Wesley, 2001), which was co-authored by McGraw and John Viega. This paper appeared as an article titled “On Bricks and Walls: Why Building Secure Software is Hard” in the May 2002 issue (Vol. 15, No. 5) of the Cutter IT Journal (www.cutter.com). It is reprinted here with their permission. Dr. McGraw wishes to thank Greg Morrisett, Fred Schneider, and other members of the Infosec Research Council’s study group on malicious code for their insight.

Copyright © 2002 by Cigital, Inc. All rights reserved.
15-213 “The course that gives CMU its Zip!”

Web Services
April 19, 2005
Presented by Kun Gao
class26.ppt

Topics
- HTTP
- Serving static content
- Serving dynamic content
- Proxies

History of the Web

1945:
- Vannevar Bush describes the idea of a distributed hypertext system.
- A “Memex” that mimics the “web of trails” in our minds.

1989:
- Tim Berners-Lee (CERN) writes an internal proposal to develop a distributed hypertext system.
- Connects “a web of notes with links.”
- Intended to help CERN physicists in large projects share and manage information.

1990:
- Tim Berners-Lee writes a graphical browser for NeXT machines, and the first web server, ‘httpd’.

Web History (cont)

1992:
- NCSA server released.
- 26 WWW servers worldwide.

1993:
- Marc Andreessen releases the first version of the NCSA Mosaic browser (killer app of the ’90s).
- Mosaic versions released for Windows, Mac, and Unix.
- Web (port 80) traffic at 1% of NSFNET backbone traffic.
- Over 200 WWW servers worldwide.

1994:
- Andreessen and colleagues leave NCSA to form “Mosaic Communications Corp” (predecessor to Netscape).

Internet Hosts
- Internet Domain Survey Host Count
- Source: Internet Software Consortium (www.isc.org)

Internet Addresses
- IBM: 9.0.0.0 - 9.255.255.255
- AT&T: 12.0.0.0 - 12.255.255.255
- Xerox: 13.0.0.0 - 13.255.255.255
- Hewlett-Packard: 15.0.0.0 - 15.255.255.255
- Digital Equipment Corporation: 16.0.0.0 - 16.255.255.255
- Apple Computer: 17.0.0.0 - 17.255.255.255
- Massachusetts Institute of Technology: 18.0.0.0 - 18.255.255.255
- Ford Motor Company: 19.0.0.0 - 19.255.255.255

Where is this traffic going?
1. Yahoo.com
2. MSN.com
3. Google.com
4. Passport.net
5. eBay.com
...
2543. www.cmu.edu (23% to www.cs.cmu.edu)

Web Transaction

Clients and servers communicate using the HyperText Transfer Protocol (HTTP):
- Client and server establish a TCP connection
- Client requests content
- Server responds with requested content
- Client and server close the connection (usually)

Current version is HTTP/1.1 (RFC 2616, June 1999).
HTTP Protocol

Example exchange:

```
GET / HTTP/1.1
host: www.aol.com

HTTP/1.0 200 OK
MIME-Version: 1.0
Date: Mon, 08 Jan 2001 04:59:42 GMT
Server: NaviServer/2.0 AOLserver/2.3.3
Content-Type: text/html
Content-Length: 42092

<html>
...
</html>
Connection closed by foreign host.
unix>
```

Annotations:
- Client: request line
- Client: required HTTP/1.1 HOST header
- Client: empty line terminates headers
- Server: response line, followed by five response headers
- Server: expect HTML in the response body; expect 42,092 bytes in the response body
- Server: empty line ("\r\n") terminates headers
- Server: first and last HTML lines in response body (766 lines of HTML not shown)
- Server: closes connection
- Client: closes connection and terminates

Web Content

Web servers return *content* to clients:
- *content*: a sequence of bytes with an associated MIME (Multipurpose Internet Mail Extensions) type

Example MIME types:
- `text/html` HTML document
- `text/plain` Unformatted text
- `application/postscript` Postscript document
- `image/gif` Binary image encoded in GIF format
- `image/jpeg` Binary image encoded in JPEG format

Static and Dynamic Content

The content returned in HTTP responses can be either static or dynamic.
- Static content: content stored in files and retrieved in response to an HTTP request
  - Examples: HTML files, images, audio clips.
- Dynamic content: content produced on-the-fly in response to an HTTP request
  - Example: content produced by a program executed by the server on behalf of the client.

Bottom line: all Web content is associated with a file that is managed by the server.

URLs

Each file managed by a server has a unique name called a URL (Uniform Resource Locator).

URLs for static content:
- http://www.cs.cmu.edu
- Identifies a file called index.html, managed by a Web server at www.cs.cmu.edu that is listening on port 80.
URLs for dynamic content:
- Identifies an executable file called adder, managed by a Web server at www.cs.cmu.edu that is listening on port 8000, that should be called with two argument strings: 15000 and 213.

How Clients and Servers Use URLs

Example URL: `http://www.aol.com:80/index.html`

Clients use the *prefix* (`http://www.aol.com:80`) to infer:
- What kind of server to contact (Web server)
- Where the server is (www.aol.com)
- What port it is listening on (80)

Servers use the *suffix* (`/index.html`) to:
- Determine if the request is for static or dynamic content.
  - No hard and fast rules for this.
  - Convention: executables reside in the `cgi-bin` directory
- Find the file on the file system.
  - The initial “/” in the suffix denotes the home directory for requested content.
  - The minimal suffix is “/”, which all servers expand to some default home page (e.g., `index.html`).

Anatomy of an HTTP Transaction

```
unix> telnet www.aol.com 80
Trying 205.188.146.23...
Connected to aol.com.
Escape character is '^]'.
GET / HTTP/1.1
host: www.aol.com

HTTP/1.0 200 OK
MIME-Version: 1.0
Date: Mon, 08 Jan 2001 04:59:42 GMT
Server: NaviServer/2.0 AOLserver/2.3.3
Content-Type: text/html
Content-Length: 42092

<html>
...
</html>
Connection closed by foreign host.
unix>
```

HTTP Transaction

Clients and servers communicate using the HyperText Transfer Protocol (HTTP):
- Client and server establish a TCP connection
- Client requests content
- Server responds with requested content
- Client and server close the connection (usually)

Current version is HTTP/1.1 (RFC 2616, June 1999).

HTTP Requests

An HTTP request is a request line, followed by zero or more request headers.

Request line: `<method> <uri> <version>`
- `<version>` is the HTTP version of the request (HTTP/1.0 or HTTP/1.1)
- `<uri>` is typically the URL for proxies, the URL suffix for servers.
- A URL is a type of URI (Uniform Resource Identifier) - See http://www.ietf.org/rfc/rfc2396.txt - <method> is either GET, POST, OPTIONS, HEAD, PUT, DELETE, or TRACE. HTTP Request Line HTTP methods: - **GET**: Retrieve static or dynamic content - Arguments for dynamic content are in URI - Workhorse method (99% of requests) - **POST**: Retrieve dynamic content - Arguments for dynamic content are in the request body - **OPTIONS**: Get server or file attributes - **HEAD**: Like GET but no data in response body - **PUT**: Write a file to the server! - **DELETE**: Delete a file on the server! - **TRACE**: Echo request in response body - Useful for debugging. HTTP Request Headers Request headers: `<header name>`: `<header data>` - Provide additional information to the server. Major differences between HTTP/1.1 and HTTP/1.0 - HTTP/1.0 uses a new connection for each transaction. - HTTP/1.1 also supports *persistent connections* - multiple transactions over the same connection - `Connection: Keep-Alive` - HTTP/1.1 requires `HOST` header - `Host: kittyhawk.cmcl.cs.cmu.edu` - HTTP/1.1 adds additional support for caching **GET Request to Apache Server From Browser** URI is just the suffix, not the entire URL ``` GET /test.html HTTP/1.1 Accept: */* Accept-Language: en-us Accept-Encoding: gzip, deflate User-Agent: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98) Host: euro.ecom.cmu.edu Connection: Keep-Alive CRLF (\r\n) ``` HTTP Responses HTTP response is a response line followed by zero or more response headers. Response line: <version> <status code> <status msg> - <version> is HTTP version of the response. - <status code> is numeric status. - <status msg> is corresponding English text. - 200 OK Request was handled without error - 403 Forbidden Server lacks permission to access file - 404 Not found Server couldn’t find the file. Response headers: <header name>: <header data> - Provide additional information about response - Content-Type: MIME type of content in response body. 
- Content-Length: Length of content in response body. GET Response From Apache Server HTTP/1.1 200 OK Date: Thu, 22 Jul 1999 04:02:15 GMT Server: Apache/1.3.3 Ben-SSL/1.28 (Unix) Last-Modified: Thu, 22 Jul 1999 03:33:21 GMT ETag: "48bb2-4f-37969101" Accept-Ranges: bytes Content-Length: 79 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html <html> <head><title>Test page</title></head> <body> <h1>Test page</h1> </body> </html> Web Proxy - Interposed between Client & Server - Usually serves as Firewall Sample Proxy - Produces log of communication between client & server - Alter HTTP contents Servicing Web Page Request Some Tests Current Teaching: - Bryant's teaching - 15-213 Introduction to Computer Systems (Fall '04). - Nonexistent file - Nonexistent host Fun Downloads - Google - CMU - Yahoo - NFL Back to Randy Bryant's home page - URI: /~bryant/test.html - Host: www-2.cs.cmu.edu Client → Proxy → Server GET http://www-2.cs.cmu.edu/~bryant/test.html HTTP/1.1 Host: www-2.cs.cmu.edu User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.3) Gecko/20040910 Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Proxy-Connection: keep-alive Server → Proxy → Client #1 Chunked Transfer Encoding - Alternate way of specifying content length - Each “chunk” prefixed with chunk length - See http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html HTTP/1.1 200 OK Date: Mon, 29 Nov 2004 01:27:15 GMT Server: Apache/1.3.27 (Unix) mod_ssl/2.8.12 OpenSSL/0.9.6 mod_pubcookie/a5/1.76-009 Transfer-Encoding: chunked Content-Type: text/html First Chunk: 0x2ec = 748 bytes 2ec\r\n <head><title>Some Tests</title></head> <h1>Some Tests</h1> <dl> <dt><strong>Current Teaching:</strong> <ul> <li><a href="teaching.html">Bryant’s teaching</a></li> <li><a 
href="/afs/cs.cmu.edu/academic/class/15213-f04/www/">15-213</a> Introduction to Computer Systems (Fall ’04).</li> <li><a href="http://www.cs.cmu.edu/nothing.html">Nonexistent file</a></li> <li><a href="http://nowhere.cmu.edu/nothing.html">Nonexistent host</a></li> </ul> <dt><strong>Fun Downloads</strong> <ul> <li><a href="http://www.google.com">Google</a></li> <li><a href="http://www.cmu.edu">CMU</a></li> <li><a href="http://www.yahoo.com">Yahoo</a></li> <li><a href="http://www.nfl.com">NFL</a></li> </ul> </dt> </dl> <hr> Back to <a href="index.html">Randy Bryant’s home page</a>

Second Chunk: 0 bytes (indicates last chunk)

Serving Dynamic Content

The client sends a request to the server. If the request URI contains the string "/cgi-bin", then the server assumes that the request is for dynamic content.

GET /cgi-bin/env.pl HTTP/1.1

The server creates a child process and runs the program identified by the URI in that process. The child runs and generates the dynamic content. The server captures the content of the child and forwards it without modification to the client.

Issues in Serving Dynamic Content
- How does the client pass program arguments to the server?
- How does the server pass these arguments to the child?
- How does the server pass other info relevant to the request to the child?
- How does the server capture the content produced by the child?

These issues are addressed by the Common Gateway Interface (CGI) specification. Because the children are written according to the CGI spec, they are often called *CGI programs*. Because many CGI programs are written in Perl, they are often called *CGI scripts*. However, CGI really defines a simple standard for transferring information between the client (browser), the server, and the child process.

add.com: THE Internet addition portal!

Ever need to add two numbers together and you just can’t find your calculator? Try Dr.
Dave’s addition service at “add.com: THE Internet addition portal!”
- Takes as input the two numbers you want to add together.
- Returns their sum in a tasteful personalized message.
- After the IPO we’ll expand to multiplication!

The add.com Experience

Output page:

Welcome to add.com: THE Internet addition portal.
The answer is: 1 + 5 = 6
Thanks for visiting!

Serving Dynamic Content With GET

**Question:** How does the client pass arguments to the server?
**Answer:** The arguments are appended to the URI.

Can be encoded directly in a URL typed to a browser or a URL in an HTML link:
- `http://add.com/cgi-bin/adder?1&2`
- `adder` is the CGI program on the server that will do the addition.
- The argument list starts with “?”
- Arguments are separated by “&”
- Spaces are represented by “+” or “%20”

Can also be generated by an HTML form:
```
<form method=get action="http://add.com/cgi-bin/postadder">
```

Serving Dynamic Content With GET

URL:
- http://add.com/cgi-bin/adder?1&2

Result displayed on browser:

Welcome to add.com: THE Internet addition portal.
The answer is: 1 + 2 = 3
Thanks for visiting!

Serving Dynamic Content With GET

**Question:** How does the server pass these arguments to the child?
**Answer:** In the environment variable QUERY_STRING
- A single string containing everything after the "?"
- For add.com: QUERY_STRING = “1&2”

```c
/* child code that accesses the argument list */
if ((buf = getenv("QUERY_STRING")) == NULL) {
    exit(1);
}
/* extract arg1 and arg2 from buf and convert */
...
n1 = atoi(arg1);
n2 = atoi(arg2);
```

Serving Dynamic Content With GET

**Question:** How does the server pass other info relevant to the request to the child?
**Answer:** In a collection of environment variables defined by the CGI spec.
Some CGI Environment Variables

General:
- SERVER_SOFTWARE
- SERVER_NAME
- GATEWAY_INTERFACE (CGI version)

Request-specific:
- SERVER_PORT
- REQUEST_METHOD (GET, POST, etc.)
- QUERY_STRING (contains GET args)
- REMOTE_HOST (domain name of client)
- REMOTE_ADDR (IP address of client)
- CONTENT_TYPE (for POST, type of data in message body, e.g., text/html)
- CONTENT_LENGTH (length in bytes)

Some CGI Environment Variables

In addition, the value of each header of type *type* received from the client is placed in the environment variable HTTP_*type*
- Examples:
  - HTTP_ACCEPT
  - HTTP_HOST
  - HTTP_USER_AGENT
- (any “-” is changed to “_”)

Serving Dynamic Content With GET

**Question:** How does the server capture the content produced by the child?
**Answer:** The child generates its output on stdout. The server uses `dup2` to redirect stdout to its connected socket.

Notice that only the child knows the type and size of the content. Thus the child (not the server) must generate the corresponding headers.

```c
/* child generates the result string */
sprintf(content, "Welcome to add.com: THE Internet addition portal\n<p>The answer is: %d + %d = %d\n<p>Thanks for visiting!\r\n",
        n1, n2, n1 + n2);

/* child generates the headers and dynamic content */
printf("Content-length: %d\r\n", strlen(content));
printf("Content-type: text/html\r\n");
printf("\r\n");
printf("%s", content);
```

Serving Dynamic Content With GET

```
bass> ./tiny 8000
GET /cgi-bin/adder?1&2 HTTP/1.1
Host: bass.cmcl.cs.cmu.edu:8000
<CRLF>

kittyhawk> telnet bass 8000
Trying 128.2.222.85...
Connected to BASS.CMCL.CS.CMU.EDU.
Escape character is '^]'.
GET /cgi-bin/adder?1&2 HTTP/1.1
Host: bass.cmcl.cs.cmu.edu:8000
<CRLF>
HTTP/1.1 200 OK
Server: Tiny Web Server
Content-length: 102
Content-type: text/html
<CRLF>
Welcome to add.com: THE Internet addition portal.
<p>The answer is: 1 + 2 = 3
<p>Thanks for visiting!
Connection closed by foreign host.
kittyhawk>
```

Proxies

A *proxy* is an intermediary between a client and an *origin server*.
- To the client, the proxy acts like a server. - To the server, the proxy acts like a client. Why Proxies? Can perform useful functions as requests and responses pass by - Examples: Caching, logging, anonymization Diagram: - Client A requests `foo.html` - Proxy cache requests `foo.html` - Proxy cache returns `foo.html` - Client B requests `foo.html` - Proxy cache returns `foo.html` - Origin Server requests `foo.html` - Origin Server returns `foo.html` Fast inexpensive local network Slower more expensive global network Proxy Example: Interface Simplification - Allegheny County real estate information Proxy Example: Interface Simplification Search Results Simplified Version - Run special proxy on home machine - http://bryant.dsl.telerama.com/name.html Simplified Version: On Cellphone - Request - Response Generating CGI Request Form Portion of Web Page: ```html <!-- Lookup Pittsburgh Real Estate - Microsoft Internet Expl... -- <Form METHOD=GET ACTION="http://bryant.dsl.telerama.com/cgi-bin/name.pl"> <p>Last First</p> <Input NAME="name" TYPE=text SIZE="25"> <Input TYPE=submit> </Form> http://bryant.dsl.telerama.com/cgi-bin/name.pl?name=lemieux+mario ``` For More Information Study the Tiny Web server described in your text - Tiny is a sequential Web server. - Serves static and dynamic content to real browsers. - text files, HTML files, GIF and JPEG images. - 220 lines of commented C code. - Also comes with an implementation of the CGI script for the add.com addition portal. Summary #1 Clients and servers communicate using the HyperText Transfer Protocol (HTTP) - HTTP Request - HTTP Response Web client (browser) ↔ Web server HTTP request HTTP response (content) Summary #2 The content returned in HTTP responses can be either static or dynamic. 
- Static content: - Specified in the URI, server fetches file and sends content to client - Dynamic content: Content produced on-the-fly in response to an HTTP request - Server forks client, passes arguments using CGI - Client generates content A **proxy** is an intermediary between a client and an **origin server**. - To the client, the proxy acts like a server. - To the server, the proxy acts like a client.
COS 226 Midterm Review
Spring 2015
Ananda Gunawardena (guna)
guna@cs.princeton.edu
guna@princeton.edu

Time and location:
- The midterm is during lecture on Wednesday, March 11, from 11:00 to 12:20 pm.
- The exam will start and end promptly, so please do arrive on time.
- The midterm room is either McCosh 10 or McDonnell A02, depending on your precept.
  - Friday Precepts: McCosh 10.
- Failure to go to the right room can result in a serious deduction on the exam.

There will be no makeup exams except under extraordinary circumstances, which must be accompanied by the recommendation of a Dean.

Rules
- Closed book, closed note.
- You may bring one 8.5-by-11 sheet (one side) with notes in your own handwriting to the exam.
- No electronic devices (including calculators, laptops, and cell phones).

Materials covered
- Algorithms, 4th edition, Sections 1.3–1.5, Chapter 2, and Chapter 3.
- Lectures 1–10.
- Programming assignments 1–4.

Concepts (so far) in a nutshell

List of algorithms and data structures:
- Analysis of algorithms
- Stacks and queues
- Elementary sorts (insertion sort, selection sort)
- Binary search and sequential search
- Disjoint sets (quick find, quick union, weighted quick-union)
- Merge sort
- Quicksort (2-way)
- Heapsort
- Priority queues
- BSTs
- Balanced search trees (2-3 trees, left-leaning red-black BSTs)
- Hash tables (hashing, linear probing, lazy deletion)
- Direct access files

For each topic:
- Recall as much as possible about each of the above topics
- Write down up to 5 important things about each one

Analysis of Algorithms Question

True or False:
1. Tilde notation includes the coefficient of the highest order term.
2. Tilde notation provides both an upper bound and a lower bound on the growth of a function.
3.
Big O notation suppresses lower order terms, so it does not necessarily accurately describe the behavior of a function for small values of \( N \). Count operations

```
int count = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (a[i] <= a[j]) count = count + 1;
```

Suppose that it takes 1 second to execute this code fragment when \( N = 1000 \). Using tilde notation, formulate a hypothesis for the running time (in seconds) of the code fragment as a function of \( N \). Analysis of Algorithms - Estimate the performance of an algorithm using - \( \sim \), \( O \), \( \Omega \), \( \Theta \): order of growth - comparisons, array accesses, exchanges, memory requirements - best, worst, average - Performance measured based on some specific inputs Amortized analysis - Measure of average performance over a series of operations - Some good, few bad Analysis of Algorithms - More formally... - tilde notation - Techniques: - count operations - Operations \( \rightarrow \) reads/writes, compares - Derive mathematically - exploit the properties of the algorithm - solve a recurrence formula to reach a closed form solution. 
- Obtain upper/lower bounds Count operations Example 1 ```
for (int i = 1; i <= N; i++) {
    for (int j = i; j > 0; j--) {
        if (genomes[j-1].length() > genomes[j].length())
            swap(genomes[j-1], genomes[j]);
        else break;
    }
}
``` <table> <thead> <tr> <th>compares</th> <th>Array accesses</th> <th>assignments</th> <th>External method calls</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> Example 2 ```
int i = 1;
while (i <= N) {
    String current = genomes[i];
    int j = i;
    while (j > 0) {
        if (current.compareTo(genomes[j]) < 0) break;
        j--;
    }
    i++;
}
``` Count operations Important when counting - Do not assume two nested loops always give you $n^2$ - Always read the code to see what it does - When doubling or halving the loop control variable, it can lead to log N performance - But analyze carefully - Sometimes the sum of operations can be approximated by an integral - $\sum_{i=1}^{n} f(i) \sim \int_{1}^{n} f(x)\,dx$ useful formulas ```
\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \qquad \sum_{i=1}^{n} i^3 = \left( \frac{n(n+1)}{2} \right)^2
``` Runtime complexity This is a method of describing the behavior of an algorithm using runtime observations. The runtime of an algorithm depends on many factors including language, compiler, input size, memory, optimizations etc. ```
int N = Integer.parseInt(args[0]);
for (int i = 0; i < N; i++) {
    In inFile = new In("genomicFile" + i + ".txt");
    genomes[i] = inFile.readString();
}
``` <table> <thead> <tr> <th>$N$</th> <th>Time (s)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>0.15</td> </tr> <tr> <td>2</td> <td>0.14</td> </tr> <tr> <td>4</td> <td>0.29</td> </tr> <tr> <td>8</td> <td>0.41</td> </tr> <tr> <td>16</td> <td>0.65</td> </tr> <tr> <td>32</td> <td>1.66</td> </tr> <tr> <td>64</td> <td>3.38</td> </tr> </tbody> </table> The following runtimes were observed from an algorithm that reads files of strings and sorts them using insertion sort. The runtime analysis seems to suggest the algorithm is linear. Is this correct? 
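One way to answer is to estimate the exponent b in a power-law hypothesis T(N) ~ a·N^b from the doubling ratios in the table, since b ≈ log2(T(2N)/T(N)). The sketch below applies this to the observed timings (the class and method names are mine, not from the review):

```java
// Doubling-hypothesis sketch: estimate the exponent b in T(N) ~ a*N^b
// from consecutive timings where N doubles.
public class DoublingRatio {
    // One doubling step: b ~ log2(T(2N) / T(N)).
    public static double exponent(double tN, double t2N) {
        return Math.log(t2N / tN) / Math.log(2);
    }

    public static void main(String[] args) {
        // Observed runtimes from the table above, for N = 1, 2, 4, ..., 64.
        double[] time = {0.15, 0.14, 0.29, 0.41, 0.65, 1.66, 3.38};
        for (int i = 1; i < time.length; i++) {
            System.out.printf("N = %d -> %d: b ~ %.2f%n",
                1 << (i - 1), 1 << i, exponent(time[i - 1], time[i]));
        }
    }
}
```

The later ratios hover around b ≈ 1, so the data are at least consistent with the linear hypothesis the question asks about (small N is dominated by fixed costs and is unreliable).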
```
T(N) = a N^b   (power law)
``` Mathematically speaking - write recurrences for many of the standard algorithms - linear search $T(n) = 1 + T(n-1)$ - binary search $T(n) = 1 + T(n/2)$ - merge sort $T(n) = 2T(n/2) + n$ - quicksort $T(n) = T(n-i-1) + T(i) + n$ - insertion sort $T(n) = (n-1) + T(n-1)$ - solve them using many of the techniques discussed - Repeated application with a base case like $T(0) = 1$ or 0 - $T(n) = 1 + T(n-1) = 1 + (1 + T(n-2)) = \ldots$ counting memory - standard data types (int, boolean, double) - object overhead – 16 bytes - array overhead – 24 bytes - references – 8 bytes - Inner class reference – 8 bytes ```java
public class TwoThreeTree<Key extends Comparable<Key>, Value> {
    private Node root;
    private class Node {
        private int count;                 // subtree count
        private Key key1, key2;            // the one or two keys
        private Value val1, val2;          // the one or two values
        private Node left, middle, right;  // the two or three subtrees
        ...
    }
}
``` - How much memory is needed for a 2-3 tree object that holds N nodes? 
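As a worked instance of the memory-counting rules above, the sketch below totals the bytes per Node using the slide's costs (16 B object overhead, 8 B for the inner-class reference, 8 B per reference, 4 B per int); rounding each object up to a multiple of 8 bytes is my assumption about the JVM, not something the slide states:

```java
// Memory estimate for the 2-3 tree above, using the byte costs from the slide.
public class TwoThreeTreeMemory {
    public static long nodeBytes() {
        long raw = 16      // object overhead
                 + 8       // inner-class reference to the enclosing instance
                 + 4       // int count
                 + 7 * 8;  // 2 key refs + 2 value refs + 3 child refs
        return (raw + 7) / 8 * 8;  // assumed: pad up to a multiple of 8 bytes
    }

    // Total bytes for a tree with n nodes, excluding the keys and values themselves.
    public static long treeBytes(long n) {
        long treeObject = 16 + 8;  // tree object overhead + root reference
        return treeObject + n * nodeBytes();
    }

    public static void main(String[] args) {
        System.out.println(nodeBytes() + " bytes per node");
        System.out.println(treeBytes(1000) + " bytes for a 1000-node tree");
    }
}
```

So under these assumptions the answer is ~88N bytes plus a small constant, before counting the memory of the keys and values themselves.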
Stacks and queues - Amortized constant time operations - implementation using - linked lists - resizable arrays - many variations of stacks and queues asked in design questions - design a queue that allows removing a random element (in addition to dequeue) - design a queue using a resizable array - design a queue using two stacks Data Structure Performance estimates (worst or amortized) <table> <thead> <tr> <th>Data Structure</th> <th>find</th> <th>insert</th> <th>delete</th> <th>update</th> </tr> </thead> <tbody> <tr> <td>unordered array</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ordered array</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>resizable</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>linked list</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ordered linked list</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>queue</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>stack</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>binary heap</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>BST</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>LLRB</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> Resizing arrays - Arrays are static, simple, random access data structures - Arrays can be used in many applications - If resizing can be done efficiently - resizing by 1 is a bad idea (why?) - doubling the array is a good idea (why?) - can we get amortized constant performance for arbitrary insertion into an array? Using resizable arrays - Implement a stack - amortized constant time: pop and push - Implement a queue with a circular array - amortized constant time: enqueue and dequeue Stacks and Queues Resizable array questions - resizing the array by one gives amortized linear time per item (bad) - resizing the array by doubling/halving gives amortized constant time (good) - What if instead of doubling the size of the array, we triple the size? good or bad? 
- Resizing also includes shrinking the array by \( \frac{1}{2} \). When do we do that? When the array is less than half full or \( \frac{1}{4} \) full? What is a sequence of operations to justify your claim? Possible/impossible questions - is it possible to sort a list of \( n \) keys in linear time, where only \( d \) (some small constant) distinct keys exist among the \( n \) keys? - Is it possible to find the median of an unsorted list in linear time? Possible/impossible questions - is it possible to implement a FIFO queue using a single array, and still have amortized constant time for enqueue and dequeue? - Is it possible to solve the 3-sum problem in \( n \log n \) time? Possible/impossible questions - We can build a heap in linear time. Is it possible to build a BST in linear time? - Is it possible to find the max or min of any list in \( \log N \) time? - Is it possible to create a collection where an item can be stored or found in constant time? - Is it possible to design a max heap where find max, insertions and deletions can be done in constant time? Why? - Why do we ever use a BST when we can always use a hash table? - Why do we ever use arrays when we can use linked lists? - Why do we ever use a heap when we can always use an LLRB? 
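Returning to the resizable-array questions above, here is a small experiment (my own sketch, not course code) showing why doubling gives amortized constant time: a resizing stack that counts every array write, including the copies made during resizing. For N pushes the copies sum to 1 + 2 + 4 + ... < 2N, so the total stays below 3N writes.

```java
// Resizing-by-doubling stack that counts array writes, to show that
// N pushes cost at most ~3N writes total (amortized constant per push).
public class ResizingStack {
    private int[] a = new int[1];
    private int n = 0;
    private long writes = 0;  // every array-entry write, including resize copies

    public void push(int x) {
        if (n == a.length) resize(2 * a.length);
        a[n++] = x;
        writes++;
    }

    private void resize(int capacity) {
        int[] b = new int[capacity];
        for (int i = 0; i < n; i++) { b[i] = a[i]; writes++; }
        a = b;
    }

    public long writes() { return writes; }
    public int size() { return n; }

    public static void main(String[] args) {
        ResizingStack s = new ResizingStack();
        int N = 1_000_000;
        for (int i = 0; i < N; i++) s.push(i);
        System.out.println(s.writes() + " total writes for " + N + " pushes");
    }
}
```

Resizing by one instead would copy 1 + 2 + ... + (N-1) ≈ N²/2 entries, which is the "amortized linear per item" case the question calls bad.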
Union-find - quick-union and quick-find - Weighted quick-union - Weighted union-find question (and answer) Sorting Typical question: use the invariants to identify the sort algorithm **Basic sorts** - **Insertion sort** - Invariant: $A[0..i-1]$ is sorted - Performs well in practice for almost sorted data - Can be used in quicksort and merge sort to speed things up - **Selection sort** - Invariant: $A[0..i-1]$ is sorted and contains the smallest elements in the array - Not used in practice much **Standard or 2-way Quick sort** - Randomize the array - Find a pivot (A[j] usually) - Partition the array to find a pivot position $j$ such that $A[j] = pivot$ - Pointers stop and swap on keys equal to the pivot - Recurse on the subarrays, leaving the pivot in place - Properties - Good general purpose $n \log_2 n$ algorithm - Partitioning takes linear time - Not stable - In-place - Ideal for parallel implementations - Choosing a bad pivot can lead to quadratic performance - Works well when there are no duplicates **Demo of 2-way quick sort** (partition invariant: keys < x | x | keys > x) 3-way quick sort - same as 2-way quicksort - works well with duplicate keys - same process - choose a pivot, say x - partition the array as follows - Invariant - \(< x \quad == x \quad > x\) - uses Dijkstra’s 3-way partitioning algorithm Top-down merge sort - facts - recursive - merging is the main operation - performance - merging 2 sorted arrays takes linear time - merge sort tree is of height \(\log N\) - consistent linearithmic algorithm - other properties - uses extra linear space - Stable - equal keys retain relative position in subsequent sorts 3-way partitioning demo Bottom-up merge sort - facts - iterative - merges sub-arrays of size 2, 4, 8, ... (\(\log N\) times) to finally get a sorted array - performance - merging all sub-arrays takes linear time in each step - merge continues \(\log N\) times - consistent linearithmic algorithm - other properties - no recursion - stable - merge step retains
the position of the equal keys Demo of 3-way quick sort Heap Sort - build a max/min heap - delete max/min and insert into the end of the array (if the heap is implemented as an array) until the heap is empty - performance is linearithmic - is heap sort stable? Knuth shuffle - Generates random permutations of a finite set - Algorithm ```java
for (int i = n - 1; i > 0; i--) {
    int j = StdRandom.uniform(i + 1);  // uniform random index in [0, i]
    exch(a, i, j);
}
``` Sorting question - Suppose you are sorting n equal keys labeled k₁, k₂, k₃, ..., kₙ - Identify the number of compares (in terms of n) required when applying the following algorithms - Insertion sort - Selection sort - 2-way quicksort - 3-way quicksort - Mergesort - Heapsort Priority Queues Binary heaps - Invariant - for each node N - Key in N >= key in left child and key in right child - Good logarithmic performance for - insert - remove max - find max (constant) - Heap building - Bottom-up ➔ linear time (sink each level) - Top-down ➔ linearithmic (insert and swim) Heap questions - Given a heap, find out which key was inserted last? 
- it must be along the path of the right most leaf node in the tree - We always delete the root by exchanging that with the last leaf node - Build a heap - Bottom-up - Top-down - Applications - can be used in design questions where delete, insert takes logarithmic time and find max takes constant time Ordered Symbol Tables <table> <thead> <tr> <th>Operation</th> <th>Sequential search</th> <th>Binary search</th> <th>BST</th> </tr> </thead> <tbody> <tr> <td>search</td> <td>N</td> <td>log N</td> <td>A</td> </tr> <tr> <td>insert</td> <td>N</td> <td>N</td> <td>A</td> </tr> <tr> <td>min / max</td> <td>N</td> <td>N</td> <td>A</td> </tr> <tr> <td>floor / ceiling</td> <td>N</td> <td>log N</td> <td>A</td> </tr> <tr> <td>rank</td> <td>N</td> <td>log N</td> <td>A</td> </tr> <tr> <td>select</td> <td>N</td> <td>N</td> <td>A</td> </tr> <tr> <td>ordered iteration</td> <td>N log N</td> <td>N</td> <td>N</td> </tr> </tbody> </table> 2-3 Trees Two invariants - Balance invariant – each path from the root to a leaf node has the same length - Order invariant – an inorder traversal of the tree produces an ordered sequence 2-3 Tree operations Red-black trees - How to represent 3-nodes? - Regular BST with red “glue” links. Balanced Trees Red-black tree properties - A BST such that - No node has two red links connected to it - Every path from root to null link has the same number of black links - Red links lean left. Red-black tree questions - add or delete a key to/from a red-black tree and show how the tree is rebalanced - Determining the value of an unknown node - Less than M, greater than G, less than L - Know all the operations - Left rotation, right rotation, color flip - Know how to build an LLRB using the operations - Know how to go from a 2-3 tree to a red-black tree and vice versa Symbol Tables hashing - simple idea - given a key, find a hash function $H(key)$ that computes an integer value. - create a table of size $M$ and use $H(key) \% M$ to find a place. 
- hard to avoid collisions - separate chaining - linear probing - choose a good hash function - Easy to compute - Avoid collisions - Keep the longest chain length $O(\log N / \log \log N)$ using a random distribution of keys Hashing type questions - Given a set of keys, which table could they result in? - Look for keys that are in the table positions corresponding to their hash values - They were inserted first - There must be at least one key that is in the position of its hash value (the first key inserted) - Know the properties of a good hash function - Know how collisions are resolved using - Separate chaining - Linear probing - Know when to resize the hash table Algorithm and Data Structure Design Covered in detail in design session. See design notes on midterm site Design problems - Typically challenging - There can be many possible solutions - Partial credit awarded - Usually it is a data structure design to meet certain performance requirements for a given set of operations - Example, create a data structure that meets the following performance requirements - \( \text{findMedian} \) in \( \sim n \), \( \text{insert} \) \( \sim \log n \), \( \text{delete} \) \( \sim \log n \) - Example: A leaky queue that can remove from any point, that can insert to end and delete from front, all in logarithmic time or better - Typical cues to look for - \( \log n \) time may indicate that you need a sorted array or balanced BST or some sort of a heap - Amortized time may indicate, you can have some costly operations once in a while, but on average, it must perform as expected Design problem #1 - Design a randomizedArray structure that can insert and delete a random item from the array. Need to guarantee amortized constant performance - Insert(Item item) - delete()
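One possible (unofficial) answer to design problem #1: back the structure with a resizing array, and delete a random item by swapping it with the last item before shrinking. Both operations are then amortized constant time. A sketch:

```java
import java.util.Random;

// Sketch of one answer to design problem #1: randomized array with
// amortized-constant insert and delete-random.
public class RandomizedArray<Item> {
    private Object[] a = new Object[1];
    private int n = 0;
    private final Random rnd = new Random();

    public void insert(Item item) {
        if (n == a.length) resize(2 * a.length);
        a[n++] = item;
    }

    @SuppressWarnings("unchecked")
    public Item delete() {  // removes and returns a uniformly random item
        if (n == 0) throw new IllegalStateException("empty");
        int i = rnd.nextInt(n);
        Item item = (Item) a[i];
        a[i] = a[--n];      // swap the chosen item with the last one: O(1)
        a[n] = null;        // avoid loitering
        if (n > 0 && n == a.length / 4) resize(a.length / 2);
        return item;
    }

    public int size() { return n; }

    private void resize(int capacity) {
        Object[] b = new Object[capacity];
        System.arraycopy(a, 0, b, 0, n);
        a = b;
    }
}
```

The swap-with-last trick is what keeps delete constant time: removing from an arbitrary index by shifting would be linear, while the order of the remaining items does not matter here.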
{"Source-Url": "http://www.cs.princeton.edu:80/courses/archive/spring15/cos226/exams/spr15/midterm_review_sp15.pdf", "len_cl100k_base": 4680, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24864, "total-output-tokens": 5021, "length": "2e12", "weborganizer": {"__label__adult": 0.0006837844848632812, "__label__art_design": 0.0007801055908203125, "__label__crime_law": 0.0010251998901367188, "__label__education_jobs": 0.0736083984375, "__label__entertainment": 0.00017750263214111328, "__label__fashion_beauty": 0.0004336833953857422, "__label__finance_business": 0.0003952980041503906, "__label__food_dining": 0.0009450912475585938, "__label__games": 0.00238037109375, "__label__hardware": 0.0017833709716796875, "__label__health": 0.0010204315185546875, "__label__history": 0.0007863044738769531, "__label__home_hobbies": 0.00037980079650878906, "__label__industrial": 0.0008540153503417969, "__label__literature": 0.0007386207580566406, "__label__politics": 0.0007405281066894531, "__label__religion": 0.0012292861938476562, "__label__science_tech": 0.031829833984375, "__label__social_life": 0.0004773139953613281, "__label__software": 0.00742340087890625, "__label__software_dev": 0.86865234375, "__label__sports_fitness": 0.0016222000122070312, "__label__transportation": 0.001285552978515625, "__label__travel": 0.0005307197570800781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16176, 0.01553]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16176, 0.49479]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16176, 0.82014]], "google_gemma-3-12b-it_contains_pii": [[0, 1703, false], [1703, 3152, null], [3152, 5497, null], [5497, 7681, null], [7681, 9196, null], [9196, 9327, null], [9327, 10367, null], [10367, 11596, null], [11596, 12361, null], [12361, 13615, null], [13615, 15043, null], [15043, 
16176, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1703, true], [1703, 3152, null], [3152, 5497, null], [5497, 7681, null], [7681, 9196, null], [9196, 9327, null], [9327, 10367, null], [10367, 11596, null], [11596, 12361, null], [12361, 13615, null], [13615, 15043, null], [15043, 16176, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 16176, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16176, null]], "pdf_page_numbers": [[0, 1703, 1], [1703, 3152, 2], [3152, 5497, 3], [5497, 7681, 4], [7681, 9196, 5], [9196, 9327, 6], [9327, 10367, 7], [10367, 11596, 8], [11596, 12361, 9], [12361, 13615, 10], [13615, 15043, 11], [15043, 16176, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16176, 0.07657]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
004d3ebf044cdea37f17cc4fa41e462dc8a2652f
DS 2010 access control David Evans de239@cl.cam.ac.uk Access control (authorisation) in distributed systems access control within the overall system architecture: • as an individual e.g. from home • within a single administration domain e.g. CL • using external services from a domain as an individual or group member • federated domains: inter-domain authorisation We are concerned with authorisation for service use and/or object access How is access control policy expressed and enforced? Authorisation and authentication Authorisation is built above authentication (proof of identity – proof that you are who or what you say you are – will someone/something vouch for you?). Within an administration domain, principals are named and registered as individuals and members of groups. Principals authenticate in their home domain by means of e.g. passwords. The aim is to avoid having to have a username/password for every service. (...How does one remember them all? Bad idea to use the same one for all...) A Single Sign On service is needed. Authentication is covered in security courses. For background reading, slides 29-36 outline some single sign on systems for cross-domain service use: Raven, Shibboleth, OpenID Access control – from first principles Model: access matrix $A(i, j)$ rows represent principals, columns objects entry $(i, j)$ contains the rights principal $i$ has to object $j$ Implementation: since the matrix is sparse (most entries are null) 1. Using access control lists (ACLs): keep non-null entries of column $j$ with object $j$ ACL entry = principal name + access rights optimisation: group name = list of principals 2. Using capabilities: keep non-null entries of row $i$ with principal $i$ a capability (capability list entry) = object name + access rights Assume managers for the various types of object On an access request, the manager must check that the requesting principal has the appropriate right to perform the access 1. 
check that the ACL contains an appropriate entry 2. check that the capability passed by the principal with the request contains the right thing ACLs cf. capabilities ACLs Expressiveness: subtle expression of policy – entries may be for individual principals and groups, with individual exceptions. Revocation: easy to revoke – but ACL changes may not have immediate effect (because the ACL may not be checked on every access once an object is open) BUT: slow to check - scalability problem - if expressiveness is exploited, e.g. negatives and exceptions allowed - if there are many principals and large groups - generalisation? multi-domain operation? names outside the domain of registration? AND: awkward to delegate rights e.g. for a file to a printer for a single print job In a distributed system many services are not part of privileged OSs. Capabilities Quick to check – like a ticket – so they scale well Anonymous – knowledge of names not needed – may generalise to multiple domains. – anonymity may be wanted by some applications for privacy reasons Problems/issues ... because they are associated with the process rather than the object ... Capability-based access control - issues as defined so far, a capability is an object name and some rights 1. **protection** must prevent unauthorised creation, tampering, theft 2. **control of propagation** can principals pass on copies? must they ask the object manager? How can this be enforced? 3. **delegation** an example of propagation, often with restricted rights, for a limited time or action 4. **revocation** - if the access control policy changes, and certain principals should lose their rights, their capabilities should be revoked. Can this be done without revoking all capabilities for the service/object? - if a capability is known to have been stolen or tampered with it should be revoked instantly. Will this invalidate all capabilities for this service/object? - anonymity creates more revocation problems. 
Capabilities in centralised and distributed systems centralised Several capability architectures were designed and built e.g. Plessey PP250, CAP Capabilities can be protected by the hardware and/or the OS A capability is named via an index into a segment or an OS table - held in protected OS space per process - held in typed capability segments in user space with operations such as \textit{insert, delete, use-as-argument} distributed Can’t be protected by hardware/OS (without gyrations...maybe) Have to be transferred across networks and pass through user space so must be encryption-protected Security terminology and implementation: capability = signed certificate e.g. X.509 authentication and attribute certificates Capabilities in distributed systems - one design Must be protected by encryption The object manager (certificate issuer) keeps a \textit{SECRET} (random number, private key, whatever) and uses a well-known one-way function $f$ A capability is \textit{constructed} using \[ \text{check digits} = f ( \text{SECRET}, \text{protected fields} ) \] | protected fields | check digits (signature) | When a capability is \textit{presented} with an operation invocation, the manager checks that: \[ f ( \text{SECRET}, \text{protected fields} ) = \text{check digits} \] If not, the invocation is rejected. More generally, the invoked service may not be the capability issuer. The service can check back with the issuer (cf. Certification Authority) Access Control Encryption-protected capabilities – issues? 1. **protection** protect against tampering – adding rights, NOT against theft – eavesdropping on the network and replay attacks 2. **control of propagation** still no control over propagation 3. **delegation** object manager must be asked to create a capability with reduced rights to pass to another principal for delegated authority; duration is not controlled – nor further transfer 4. 
**revocation** (recall: needed when access control policy changes as well as for stolen capabilities) - expiry time as a protected field (like X.509) – crude mechanism - hot list of invalid invoking principals per service/object (spoofing? check overhead?) - change the SECRET – not selective – all old capabilities will not work and authorised principals will have to request new capabilities. Principal-specific capabilities include the name of the principal in the capability for generation and checking as an argument to $f$, perhaps as a protected field in the capability. 1. protection from tampering – YES, from theft – YES: if you authenticate the presenting principal 2. control of propagation YES – a capability for the receiving principal can only be created by the object manager 3. delegation YES – a capability for the receiving principal must be created by the object manager 4. revocation can be more selective – still involves overhead of checking hot list e.g. revocation list of principals excluded by policy change stolen capabilities should be detected on authenticating the presenting principal (unless the presenter is successfully masquerading as the owner!) All the above raise the question of the structure and scope of principal names and how and where principals are authenticated. ACLs in distributed systems We first followed the capability thread. We have discussed principal-specific capabilities Now, return to consider ACLs. ACLs comprise lists of principals (or groups) ACL entry = principal (or group) name, rights (from slide 3) where principals and groups are defined and registered within some administration domain. Without group names ACLs may become unmanageable, long lists of principals. Within the administration domain where a group and its constituent principals are registered, a group name can be expanded into a list of principals for checking. How can group names be used outside the domain where the group is registered? 
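The encryption-protected capability construction from the earlier slides, check digits = f(SECRET, protected fields), can be sketched as below. HMAC-SHA256 stands in for the unspecified one-way function f, and the field/digit encoding is my own choice:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Sketch of an encryption-protected capability: the issuer keeps a SECRET
// and appends check digits = f(SECRET, protected fields) to the capability.
public class Capability {
    // Issue: capability = protected fields + "|" + check digits.
    public static String issue(byte[] secret, String protectedFields) {
        return protectedFields + "|" + hmac(secret, protectedFields);
    }

    // Check on presentation: recompute f(SECRET, fields) and compare.
    public static boolean check(byte[] secret, String capability) {
        int sep = capability.lastIndexOf('|');
        if (sep < 0) return false;
        String fields = capability.substring(0, sep);
        String digits = capability.substring(sep + 1);
        return hmac(secret, fields).equals(digits);
    }

    private static String hmac(byte[] secret, String msg) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] d = mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note how this matches the issues list: tampering with the protected fields (e.g. adding rights) invalidates the check digits, but nothing here prevents theft, replay, or uncontrolled copying of the capability itself.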
We generalise groups to roles and consider role-based access control (RBAC) Role-based access control (RBAC) Services may classify their clients into named roles e.g. - login service: *logged-in-user* (after authentication) - patient monitoring service: *surgeon, doctor, nurse, patient* - online exam service: *candidate, examiner, chief examiner* - digital library service: *reader, librarian, administrator* Access rights (privileges) are assigned to roles for use of services (method invocation) or more fine-grained access to individual objects or broad categories of object managed by a service Scope of role names may be the local domain of the service or some role names may be organisation-wide, across federated domains - e.g. *sales-manager* used in all branches of a world-wide company - *police-sergeant* used in all of the UK’s county police forces - *NHS-doctor* used throughout the UK NHS Administration: note the indirection: \textit{principals} $\rightarrow$ \textit{roles}, \quad \textit{roles} $\rightarrow$ \textit{privileges} Service developers need only specify \textit{authorisation in terms of roles}, independently of the administration of principals e.g. annual student cohort, staff leaving and joining Principals are \textit{authenticated}, as always, and must also \textit{prove their right to acquire/activate a role}. They thus prove they are \textit{authorised} to use a service Compare with ACLs – like ACLs containing only group names. Compare with capabilities – can a capability that proves role membership be engineered? RBAC seems promising for fast authorisation checking. RBAC – 3: Parametrised roles Roles may be parametrised for fine-grained access control to capture: - relationships between principals: **Policy:** “only the doctor treating a patient may access the medical record” e.g. *treating-doctor ( hospital-ID, doctor-ID, patient-ID )* - patients and others may express *exclusions* as authorisation policy e.g. 
*doctor (doctor-ID)* **Policy:** “where doctor is not Shipman”, “where doctor is not <x> (a relative)” Compare with ACLs containing only groups, with exclusions of individual members - semantics of precedence of evaluation in ACLs has always been a difficult area. RBAC – 4: Role hierarchies Some RBAC systems define role hierarchies with privilege inheritance up the hierarchy. The hierarchy may mirror organisational structure, which reflects power and responsibility rather than functional competence. Privilege inheritance is even less defensible for functional roles. Also: privilege inheritance violates the principle of minimum necessary privilege and makes reasoning about privileges difficult – see many ACM SACMAT papers and the attendant world of hurt. Role hierarchies are defined in the later NIST RBAC standards. Our work has avoided privilege inheritance (see OASIS case study). RBAC – 5: Inter-domain authorisation RBAC eases authorisation outside principals’ home domains, because: • Roles change less frequently than principals leave and join them • Administration of users and role membership is separate from service development and use. • Negotiation on use of services external to domains can be in terms of roles, e.g. payment for a role to use a service • Federated domains may contain agreed role names in each domain. Makes policy easier to negotiate and express. e.g. *sales-department-staff, sales-manager, salesman* RBAC – 6: Authorisation context Authorisation policy could include other constraints on use of a role e.g. time of day, as well as relationships and exclusions. see OASIS case study – environmental constraints The privileges associated with a role might not be static. e.g. *student (course-ID, student-ID)* may read solutions to exercises only after marked work has been returned. e.g. Conference management system – a small-scale example follows of use of an external service from a number of domains. Example: conference management (e.g. 
Easychair, CMT, EDAS, ... ) selection from workflow and policy Program chair registers names, email addresses, initial password, and roles of the programme committee: roles PC-chair(s), PC-member all are sent an email asking them to register their account, change their password Authors submit papers, acquiring role contact-author, returned a UID for the paper contact-author may submit new versions up to the deadline PC-members are assigned papers to review. They may delegate some reviews: role reviewer per paper, separate from PC-member Conflicts of interest are expressed by submitting-authors and PC members and are enforced by the system PC members must never be able to know the reviewers and see the reviews of their own papers PC members can see only their own reviews until after the review deadline. After this, in a discussion phase, PC members may be able to see the ranked order and other reviews (except for their own papers). Systems vary in this respect. Note: small scale example e.g. 50 PC members, 200 papers...but what about multiple conferences? Note: rights will change after deadlines (an example of context) Design of capabilities/certificates can incorporate RBAC Traditional capabilities in centralised systems: <table> <thead> <tr> <th>object-ID</th> <th>rights</th> </tr> </thead> </table> proves the presenter has the rights to the object RBAC <table> <thead> <tr> <th>role</th> <th>parameters</th> </tr> </thead> </table> proves the presenter holds the role + parameters must be checked against access control policy Capabilities/certificates in distributed systems check digits = \( f \) (SECRET, protected fields) | protected fields (object-ID, rights) | check digits (signature) | RBAC in distributed systems check digits = \( f \) (SECRET, protected fields) | protected fields (role, parameters) | check digits (signature) | RBAC - discussion RBAC provides: 1. **Expressiveness** - subtle expression of access control policy. 
- if roles are parametrised, exclusions and relationships can be captured. - environmental/context checks (time/place) can also be included. 2. **Efficiency** - checking faster than ACLs - use of certificate technology comparable with capabilities - or use a secure channel and role authentication in the source domain 3. **Cross-domain interworking** - easy to negotiate - authorisation policy expressible and enforceable - heterogeneity of certificates – can check back with the issuing domain OASIS RBAC Open Architecture for Securely Interworking Services Case study from Opera Group research • OASIS services name their clients in terms of **roles** • OASIS services specify **policy** in terms of **roles** - for **role entry** (activation) - for **service invocation** (authorisation, access control) both in Horn clause form OASIS model of role activation A role activation rule is of the form: \[ \text{condition}_1, \text{condition}_2, \ldots \vdash \text{target role} \] where the conditions can be: - prerequisite role - appointment credential - environmental constraint All are parametrised OASIS: maintaining role membership as we have seen, a role activation rule: \[ \text{cond}_1^*, \text{cond}_2, \text{cond}_3^*, \ldots \vdash \text{target role} \] role membership rule: the role activation conditions that must remain true (e.g. 
*) for the principal to remain active in the role monitored using event-based middleware OASIS model of authorisation An authorisation rule is of the form: $$\text{condition}_1, \text{condition}_2, \ldots \mid\text{- access}$$ where the conditions can be - an active role - an environmental constraint all are parametrised A Service Secured by OASIS Access Control RMC = role membership certificate - = role entry - = use of service OASIS role activation illustrated role entry policy specification of service B, in Horn clause form conditions for principal P to activate some role prerequisite roles: P has RMC issued by A P has RMC issued by B appointment certificate: P has specified appointment environmental constraints role parameters checked in DB time is as specified P is issued new RMC by B administrative database for domain of service B appointment certificate (persistent) RMC for principal P for service A service A CR RMC event channels for revocation new RMC for principal P for service B time service for domain of service B service B RMC CR RMC CR Access Control Active Security Environment Monitoring membership rules of active roles RMC = role membership certificate CR = credential record ECR = external credential record Access Control It is not realistic for every service to manage secrets and issue certificates The CIA service, for services in its domain: - keeps the activation policies - activates roles - issues and validates certificates - maintains credential record structures for active roles - handles revocation via event channels The CIA service, for services in other domains: - validates certificates it has issued - handles revocation of its certificates Access Control OASIS philosophy and characteristics • Distributed architecture, not a single organisation. Incremental deployment of independently developed services in independent administration domains. • RBAC for scalability, parametrised roles for expressiveness of policy (e.g. 
exclusion of values, relationships between parameters).
• Policy expression is per service, per domain.
• Roles are activated within sessions. Persistent credentials may be required for role activation.
• Independent designs of RMCs may coexist - the service at which an RMC is presented checks back with the issuer for RMC validation.
• Service (domain) level agreements govern the use of others' RMCs.
• Anonymity if and when required.
• Immediate revocation on an individual basis.
• No role hierarchies with inheritance of privileges.

Background on cross-domain authentication
(From slide 2) - here is an outline of some single sign-on systems:
**Raven**: for use of websites across all the domains of Cambridge University
- common naming of principals (CRSIDs, nested domains)
- authentication is sufficient for authorisation
**Shibboleth**: organisation-centric - an organisation negotiates use of external services by its members
**OpenID**: user-centric - used by many large websites (BBC, Google, MySpace, PayPal, ...)

Raven
• Aim: avoid proliferation of passwords for UCam web services
- Raven is a Ucam-webauth single sign-on system instance
- developed within Cambridge (by Jon Warbrick)
• Three parties in the Ucam-webauth protocol:
- user's web browser
- target web server
- Raven web server
• Authentication token passed as an HTTP cookie
- thus should be passed using HTTPS... but often isn't

Example Raven dialogue
• User requests a protected page.
• Target web server checks for a Ucam-WLS-Session cookie.
• If found, and it decodes correctly, the page is returned. Done.
• Otherwise, the client browser is redirected to the Raven server
- information about the requested page is encoded in the URL.
• Raven gets and checks credentials
- (it also permits users to "cancel").
• Raven redirects the client browser to the protected page. Done.
- (An HTTP 401 error will be generated if the user cancelled.)

Raven coordinates participants using time
- The target web server verifies the Ucam-WLS-Session cookie.
- The public key of the Raven server is pre-loaded on the target web server.
- The target web server and Raven do not interact directly.
- The client browser receives, stores, and resends cookies.
- What about malicious client behaviour or interception, e.g. replay attacks?
- Raven requires time synchronisation.
- A site-specific clock-skew margin can be configured.

Shibboleth provides federated authentication
• System for federated authentication and authorisation
- Internet2 middleware group standard
- implements SAML: Security Assertion Markup Language
- facilitates single sign-on across administrative domains
• Raven actually speaks both Ucam-webauth and Shibboleth
- Shibboleth has the advantage of wider software support
• Identity providers (IdPs) supply user information
• Service providers (SPs) consume this information and get access to secure content

Shibboleth exchange
• Similar to Raven, but with some extra indirection:
- User requests a protected resource from the SP.
- SP crafts an authentication request.
- User is redirected to the IdP or a "Where Are You From" (WAYF) service, e.g. the UK Federation WAYF service.
- User authenticates (external to Shibboleth).
- Shibboleth generates a SAML authentication assertion handle.
- User is redirected to the SP.
- SP may issue an AttributeQuery to the IdP's attribute service.
- SP can make an access control decision.

OpenID
• Another cross-domain single sign-on system.
• Shibboleth is organisation-centric:
- organisations must agree to accept other organisations' statements regarding foreign users
- lots of support within the UK Joint Information Systems Committee (JISC) for accessing electronic resources
• OpenID is user-centric:
- primarily about identity
- OpenIDs are permanent URI or XRI structures

OpenID (cont.)
• User provides their ID to the relying-party web site:
- OpenID 1.0 retrieves a URL and learns the identity provider
- OpenID 2.0 retrieves an XRDS document and learns the identity provider
- the XRDS/Yadis indirection affords greater flexibility
• Many big commercial players offer OpenID assertions.
• There is also lots of open-source software support for OpenID.
• In terms of responsibility, consider its use for:
- access to a web resource
- access to a wireless network
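A recurring mechanism in these slides (capability check digits, RMC signatures, Raven's signed cookie) is a token whose protected fields are bound by a keyed check function f(SECRET, protected fields). A minimal sketch using HMAC-SHA256 as f; the field names and token format are illustrative, not any real system's:

```python
import hmac
import hashlib

SECRET = b"issuing-domain-secret"  # held only by the issuing service

def issue_token(fields):
    """Serialise the protected fields and append HMAC check digits."""
    protected = "|".join(f"{k}={v}" for k, v in sorted(fields.items()))
    digest = hmac.new(SECRET, protected.encode(), hashlib.sha256).hexdigest()
    return f"{protected}|{digest}"

def validate_token(token):
    """Recompute the check digits; reject any tampered token."""
    protected, _, digest = token.rpartition("|")
    expected = hmac.new(SECRET, protected.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(digest, expected):
        return None  # forged or modified protected fields
    return dict(kv.split("=", 1) for kv in protected.split("|"))

# An RBAC-style token carries (role, parameters) rather than (object-ID, rights):
rmc = issue_token({"role": "PC-member", "conf": "XYZ-2010", "principal": "P"})
assert validate_token(rmc)["role"] == "PC-member"
assert validate_token(rmc.replace("PC-member", "PC-chair")) is None  # tampering detected
```

As on the slides, the distributed RBAC variant differs from a classical capability only in what the protected fields contain; the presenter's role and parameters must still be checked against the service's access control policy.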
BERTPHONE: Phonetically-aware Encoder Representations for Utterance-level Speaker and Language Recognition

Shaoshi Ling, Julian Salazar, Yuzong Liu, Katrin Kirchhoff
Amazon AWS AI
{shaosl, julsal, liuyuzon, katrinki}@amazon.com

Abstract

We introduce BERTPHONE, a Transformer encoder trained on large speech corpora that outputs phonetically-aware contextual representation vectors that can be used for both speaker and language recognition. This is accomplished by training on two objectives: the first, inspired by adapting BERT to the continuous domain, involves masking spans of input frames and reconstructing the whole sequence for acoustic representation learning; the second, inspired by the success of bottleneck features from ASR, is a sequence-level CTC loss applied to phoneme labels for phonetic representation learning. We pretrain two BERTPHONE models (one on Fisher and one on TED-LIUM) and use them as feature extractors into x-vector-style DNNs for both tasks. We attain a state-of-the-art $C_{avg}$ of 6.16 on the challenging LRE07 3sec closed-set language recognition task. On Fisher and VoxCeleb speaker recognition tasks, we see an 18% relative reduction in speaker EER when training on BERTPHONE vectors instead of MFCCs. In general, BERTPHONE outperforms previous phonetic pretraining approaches on the same data. We release our code and models as an AWS GitHub repository.

### 1. Introduction

Motivated by improvements from pretraining bidirectional contextual representations like BERT [1] for language understanding tasks, we propose BERTPHONE, a pretraining approach that gives versatile representations for speech processing tasks. It is a deep Transformer encoder [2] that turns an utterance into a frame-wise vector sequence capturing both acoustic and phonetic information. Notably, BERTPHONE can be used as a fixed feature extractor on which task-specific models for speaker, language, and speech recognition can be trained.
BERT is a deep neural network trained on text under a masked language modeling (MLM) objective, where subword tokens are masked then predicted via classification. In contrast, speech features are continuous, which suggests their reconstruction as a possible generative analogue. However, acoustic frames are highly correlated over time, which models may exploit during training [3]. More generally, a contrastive loss may be better at inducing high-level latent knowledge [4]. For example, phonetic understanding in spoken language recognition is often incorporated via intermediate "bottleneck" representations from neural networks trained to classify context-dependent phonemes [5]. Hence, BERTPHONE uses two objectives to pretrain on large amounts of speech data, which we describe in Section 3. First, to induce higher-level acoustic representations, we propose masking contiguous spans of acoustic frames then reconstructing all frames independently under an $L_1$ penalty. Second, to induce and implicitly align phonetic information to each frame, we propose applying connectionist temporal classification (CTC) [6], a sequence loss computed against each utterance's unaligned context-independent phoneme labels.

After pretraining, we use the final layer of BERTPHONE as a fixed featurizer for a self-attentional x-vector model (Section 4) that is trained for either (text-independent) speaker recognition (SR) [7] or language recognition (LR) [8]. In Sections 5 and 6, we pretrain two BERTPHONE models (Fisher, TED-LIUM) then train classifiers for each of SR and LR; altogether, we consistently improve on end-to-end systems and other phonetic pretraining methods. Finally, in Section 7 we explore different weightings of both losses, train over BERTPHONE's layers, and try graphemes as an alternate supervision for CTC. BERTPHONE gives competitive phoneme error rates, suggesting downstream use for end-to-end automatic speech recognition (ASR) as well.

### 2. Related work

#### 2.1. Phonetic information for SR and LR

Over the past decade, the speech processing community has adopted deep neural networks (DNNs) for ASR, in both hybrid (HMM-DNN) [9] and end-to-end configurations [6, 10]. Past works have explored the "indirect method" of using the hybrid system's DNN as a frame-level feature extractor for SR and LR [11]. As the DNN is already trained on large speech corpora, it is thought to have captured higher-level phonetic understanding that is useful for LR [12], and to a lesser extent for SR [13, 14], ...

Figure 1: BERTPHONE pretraining. Contiguous spans of features are masked to zero, as represented by the gray circles. Two objectives are applied: one reconstructs the original frames from before masking, while the other predicts context-independent phoneme labels for each frame then computes CTC loss against the ground truth phoneme sequence.

This is often done with bottleneck features [5] (more generally, intermediate frame-wise features from a DNN), though DNN posteriors have also been used as sufficient statistics for i-vector systems [13]. However, since SR can require higher-level acoustic information (speaker traits) that is less relevant for ASR, the DNN's knowledge may be insufficient. This has led to original speech features being presented in tandem with DNN features in SR systems [11]. Another approach is to multitask the frame-level DNN with SR directly [15]. In contrast, we train an end-to-end ASR system as our feature extractor (removing the need for forced alignment) to capture phonetic knowledge. We only use the final layer for both tasks. To enable this, instead of multitasking with SR or presenting early layers in tandem, we use a self-supervised reconstruction task to induce the desired acoustic knowledge.

#### 2.2. Deep acoustic representations

Recent works have improved the effectiveness of DNNs in ASR by pretraining on large amounts of unlabeled speech.
These have involved recurrent neural networks with self-supervised objectives that are either contrastive (classifying a future audio sample from negative examples) or reconstructive (recreating future audio frames or wave samples). A contrastive loss was used by CPC [4] for phone classification and SR, and by wav2vec [16] for ASR. Full reconstruction was used for autoencoding by [17], and future frame reconstruction was used in autoregressive predictive coding (APC) [3] for phone classification and SR. Concurrent to our work, a number of preprints have also proposed pretraining speech representations bidirectionally (with recurrent networks or Transformer encoders), with either a contrastive [18, 19] or reconstructive [20, 21, 22] objective. None of these explore the benefits of weak or multi-task supervision, which we use to enable frozen, multi-purpose representations. These works also do not evaluate LR or large-scale out-of-corpus SR (VoxCeleb), which are the focus of our work.

### 3. BERTPHONE pretraining

#### 3.1. Architecture

Our pretraining scheme is depicted in Figure 1. We use a deep Transformer encoder [2], which consists of an embedding layer followed by a stack of self-attention layers, as implemented by BERT [1]. To simplify our analyses, we do not add convolutional layers or 2D attention layers before the embedding layer, though other self-attentional speech models have found these helpful [23]. Let $X = [x_1; \cdots; x_T]$ denote a sequence of input features. The embedding layer is applied, after which a learned matrix of position embeddings (size $d_{emb} \times T_{max}$) is added. A self-attention layer consists of two sublayers applied in sequence.
The first sublayer performs multi-head self-attention, which computes $n_{heads}$ sets of queries $Q$, keys $K$, and values $V$ by left-multiplying inputs $X$ with learned weights $W_Q, W_K \in \mathbb{R}^{d_{emb} \times d_{emb}}$ and $W_V \in \mathbb{R}^{d_{emb} \times d_{emb} / n_{heads}}$ respectively. The output of the $i$-th attention head is:

$$\text{HdAtt}^{(i)} = \text{softmax}(Q^{(i)T}K^{(i)}/\sqrt{d_{emb}})V^{(i)T} \quad (1)$$

where the softmax is applied row-wise. Heads are concatenated to give $d_{emb}$-dimensional features $\text{MltHdAtt} = [\text{HdAtt}^{(1)}; \cdots; \text{HdAtt}^{(n_{heads})}]^T$. The second sublayer learns a position-wise feed-forward network, which at position $t$ computes

\[ \text{FFN}(x_t) = W_2 \, \text{GELU}(W_1 x_t + b_1) + b_2 \quad (2) \]

where $W_1 \in \mathbb{R}^{d_{ff} \times d_{emb}}$ and $W_2 \in \mathbb{R}^{d_{emb} \times d_{ff}}$ for some inner dimension $d_{ff}$, and GELU is the non-linearity used in lieu of ReLU by BERT. Each sublayer has a residual connection and layer normalization, e.g., $\text{LN}(\text{MltHdAtt}(X) + X)$.

#### 3.2. Training criteria

We train on the sum of two losses in our framework: $L_1$ reconstruction under masking, and connectionist temporal classification (CTC) [6]. Inputs to our network are cepstral mean-normalized MFCCs of speech utterances. We stack every 3 frames to reduce the memory cost of long sequences [24].

The first loss is our proposed bidirectional predictive coding and denoising task. At 5% of the positions, we mask a span of 3 stacked frames (representing 9 frames or ~100ms of audio). These are replaced with zero vectors (recall inputs are mean-normalized), a method proposed in the supervised setting by SpecAugment [25]. In this way we mask ~15% of tokens, similar to BERT's pretraining scheme [1]. However, we follow SpanBERT [26] in masking spans of inputs instead, to induce higher-level representations by increasing the difficulty of the task (as speech frames are highly correlated over time).
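As an illustration, this span-masking step can be sketched in NumPy. The 5% span-start rate and 3-frame spans follow the text above; the array shapes and RNG handling are our own illustrative choices:

```python
import numpy as np

def mask_spans(frames, span_len=3, start_rate=0.05, rng=None):
    """Zero out contiguous spans of stacked frames: each position starts
    a span with probability ~5%, and each span covers 3 stacked frames,
    so ~15% of positions end up masked."""
    rng = rng if rng is not None else np.random.default_rng()
    T = frames.shape[0]
    mask = np.zeros(T, dtype=bool)
    for t in np.flatnonzero(rng.random(T) < start_rate):
        mask[t:t + span_len] = True
    masked = frames.copy()
    masked[mask] = 0.0  # zero vector = mean, since inputs are mean-normalized
    return masked, mask

# 100 stacked frames of 120-dim features (40-dim MFCCs stacked by 3)
x = np.random.default_rng(0).standard_normal((100, 120))
x_masked, mask = mask_spans(x, rng=np.random.default_rng(1))
assert np.all(x_masked[mask] == 0.0)           # masked spans are zeroed
assert np.allclose(x_masked[~mask], x[~mask])  # other positions untouched
```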
We also follow RoBERTa [27] and generate new masking patterns for each batch. Finally, instead of reconstructing only masked positions, we reconstruct everywhere, to induce acoustic information at all positions and more explicitly train BERTPHONE as a type of denoising autoencoder [17]:

\[ L_{\text{recons}} = \frac{1}{T} \sum_{t=1}^{T} |x_t - \text{FFN}(z_t)|, \quad (3) \]

where $z_t$ are outputs of the Transformer encoder (our "BERTPHONE vectors"). FFN follows Eq. 2 but here taking ReLU, $d_{ff} = d_{emb}$, and matching $x_t$'s dimension.

The second loss is supervised, to induce phonetic understanding and leverage our use of labeled speech corpora. To support end-to-end pretraining (i.e., no forced alignment), we propose the use of a sequence-level CTC loss. Let $Y = [y_1; \ldots; y_U]$ be the output sequence of tokens ($U \leq T$). Recall that CTC is a maximum likelihood objective $L_{\text{CTC}} = -\log P(Y \mid X)$, where the likelihood is given by summing over the probabilities of all $T$-length symbol sequences $S$ (over the original label alphabet augmented with a special blank token) that collapse to $Y$ after merging repetitions and removing blanks (the operation represented by $B$):

\[ P(Y \mid X) = \sum_{S \in B^{-1}(Y)} \prod_{t=1}^{T} P(s_t \mid X). \quad (4) \]

We see that position-wise tokens in the "path" $S$ are treated as conditionally independent from one another. The probabilities $P(s_t \mid X)$ are given by taking $z_t$ and applying a linear projection then a softmax. The combined loss is

\[ L = \lambda \cdot \sqrt{T} \cdot L_{\text{recons}} + (1 - \lambda) \cdot L_{\text{CTC}}, \quad (5) \]

where $\sqrt{T}$ empirically rescales $L_{\text{recons}}$ (which is averaged over frames) to be proportionate with $L_{\text{CTC}}$ (a sequence-level loss), and where $\lambda$ is a hyperparameter.

### 4. Task-specific model

Figure 2: Architecture of our x-vector + SAP model, depicted here for closed-set LR. For SR we also train with cross-entropy (CE) loss, but we extract embeddings at the final layer then use PLDA for classification.

Although our pretraining is BERT-like, our method of application is closer to ELMo [28] representations in NLP, and to bottleneck and tandem features in SR and LR: we keep BERTPHONE frozen. Furthermore, we only use the final layer's output, to see whether our multitask setup has captured a balance of acoustic and phonetic information. Instead, we treat BERTPHONE vectors as features (similar to wav2vec [16] or DeCoAR [22]) and train a segment-level DNN classifier on them.

#### 4.1. Architecture

Our classifier (Figure 2) is based on the DNN used to extract utterance-level x-vectors for speaker and language recognition [7, 8]. We re-implement the x-vector architecture in MXNet with two substitutions. First, the 5 time-delay neural network (TDNN) layers are replaced with 5 layers of 1D convolutions (CNN) with corresponding kernel sizes (2, 2, 3, 1, 1) and channel counts (512, ..., 512, 512×3). Second, instead of extracting the mean and standard deviation of the frame-wise vectors, we perform multi-head self-attentive pooling (SAP) [29]. By assigning relative importance to each time step, SAP focuses on frames relevant to the utterance-level decision. SAP improves duration robustness for speaker verification [30] and is used in state-of-the-art end-to-end LR models [31]. The output of a single head is a 512-dim. vector:

\[ v_{\text{SAP}}^{(i)} = \text{softmax}(u^{(i)} \tanh(WX))X^T. \quad (6) \]

This is analogous to the self-attention layer (Eq. 1), with \(u^{(i)}\) the head's query weight vector and \(W\) a shared linear transformation that gives the keys. We take 5 heads and get a combined 2560-dim. vector, similar to the x-vector system. The rest proceeds in the same way: two dense layers of size 512, then a linear projection and softmax. Batch norm and ReLU are applied between all layers except after SAP.

#### 4.2. Training criterion

We train x-vector + SAP models on BERTPHONE for each of LR and SR using cross-entropy loss. For closed-set LR, we use the softmax layer directly at inference time, since the LR task has fixed language categories. For SR, we use probabilistic linear discriminant analysis (PLDA) [32] to compare pairs of speaker embeddings.

### 5. Experimental setup

We use Kaldi [36] for data preparation. Our BERT and x-vector + SAP model implementations are based on MXNet's GluonNLP toolkit [37]. We release our code and models as an AWS GitHub repository.

#### 5.1. BERTPHONE pretraining

We use 40-dimensional mean-normalized MFCCs (window size of 25ms, hop length of 10ms) as inputs (before stacking every 3 frames). Our BERTPHONE parameters, learning schedule, and training details are consistent with the BERT base model described in [1], with 12 self-attention layers, \(d_{\text{emb}} = 768\) hidden dimensions, \(d_{\text{ff}} = 4d_{\text{emb}}\), and \(n_{\text{heads}} = 12\) heads. The main difference is that while BERT trains on contiguous excerpts of text, we take our variable-length speech utterances and load them in batches of 80, spread over multiple GPUs. Our warmup is over the first 3,000 batches, to a maximum learning rate of 5e-5. We train for around 30 epochs.

Since the VoxCeleb dataset for SR is sampled at 16kHz while the others are sampled at 8kHz (and to match previous configurations for phonetic pretraining), we train two rate-specific BERTPHONE models:
- Fisher English (8kHz). A corpus of telephone conversations [38], via the train split of Kaldi's s5 recipe.
- TED-LIUM (16kHz). A corpus of English TED talks [39], via the train split of Kaldi's s5r3 recipe.

To give phoneme labels for CTC on both datasets, we use the CMUdict lexicon. We omit lexical stresses to give an alphabet of 39 non-silence classes.

#### 5.2. Speaker recognition (SR)

To validate the joint usability of our representations, we take Fisher BERTPHONE's representations for the text-independent Fisher SR task described in [15], using the same training and evaluation sets\(^1\). This selects 172 hours of data from 5,000 speakers for training, and takes a disjoint set of 1,000 speakers for evaluation. Each person is enrolled using 10 segments (about 30sec in total) and evaluated on three 3sec segments.

To evaluate the success of BERTPHONE on large-scale and out-of-corpus SR, we take TED-LIUM BERTPHONE for the text-independent VoxCeleb task [40]. We use Kaldi's voxceleb/v2 recipe. The training set includes the VoxCeleb1 development set and all of VoxCeleb2. The VoxCeleb1 test set is used as the evaluation set. The number of speakers in the training set is 7,146, and the number of utterances is 2,081,192 including augmentation [7]. We perform stochastic gradient descent with batch size 128, momentum 0.9, weight decay 1e-4, and a learning rate of 0.01, which is decayed by 10 when the validation loss plateaus.

#### 5.3. Language recognition (LR)

We take Fisher BERTPHONE's representations for the closed-set general LR task of the 2007 NIST LR Evaluation (LRE07), where the objective is to identify an utterance's language from a set of 14 languages. We use the train split of the lre07v2/2 recipe, which includes LRE07's training data along with CALLFRIEND, LRE96/03/05/09, and SRE08, analogous to past work [31, 34, 35]. The utterances are split into 4sec segments with 50% overlap, similar to [41]. Every epoch, 8,000 to 12,000 segments are randomly selected per language and distributed over batches to mitigate class imbalance. We use the same optimization as in SR, though since no validation set is available we decay the learning rate when the training loss plateaus, doing this twice. At test time, VAD was applied to reduce the length of the 6,474 closed-set test utterances.
These are then split into non-overlapping segments of ~4sec; however, the self-attention pooling layer (Section 4) occurs over the frame features from all segments to give a single, utterance-level language prediction.

\(^1\)https://github.com/mycrazycracy/Speaker_embedding_with_phonetic_information

Table 1: Speaker recognition results. **Italicized models** use: **Left:** pretraining on Fisher for a Fisher SR task; **Right:** pretraining on TED-LIUM for the VoxCeleb SR task. Numbers below the midline are ours; non-pretrained i-vector and x-vector numbers on the left are from [15], and on the right are from the Kaldi codebase. The minDCF variants are minimum detection costs as defined in [15, 14].

<table>
<thead>
<tr> <th rowspan="2">System</th> <th colspan="3">(Fisher →) Fisher</th> <th colspan="3">(TED-LIUM →) VoxCeleb</th> </tr>
<tr> <th>EER</th> <th>minDCF08</th> <th>minDCF10</th> <th>EER</th> <th>minDCF0.01</th> <th>minDCF0.001</th> </tr>
</thead>
<tbody>
<tr> <td>i-vector</td> <td>2.10</td> <td>0.093</td> <td>0.334</td> <td>5.24</td> <td>0.493</td> <td>0.616</td> </tr>
<tr> <td>x-vector</td> <td>1.73</td> <td>0.086</td> <td>0.363</td> <td>3.13</td> <td>0.326</td> <td>0.500</td> </tr>
<tr> <td>x-vector + phonetic vec. + multi-tasking [15]</td> <td>1.39</td> <td>0.073</td> <td>0.308</td> <td>-</td> <td>-</td> <td>-</td> </tr>
<tr> <td>x-vector + adv. loss + multi-tasking [14]</td> <td>-</td> <td>-</td> <td>-</td> <td>3.17</td> <td>0.336</td> <td>-</td> </tr>
<tr> <td>x-vector + SAP model on MFCCs</td> <td>1.50</td> <td>0.079</td> <td>0.316</td> <td>3.06</td> <td>0.322</td> <td>0.514</td> </tr>
<tr> <td>x-vector + SAP model on BERTPHONE</td> <td><strong>1.23</strong></td> <td>0.067</td> <td>0.268</td> <td><strong>2.51</strong></td> <td>0.300</td> <td>0.439</td> </tr>
</tbody>
</table>

### 6. Results

#### 6.1. Speaker recognition

Table 1 shows our model's task performance in terms of equal error rate (EER) and minimum detection cost (minDCF) versus other embedding-plus-phonetic-information approaches. In the Fisher-to-Fisher case, BERTPHONE even improves over the shared-features and multi-tasking approach, where the phonetic extractor is learned jointly between ASR and SR [15].
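For reference, the EER reported throughout is the operating point at which the false-accept and false-reject rates coincide. A minimal sketch of computing it over scored verification trials (the scores and labels here are illustrative, not from the paper's experiments):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: sweep thresholds over the observed scores and return the
    point where the false-accept rate (impostors scored at or above
    threshold) equals the false-reject rate (targets scored below)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    far, frr = [], []
    for thr in np.sort(scores):
        far.append(np.mean(scores[~labels] >= thr))  # impostor accepts
        frr.append(np.mean(scores[labels] < thr))    # target rejects
    far, frr = np.array(far), np.array(frr)
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Perfectly separated trials (targets labelled 1) give an EER of 0:
assert equal_error_rate([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0]) == 0.0
```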
On the large-scale, out-of-corpus VoxCeleb SR task, training on TED-LIUM BERTPHONE gives an 18% relative reduction in EER over training directly on MFCCs. Our model also improves on recent work that uses the same pretraining set [14] via multi-tasking and adversarial training, although their x-vector baseline is weaker.

#### 6.2. Language recognition

Table 2 shows the performance of our x-vector + SAP model trained on MFCCs and on BERTPHONE vectors, on the LRE07 closed-set task. Performance is reported as average detection cost $C_{avg}$ and equal error rate (EER%). We get significant improvements over both end-to-end and phonetically-aware systems from the past two years. We achieve state-of-the-art results on the 3sec and 10sec conditions despite having only trained on ~4sec segments, which is a testament to the effectiveness of self-attention in prioritizing relevant frames. Though we underperform pretrained systems at 30sec, we still improve on all end-to-end methods.

### 7. Analysis

#### 7.1. Reconstruction versus CTC loss

<table>
<thead>
<tr> <th>Features</th> <th>PER</th> <th>LR EER (3sec)</th> <th>SR EER</th> </tr>
</thead>
<tbody>
<tr> <td>BERTPHONE, $\lambda$ = 0.0</td> <td>11.5</td> <td>4.63</td> <td>5.23</td> </tr>
<tr> <td>BERTPHONE, $\lambda$ = 0.2</td> <td>13.1</td> <td>5.19</td> <td><strong>1.23</strong></td> </tr>
<tr> <td>BERTPHONE, $\lambda$ = 0.5</td> <td>13.9</td> <td>5.42</td> <td>1.27</td> </tr>
<tr> <td>BERTPHONE, $\lambda$ = 0.8</td> <td>14.4</td> <td>6.07</td> <td><strong>1.23</strong></td> </tr>
<tr> <td>BERTPHONE, $\lambda$ = 1.0</td> <td>-</td> <td>14.97</td> <td>1.50</td> </tr>
<tr> <td>MFCC</td> <td>-</td> <td>13.21</td> <td>1.50</td> </tr>
</tbody>
</table>

Table 3: BERTPHONE models pretrained on Fisher with different interpolation weights, with their phoneme error rates (PER) on the development set, plus EERs after training on downstream SR and LR tasks.
In Table 3 we interpolate between \( \lambda = 0 \) (CTC only) and \( \lambda = 1 \) (reconstruction only). LR and SR performance is equivalent or slightly degrades when BERTPHONE is only trained to reconstruct. For LR, we find that CTC-only did best; any reconstruction resulted in degradation, presumably as it degraded the quality of phonetic information encoded (though all models with \( \lambda < 1 \) remain state-of-the-art). For SR, the model does best when some CTC loss is introduced, in line with previous work on the relevance of phonetic information to SR. As expected, using vectors from a CTC-only model actively degrades SR performance. In practice, one might balance these concerns and take, e.g., \( \lambda = 0.2 \). ### 7.2. Intermediate BERTPHONE representations ![Figure 3: Per-layer weights \( s^\text{task}_\ell \) learned for SR and LR tasks when trained across representations from Fisher BERTPHONE, \( \lambda = 0.2 \).](image) While utterance-level SR and LR are both classification problems on speech, one would expect different features to be discriminative for each task. Instead of multi-tasking to induce a balance of features in the last layer, one could instead learn to take a linear combination of features across layers. Inspired by ELMo [28], we train the x-vector + SAP model to use a global, softmax-normalized set of learned weights \( s^\text{task}_\ell \) to pool representations over the \( L = 12 \) layers: \[ z^\text{task} = \sum_{\ell=1}^{L} s^\text{task}_\ell z^\text{pretrain}_\ell. \tag{7} \] Since each \( z^\text{pretrain}_\ell \) is layer-normalized, we interpret the weights without rescaling. In Figure 3, we see that given this flexibility, LR uses representations largely from later layers, peaking at layer 10. This is consistent with LR primarily using phonetic information.
We speculate that layers 11 and 12 begin to specialize in preparation for the CTC objective (so that conditional dependence between positions is captured before the output layer). In contrast, SR uses more of the middle layers, with modes at layers 6, 9, and 12. This suggests a healthy balance of acoustic and phonetic information being leveraged. In all, this matches one’s intuition that LR uses higher-level features (e.g., a language’s preferred phonetic sequences) while SR uses primarily lower-level features (qualities like pitch and vocal range), plus possible phonetic preferences (given our text-independent setting). Finally, we note that these models did not perform any better than using the last layer. We speculate that a weighted summation over fixed vectors is rather unnatural, and that it requires an unfrozen BERTPHONE to perform well. ### 7.3. Choice of pretraining alphabet We evaluate how the choice of label set for CTC affects downstream performance. We pretrained two additional models (Fisher BERTPHONE, \( \lambda = 0.2 \)) with other label sets: phonemes with lexical stress (primary and secondary) using CMUdict, and characters (uncased letters, digits, space, punctuation). We train task-specific models atop these systems as before; results are in Table 4. <table> <thead> <tr> <th>Pretraining alphabet</th> <th>LR EER (3sec)</th> <th>SR EER</th> </tr> </thead> <tbody> <tr> <td>characters</td> <td>6.44</td> <td>1.36</td> </tr> <tr> <td>phonemes (no stress)</td> <td>5.19</td> <td>1.23</td> </tr> <tr> <td>phonemes (with stress)</td> <td>5.33</td> <td>1.26</td> </tr> </tbody> </table> Table 4: Downstream results for different choices of CTC labels, when pretraining Fisher BERTPHONE, \( \lambda = 0.2 \). As expected, performance improves when going from characters to context-independent phonemes, the latter being more conditionally-independent (the CTC assumption) and whose prediction more explicitly codes phonetic information.
The character-based model remains competitive, which is not too surprising as character CTC is known to still learn phonetic representations internally [42]. We see equal or slight degradation when using phonemes with lexical stresses indicated. We note that all three models have token error rates between 10% and 13% (upon greedy decoding via CTC), in line with their alphabet sizes. Hence, with WFST-based decoding using CMUdict and a language model, BERTPHONE could be easily adapted to give a competitive end-to-end ASR model that looks similar to a self-attention + CTC system (SAN-CTC) [24]. ### 8. Conclusion We introduced BERTPHONE, self-attentive, phonetically-aware acoustic contextual representations that can be used with small task-specific models to jointly improve performance on multiple speech tasks, namely language and speaker recognition. Future work can explore the additional gains from unfreezing BERTPHONE as done in the original BERT work, although this removes the multi-functional property of our representations. In addition to tuning $\lambda$, one could also try using intermediate layers to improve performance. One can also evaluate the use of BERTPHONE for speech recognition pretraining by adding further layers and implementing CTC decoding. Finally, note that the $L_1$ loss can be used by itself on unlabeled audio, suggesting the possibility of training on larger, unlabeled audio corpora. 9. Acknowledgements We thank Davis Liang and Sundararajan Srinivasan for helpful feedback regarding this work. 10. References
GNU Bayonne: telephony application server of the GNU project David Sugar Open Source Telecom, Somerset, NJ 08873 sugar@gnu.org http://www.gnu.org/software/bayonne Abstract GNU Bayonne is a middleware telephony server that can be used to create and deploy script-driven telephony application services. These services interact with users over the public telephone network. GNU Bayonne can be used to create carrier applications like Voice Mail and calling card systems, as well as enterprise applications such as unified messaging. It can be used to provide voice response for e-commerce systems and has been used in this role in various e-gov projects. GNU Bayonne can also be used to telephony-enable existing scripting languages such as perl and python. 1 Introduction Our goal in GNU Bayonne was to make telephony services as easy to program and deploy as a web server is today. We chose to make this server easily programmable thru server scripting. We also wanted it to be highly portable, and to allow it to integrate with existing application scripting tools so that one could leverage not just the core server but the entire platform to deliver telephony functionality and integrate with other resources like databases. GNU Bayonne, as a telephony server, also imposes some very real and unique design constraints. For example, we must provide interactive voice response in real-time. “Realtime” in this case may mean what a person might tolerate, or a delay of 1/10th of a second, rather than what one might measure in milliseconds in other kinds of real-time applications. However, this still means that the service cannot block, for, after all, you cannot flow control people speaking. Since each vendor of telephony hardware has chosen to create their own unique and substantial application library interface, we needed GNU Bayonne to sit above these and be able to abstract them. Ultimately we chose to create a driver plugin architecture to do this.
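The driver plugin architecture can be pictured as a generic C++ interface that each vendor's plugin implements, so applications never see the vendor API directly. The class and method names below are illustrative assumptions for this sketch, not the actual GNU Bayonne driver interface:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of a driver abstraction: application logic is
// written against this interface, and each vendor API (Aculab,
// Dialogic, CAPI, ...) is wrapped in a plugin that implements it.
class TelephonyDriver {
public:
    virtual ~TelephonyDriver() = default;
    virtual std::string name() const = 0;
    virtual bool answer(int port) = 0;  // pick up an inbound call
    virtual bool play(int port, const std::string& prompt) = 0;
};

// A vendor plugin only has to fill in the virtuals.
class NullDriver : public TelephonyDriver {
public:
    std::string name() const override { return "null"; }
    bool answer(int) override { return true; }
    bool play(int, const std::string&) override { return true; }
};

// Application code stays the same whichever driver is loaded.
bool greet_caller(TelephonyDriver& drv, int port) {
    return drv.answer(port) && drv.play(port, "welcome.au");
}
```

Swapping hardware then means loading a different plugin; `greet_caller` is untouched.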
What this means is that you can get a card and API from Aculab, for example, write your application in GNU Bayonne using it, and later choose, say, to use Intel telephony hardware, and still have your application run, unmodified. This has never been done in the industry widely because many of these same telephony hardware manufacturers like to produce their own middleware solutions that lock users into their products. 2 GNU Common C++ To create GNU Bayonne we needed a portable foundation written in C++. I wanted to use C++ for several reasons. First, the highly abstract nature of the driver interfaces seemed very natural to use class encapsulation for. Second, I found I personally could write C++ code faster and more bug-free than I could write C code. Why we chose not to use an existing framework is also simple to explain. We knew we needed threading and socket support, and a few other things. There was no single framework that did all these things, except a few very large and complex ones that did far more than we needed. We wanted a small footprint for Bayonne, and the most adaptable framework that we found at the time typically added several megs of core image just for the runtime library. GNU Common C++ (originally APE) was created to provide a very easy to comprehend and portable class abstraction for threads, sockets, semaphores, exceptions, etc. This has since grown into its own and is now used as a foundation of a number of projects as well as being a part of GNU. 3 GNU ccScript In addition to having portable C++ threading, we needed a scripting engine. This scripting system had to operate in conjunction with a non-blocking state-transition call processing system. It also had to offer immediate call response, and support several hundred to a thousand instances running concurrently in one server image. Many extension languages assume a separate execution instance (thread or process) for each interpreter instance. These were unsuitable.
Many extension languages assume expression parsing with non-deterministic run time. An expression could invoke recursive functions or entire subprograms, for example. Again, since we wanted not to have a separate execution instance for each interpreter instance, and have each instance respond to the leading edge of an event callback from the telephony driver as it steps thru a state machine, none of the existing common solutions like tcl, perl, guile, etc., would immediately work for us. Instead, we created a non-blocking and deterministic scripting engine, GNU ccScript. GNU ccScript is unique in several ways. It is step executed, and is non-blocking. Statements either execute and return immediately, or they schedule their completion for a later time with the executive. Execution proceeds one “step” at a time, rather than linearly. This allows a single thread to invoke and manage multiple interpreter instances. While GNU Bayonne can support interacting with hundreds of simultaneous telephone callers on high density carrier scale hardware, we do not require hundreds of native “thread” instances running in the server, and we have a very modest cpu load. Another way GNU ccScript is unique is in support for memory loaded scripts. To avoid delay or blocking while loading scripts, all scripts are loaded and parsed into a virtual machine structure in memory. When we wish to change scripts, a brand new virtual machine instance is created to contain these scripts. Calls currently in progress continue under the old vm, and new callers are offered the new vm. When the last old call terminates, the entire old vm is then disposed of. This allows for 100% uptime even while services are modified. Finally, GNU ccScript allows direct class extension of the script interpreter. This allows one to easily create a derived dialect specific to a given application, or even specific to a given GNU Bayonne driver, simply by deriving it from the core language thru standard C++ class extension.
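The step-execution model above can be sketched in plain C++. This is a minimal illustration under assumed names, not the actual GNU ccScript code: each `step()` call runs exactly one statement and returns immediately, so a single scheduler thread can interleave many interpreter instances without blocking on any of them.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// One interpreter instance: a program counter over a list of statements.
// A statement either falls through (return false) or has already moved
// the program counter itself, e.g. for a branch (return true).
class StepInterp {
public:
    explicit StepInterp(std::vector<std::function<bool(StepInterp&)>> prog)
        : program(std::move(prog)) {}

    // Run exactly one statement; returns false once the script is done.
    bool step() {
        if (pc >= program.size()) return false;
        bool moved_pc = program[pc](*this);
        if (!moved_pc) ++pc;
        return pc < program.size();
    }

    std::size_t pc = 0;
    int accumulator = 0;  // stands in for per-call script state

private:
    std::vector<std::function<bool(StepInterp&)>> program;
};

// A single thread round-robins all active calls, one step each per pass.
int run_all(std::vector<StepInterp>& calls) {
    bool active = true;
    int passes = 0;
    while (active) {
        active = false;
        for (auto& c : calls)
            if (c.step()) active = true;
        ++passes;
    }
    return passes;
}
```

Because every step returns immediately, no caller can stall another; a blocking operation would instead schedule its completion and yield, as the text describes.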
4 TGI support and plugins While GNU Bayonne offers a ccScript virtual interpreter for creating telephony applications, we wanted to be able to integrate support for databases and other things. There are systems and scripting environments such as Perl which already offer database connectivity. So we created a concept called “TGI,” which, like CGI, allows external executables to be invoked from within a call flow script, and the results to be recorded so that information can be passed both to and from the user. The TGI model for GNU Bayonne is very similar to how CGI works for a web server. In TGI, a separate process is started, and it is passed information on the phone caller thru environment variables. Environment variables are used rather than command line arguments to prevent snooping of transactions that might include things like credit card information and which might be visible to a simple “ps” command. The TGI process is tethered to GNU Bayonne thru stdout, and any output it generates is used to invoke server commands. These commands can do things like set return values, such as the result of a database lookup, or they can do things like invoke new sessions to perform outbound dialing. A “pool” of available processes is maintained for TGI gateways so that it can be treated as a restricted resource, rather than creating a gateway for each concurrent call session. It is assumed gateway execution time represents a small percentage of total call time, so it is efficient to maintain a small process pool always available for quick TGI startup, and desirable to prevent stampeding if, say, all the callers hit a TGI at the exact same moment. TGI does involve a lot of overhead, and so in addition we have the ability to create direct command extensions to the native GNU Bayonne scripting languages.
These command extensions can be processed thru plugin modules which can be loaded at runtime, and offer both scripting-language-visible interface extensions and, within the plugin, the logic necessary to support the operation being represented to the scripting system. These are much more tightly coupled to the internal virtual machine environment, and a well written plugin could make use of thread pools or other resources in a very efficient manner for high port capacity applications. 5 Architecture As can be seen, we bring all these elements together into a GNU Bayonne server, which then executes as a single core image. The server itself exports a series of base classes which are then derived in plugins. In this way, the core server itself acts as a “library” as well as a system image. One advantage of this scheme is that, unlike a true library, the loaded modules and core server do not need to be relocatable, since only one instance is instantiated in a specific form that is not shared over arbitrary processes. When the server comes up, it creates gateways and loads plugins. The plugins themselves use base classes found in the server and derived objects that are defined for static storage. This means that when the plugin object is mapped in at module load time, its constructor is immediately executed, and the object’s base class, found in the server image, registers the object with the rest of GNU Bayonne. Using this method, plugins in effect automatically register themselves through the server as they are loaded, rather than through a separate runtime operation. The server itself also instantiates some objects at startup even before main() runs. These are typically objects related to plugin registration or parsing of the config file. 6 Hardware Requirements Since GNU Bayonne has to interact with telephone users over the public telephone network or a private branch exchange, there must be hardware used to interconnect GNU Bayonne to the telephone network.
There are many vendors that supply this kind of hardware and often as PC add-on cards. Some of these cards are single line telephony devices such as the Quicknet LineJack card, and others might support multiple T1 spans. Some of these cards have extensive on-board DSP resources and TDM busses to allow interconnection and switching. GNU Bayonne tries to abstract the hardware as much as possible and supports a very broad range of hardware already. GNU Bayonne offers support for /dev/phone Linux kernel telephony cards such as the Quicknet LineJack, for multiport analog DSP cards from VoiceTronix and Dialogic, and digital telephony cards including CAPI 2.0 (CAPI4Linux) compliant cards, and digital span cards from Intel/Dialogic and Aculab. We are always looking to broaden this range of card support. At present both voice modem and OpenH323 support is being worked on. Voice modem support will allow one to use generic low cost voice modems as a GNU Bayonne telephony resource. The openh323 driver will actually require no hardware but will enable GNU Bayonne to be used as an application server for telephone networks and softswitch equipment built around the h323 protocol family. At the time of this writing I am not sure if either or both of these will be completed in time for the 1.0 release. 7 GNU Bayonne and XML Scripting Some people have chosen to create telephony services through web scripting, which is an admirable ambition. To do this, several XML dialects have been created, but the idea is essentially the same. A query is made, typically to a web server, which then does some local processing and spits back a well formed XML document, which can then be used as a script to interact with the telephone user. These make use of XML to generate application logic and control much like a scripting language, and, perhaps, is an inappropriate use of XML, which really is designed for document presentation and inter-exchange rather than as a scripting tool. 
However, given the popularity of creating services in this manner, we do support them in GNU Bayonne. GNU Bayonne was not designed with a single or specific XML dialect in mind, and as such it uses a plugin. The design is implemented by dynamically transcoding an XML document that has been fetched into the internal ccScript virtual machine instructions, and then executing the transcoded script as if it were a native ccScript application. This allows us to transcode different XML dialects and run them on GNU Bayonne, or even support multiple dialects at once. Since we now learn that several companies are trying to force thru XML voice browsing standards which they have patent claims in, it seems fortunate that we neither depend on XML scripting nor are restricted to a specific dialect at this time. My main concern is that the W3C will standardize voice browsing itself only to later find out that the very process of presenting a document in XML encoded scripting to a telephone user may turn out to have a submarine patent, rather than just the specific attempts to patent parts of the existing W3C voice browsing standard efforts. 8 Current Status At the time of this paper’s publication, the 1.0 release of GNU Bayonne should already be in active distribution. This release represents several years of active development and has been standardized in how it operates and how it is deployed. Even before this point, and for the past 6 months, active development has happened on a second generation GNU Bayonne server, and snapshots of this new server are currently available for download. Where GNU Bayonne is evolving will be explained further on. 9 GNU Bayonne the Meta Projects GNU Bayonne does not exist alone but is part of a larger meta-project, “GNUCOMM.” The goal of GNUCOMM is to provide telephony services for both current and next generation telephone networks using freely licensed software.
These services could be defined as services that interact with desktop users such as address books that can dial phones and softphone applications, services for telephone switching such as the IPSwitch GNU softswitch project and GNU oSIP proxy registrar, services for gateways between current and next generation telephone networks such as troll and proxies between firewalled telephone networks such as Ogre, realtime database transaction systems like preViking Infotel and BayonneDB, and voice application services such as those delivered thru GNU Bayonne. 10 Transactional Databases BayonneDB is mentioned briefly for transactional services. When we conceived of the need for a transactional database server, we considered that database queries might be slow. The telephony server does not want to do nothing while a transaction is completing, especially if it takes many seconds to happen. Maybe the caller needs to be played music on hold or given other options. To accomplish non-blocking transactions that allow the telephony server to continue call processing, we chose a peer messaging architecture. A request would be sent to an external server for a transaction, and when the transaction completes, a result message would be sent to the server. There can be time-out and retransmission controls which allow this to be conducted thru UDP packets rather than potentially blocking TCP sessions. This set of protocols and specifications was created initially by Zaheer Milari and myself and published early last year. BayonneDB was an attempt to implement the concepts in an operational server. Like Bayonne, BayonneDB offers abstraction thru plugins and is based on GNU Common C++. In the case of BayonneDB, it is designed to abstract the interface to the underlying database server used to complete the transaction request. Being threaded, BayonneDB can maintain a persistent threadpool of database connections to optimize overall query performance.
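The timeout-and-retransmission discipline described above (send a UDP request, wait one timeout period, retransmit on silence, give up after a few tries) can be sketched as follows. `send_fn` and `recv_with_timeout` are stand-ins for real socket calls, and the retry count is an assumption, not a value from the BayonneDB protocol:

```cpp
#include <cassert>
#include <functional>
#include <optional>
#include <string>

// One non-blocking transaction attempt cycle: retransmit the request
// until a reply arrives within a timeout window or retries run out.
std::optional<std::string> transact(
    const std::string& request,
    const std::function<void(const std::string&)>& send_fn,
    const std::function<std::optional<std::string>()>& recv_with_timeout,
    int max_retries = 3) {
    for (int attempt = 0; attempt <= max_retries; ++attempt) {
        send_fn(request);                      // (re)transmit the query
        if (auto reply = recv_with_timeout())  // wait one timeout period
            return reply;                      // transaction completed
    }
    return std::nullopt;                       // give up after retries
}
```

In the real system this loop would run in the transaction layer while call processing continues; the script is notified when the result message finally arrives.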
Figure 2: Architecture of BayonneDB

Figure 4: Carrier Applications Today

11 Enterprise Applications In our broadest view of enterprise telephony applications, we can see using GNU Bayonne as a part of an overall solution. GNU Bayonne must be able to interact with enterprise data, whether thru transaction monitors such as BayonneDB or thru perl scripts executed via TGI. It may need to interact with other services such as email when delivering voice messages to a unified mailbox, or the local phone switch thru a resource such as Babylon. We will explain Babylon a bit later. Our view of GNU Bayonne and telephony application services is that they are a strategic and integral part of the commercial enterprise. Proprietary solutions in common use today have often been designed from the question of how to lock a user into a specific OEM product family and control what a user or reseller can do with or integrate into such products, rather than from the question of what the enterprise user needs and how to provide the means to enable it. This has often kept telephony separate and walled off from the rest of the enterprise. We do not wish to see it separate, but rather a natural extension, whether of web services, of contact management, of customer relations, etc. 12 Carrier Applications When we look at carrier class applications for GNU Bayonne today, we typically consider applications like operator assistance systems, prepaid calling platforms, and service provider voice mail. Each of these has different requirements. What they have in common is that a front end central office switch might be used, such as a Lucent Excel or even a full 5ESS switch. Application logic and control for voice processing would then be hosted on one or more racks of GNU Bayonne servers most likely interconnected with multiple T1 spans.
If database transactions are involved, such as in pre-paid calling, perhaps we would distribute a BayonneDB server to provide database connectivity for each rack. A web server may also exist if there is some web service component. Operator assist services are probably the easiest to understand. Very often a carrier might need to provide directory assistance or some other form of specialized assist service. A call will come in from the switching center to a GNU Bayonne server, which will then decide what to do with the call. If the caller is from a location that is known, perhaps the call will be re-routed by GNU Bayonne thru an outgoing span to a local service center. Online operator assistance might be done by creating an outgoing session to locate an operator and then bridge the callers, all on a GNU Bayonne server. In service provider voice mail one doesn’t have to bridge calls. Service provider voice mail is typically much simpler than enterprise voice mail; there is no company voice directory, there is no forwarding or replying between voice mailboxes, there may be no external message notification. All these things make it an easy to define application on first appearance. What it must be is reliable, and ideally scalable. The problem with service provider voice mail is where to store the potentially large pool of message boxes. We don’t know which callers might call in for messages, or when. If the call capacity is larger than a single server can handle even with multiple T1 spans, then we might need to deal with a reliable message store hosted on a machine outside the GNU Bayonne servers. We could also scatter mailboxes over multiple machines by hashing the mailbox address into a GNU Bayonne server address, and load balance over multiple servers that way. If we have a common external message store, perhaps we can have it on a fibre channel bus.
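The hash-based mailbox distribution mentioned above might look like this in outline; the FNV-1a hash and the server names are assumptions for illustration, not anything prescribed by GNU Bayonne:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// FNV-1a: a small, deterministic string hash, good enough to spread
// mailbox addresses evenly across a fixed list of servers.
std::uint32_t fnv1a(const std::string& s) {
    std::uint32_t h = 2166136261u;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h;
}

// Every front-end can compute the same mapping with no shared state:
// the mailbox address alone determines which server owns it.
const std::string& server_for_mailbox(
    const std::string& mailbox, const std::vector<std::string>& servers) {
    return servers[fnv1a(mailbox) % servers.size()];
}
```

The mapping is stable as long as the server list is, so any node can route a retrieval call to the right mailbox host without a lookup service.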
GNU Bayonne doesn’t like blocking, and traditional network file systems, like NFS, can have long timeout and blocking intervals. Messages can also be transported from a central store over different protocols. One thought I had was a UDP based transport protocol for voice messaging. Since the need is not for full duplex voice, many of the issues in regard to latency and packet size can be relaxed compared to what is typically required to make VoIP systems work. With a network addressable message store, GNU Bayonne can provide a reliable platform for service provider voice mail. Many applications carriers wish to deploy do not necessarily require “carrier grade” Linux to appear before they can be used. In fact, IDT Corp, a major provider of prepaid calling in the world today, uses over 500 rack mounted commodity PC’s running things including a standard distribution of “RedHat” GNU/Linux to reliably service over 20 million call minutes per day in their main switching center. This does not mean there is no value in the carrier grade kernel work, just that it is not necessary to create and sell some types of GNU/Linux voice processing solutions for carriers today. We have looked at the issues involved in high reliability/carrier grade enhanced Linux and we intend to address those as described a little further on. 13 GNU Bayonne clustering In England one enterprising fellow is working on GNU Bayonne tandem switching nodes. A tandem switching node essentially routes call traffic between spans based on various rules, perhaps to achieve a low interconnection count or to find the least cost available route in a telephone network. This touches upon an interesting and unique feature of GNU Bayonne which we have not yet talked about; GNU Bayonne servers talk to each other. When Bayonne servers talk with each other, they do two things. Each node elects a “buddy” node to act as a failover for itself.
Elections are held every few minutes, and the design is such that a single node will only elect itself as buddy to up to two additional nodes in the network. Buddies are useful in failover, since they are aware of all transactions and the state of each GNU Bayonne server, and can complete transactions if a given machine (node) goes down. By having a limited set of buddies chosen thru election, we assure there is no network stampede when a node goes down on the part of other nodes wishing to complete transactions for it. Since global call state is shared among GNU Bayonne servers, each server knows what the other one is doing and what its current utilization is like. This can be very useful in a tandem switching application where one needs to know where available endpoints are and if there are ports available at each end point for a given call request. GNU Bayonne cluster networking is still in its infancy, and we are looking for ways to express networking thru the application scripting language. The main use of clustering at the moment is to overcome the inherent limits of system reliability for acceptance of GNU Bayonne in developing carrier class applications. Over time, this need will be lessened as we take advantage of the work being done on carrier grade GNU/Linux. 14 The NG Server Even before GNU Bayonne 1.0 had been finalized, work had been started by late last year on a successor to GNU Bayonne. This successor attempts to simplify many of the architectural choices that were made early on in the project to make it easier to adapt and integrate GNU Bayonne in new ways. One of the biggest challenges in the current GNU Bayonne server is the creation of telephony card plugins. These often involve the implementation of a complete state machine for each and every driver, and very often the code is duplicated. GNU Bayonne “2” solves this by pushing the state machine into the core server and making it fully abstract thru C++ class extension.
This allows drivers to be simplified, but also enables us to build multiple servers from a single code base. Another key difference in GNU Bayonne “2” is much more direct support for carrier-grade Linux solutions. In particular, unlike GNU Bayonne, this new server can take ports in and out of service on a live server, and this allows cards to be hotplugged or hot-swapped. In a carrier-grade platform, the kernel will provide notification of changeover events, and application services can listen for and respond to these events. GNU Bayonne is designed to support this concept of notification for management of the resources it is controlling. Finally, GNU Bayonne “2” is designed from the ground up to take advantage of XML in various ways. It uses a custom XML dialect for a configuration language. It also acts as a web service, with both the ability to serve XML content that describes the running state of GNU Bayonne services and the ability to support XML-RPC. This fits into our vision of making telephony servers integrate with web services, and will be described further in a separate paper. 15 Acknowledgments There are a number of contributors to GNU Bayonne. These include Matthias Ivers, who has provided a lot of good bug fixes and new scheduler code. Matt Benjamin has provided a new and improved tgi tokeniser and worked on Pika outbound dialing code. Wilane Ousmane helped with the French phrasebook rulesets and French-language audio prompts. Henry Molina helped with the Spanish phrasebook rulesets and Spanish-language audio prompts. Kai Germanschewski wrote the CAPI 2.0 driver for GNU Bayonne, and David Kerry contributed the entire Aculab driver tree. Mark Lipscombe worked extensively on the Dialogic driver tree. There have been many additional people who have contributed to and participated in related projects like GNU Common C++ or who have helped in other ways. 
Proceedings of the Ottawa Linux Symposium June 26th–29th, 2002 Ottawa, Ontario Canada Conference Organizers Andrew J. Hutton, Steamballoon, Inc. Stephanie Donovan, Linux Symposium C. Craig Ross, Linux Symposium Proceedings Formatting Team John W. Lockhart, Wild Open Source, Inc. Authors retain copyright to all submitted papers, but have granted unlimited redistribution rights to all as a condition of submission.
Designing An Automatic Web-Based Certificate Verification System For Institutions (CASE STUDY: MICHAEL OKPARA UNIVERSITY OF AGRICULTURE, UMUDIKE) Nwachukwu-Nwokeafor K.C, Computer Engineering Department, Michael Okpara University of Agriculture, Umudike, Nwachukwunwoks@gmail.com; Igbajar Abraham, Computer Engineering Department, Michael Okpara University of Agriculture, Umudike, igbajar35@gmail.com

Abstract—Over the years there has been a great need for an easy and fast means of verifying results and certificates, both to reduce the level of certificate forgery and to ease the stress and save the time of certificate verification, which is done manually today: an employer, or anybody else concerned, has to travel, or send a delegate, to the school to verify a particular certificate. Some employers never do, and this has resulted in forged certificates being accepted. The system can be implemented as a standalone site (recommended) or can also be embedded in an institution's existing official website. The aim of this study is to design an online certificate verification system based on the verification process adopted by the university to verify her results.

Keywords—Database, MySQL, Certificate, Encrypting, Authenticate, online.

1. INTRODUCTION The Internet has been one of the most frequently used means of exchanging information among people. Electronic data transmission breaks the distance barrier. As Steve Lawrence, C. Lee Giles and Kurt Bollacker (1999) note, the rapid increase in the volume of scientific literature has led to researchers constantly fighting information overload in their pursuit of knowledge. Furthermore, everyone should be able to browse the same websites and have the opportunity for an equal user experience on the Internet. In other words, the Internet should be usable by everyone. An online certificate verification system represents the direction of future certificate-authentication development. 
Promotion of an online certificate verification system will bring great benefits to society and the economy. An online certificate verification system improves the speed and quality of certificate-authentication services, promotes the globalization of markets, and cuts down cost. The system is designed for employers and universities.

1.1 Background of the study This paper is centered on the need to adopt an automatic online verification system that will remove the hurdles of the manual verification method.

1.2 Aim The aims of this study are: - To design an online certificate verification system based on the verification process adopted by the university to verify her results. - To phase out the outdated paper/manual method (though the papers will still be kept safe in schools for future reference). - To make certificate accessibility easy. - To save time and resources. - To prevent risk in transit in result verification, etc.

1.3 Objectives of system design - Efficiency: this involves accuracy, speed and comprehensiveness of the system output. - Flexibility: this is the ability of the system to adjust to the changing requirements of the user. It also involves portability of the system. - Practicability: this is the ability of the system to withstand long periods of operation by the user. - Security: the act of applying security for the protection of the data; it also involves hardware reliability. A systems analyst needs a workable security platform to protect the system from damage, error and unauthorized users.

2. LITERATURE REVIEW Academic affairs is usually in charge of student results and keeps up to date the honors awarded to each student. It has a collection of sources, resources and services, and the structure in which it is housed. There can be no doubt that much of the literature in this area speculates on the future role of safe storage (mini-libraries), none of which is particularly clear. 
Since 1995, or what Tenopir (2003) calls the “post-web world”, libraries have been seen as in danger of “substitution”. The web is becoming “a ubiquitous source of information”, giving an “illusion of depth and comprehensiveness” that leads to a questioning of the value of libraries and their collections. This review will not speculate on these future roles, but will focus instead on the certainty of changing technology, increasingly digital information resources, and societal shifts that have changed user expectations of an online certificate verification system. Digital libraries are situated in unique environments, so environmental factors have also been examined for their impact on users' awareness, acceptance and usage habits. The social contexts or domains surrounding an online certificate verification system are often very different; they vary, for instance, from institution to institution. Adams and Blandford (2004) demonstrate why it is important within academic contexts to market a digital library as a learning resource, but to show users within a clinical domain (e.g., a health-care setting) how a digital library system can support evidence-based medicine with information that is “paramount” to decision making. In this paper, we present the core elements of the manifesto and introduce central aspects of the digital library framework. 2.1 Existing system The certificate verification method that is prevalent today is a manual process: the institution or organization that wants to verify a result has to travel to the university, or send a written request, in order to verify the result. The request then goes to academic affairs, which refers to the library or safe files to look for the duplicate certificate. This can be very time-consuming; files are sometimes lost when moved from one office to another and, in some cases, can go missing or be difficult to locate. Below is a graphical view of the existing system: Fig. 
2.1 Graphical view of the existing system

2.2 The proposed system This work considers the adoption of the iterative and incremental model, as shown below: Fig. 2.2 Typical iterative model diagram (“Tutorial Point”, 2015). This model is most often used in the following scenarios: - Requirements of the complete system are clearly defined and understood. - Major requirements must be defined; however, some functionalities or requested enhancements may evolve with time. - There is a time-to-market constraint. - A new technology is being used and is being learnt by the development team while working on the project. - Resources with the needed skill set are not available and are planned to be used on a contract basis for specific iterations. - There are some high-risk features and goals which may change in the future. Fig. 2.3 The schematic view of the proposed system design

2.3 Brief explanation of the system and operation of the design In this system, efforts were made to explore many options, such as consulting relevant textbooks that treat verification systems and library processes, programming textbooks that treat the hypertext preprocessor (PHP) programming language and JavaScript, and other texts. Online Certificate Verification (OCV) was implemented using a web server. The software components of the graphical user interface (GUI) screens serve as the front end. The database is implemented using My Structured Query Language (MySQL). The system was tested by verifying a certificate designed for an institution (i.e. MOUAU). This certificate verification system follows the same process as in an institution, but this time it is online and automated. The system is customized to have each certificate number linked to the student's details. Each person or organization that wants to verify a certificate must have the certificate number and type it into the field named “enter certificate number”. 
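The core lookup just described, where each certificate number is linked to a student's details and an unknown number is rejected, can be sketched as follows. This is a hypothetical in-memory sketch (the real system keeps these records in MySQL), and all type and function names here are illustrative, not the actual OCVS code.

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the certificate table.
struct StudentDetail {
    std::string name;
    std::string degree;
};

class CertificateStore {
public:
    // Link a certificate number to the owning student's details.
    void add(const std::string& certNo, StudentDetail d) {
        records_[certNo] = std::move(d);
    }

    // Returns the linked student detail, or std::nullopt for an
    // invalid certificate number (the "invalid certificate number"
    // error case described later in the text).
    std::optional<StudentDetail> verify(const std::string& certNo) const {
        auto it = records_.find(certNo);
        if (it == records_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<std::string, StudentDetail> records_;
};
```

The web front end would accept the number from the “enter certificate number” field and display either the matching record or the error message.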
A survey was carried out, with interview questions on the public's view of this. Interviews were conducted in various places, i.e. schools, companies and public places. The interview questions ranged over ease of use, the feeling of anonymity, security and interface design, and the information gathered was highly satisfactory. The first approach will be to set up the system using the existing university portal. One drawback to this approach is the multi-valued nature of the data tables in the university portal and its different entries: under the certificate table, a single field can hold multiple values. These multiple values can be “related” to values in other fields, and how this relationship is captured and exploited will need to be determined. However, information about these relationships will not be available to the public.

2.4 The constraints of the design One constraint will be the selection of dictionary items to document. As stated earlier, each field can have multiple dictionary items. For example, a field that contains a company name can have a dictionary item that displays the field in its raw form. Another dictionary item for the same field can display the field in all capital letters. Yet another can display only the first 15 characters of the field. Each of these may be used within the system environment for different purposes, such as for a number of different certificates. However, selecting from among them in order to make one or more of these fields available for the database will not be a simple task. Simply selecting all dictionary items may be acceptable where there are small numbers of virtual fields. However, selecting a large number of virtual fields may lead to complexity in documentation and querying, and may introduce redundancy. There are other, similar issues with selecting the appropriate dictionary items for a MySQL database. 
In a MySQL database environment, tables have dictionary items for each field that define the source of the data. These dictionary items are stored in a logical table that, under OCV, can be inspected programmatically. This additional information can provide insight into relationships between fields, relationships between tables, and the formatting of fields. This paper is a first attempt to deploy such a system in the university environment. The university portal environment is one of a subset of database environments typically referred to as “multi-value” databases. Many of these environments have data-table “dictionaries” that describe the data more completely than a plain MySQL database does. Further, since “joins” are defined in dictionary items, mapping the query-building capability of Unity onto RETRIEVE will be a unique challenge.

2.5 The need for a new system This paper provides a means to compare the certificate at hand with the original certificate issued by the institution, comparing the content of the sample certificate in the test environment using the OCV system against using host-based programs for documentation. Additionally, OCV will be modified to create RETRIEVE query statements, or a version of Unity with this feature will be created. This system will save the time and resources that are expended with the existing system.

2.6 Expectation analysis and sample of the new system During the first phase of the project, sample tables will be selected from those available. These tables will be tested with sample certificates. During this phase, the types of limitations inherent in accessing multi-valued information via the database will be documented. This information will be required for comparison with the scripting method. The second phase of the project will focus on minimizing the issues with the database found by the testing in phase one. 
During this phase it may be necessary to create a program on the host system to create “clean”, SQL-compliant dictionary items for fields that are currently not compliant. This will increase the number of fields available to the database. This phase should be completed within the second month of the project. At this time it may also be possible to create programs that add additional fields to the @select record of each table. These programs would “inspect” dictionary items not in the @select dictionary and decide whether or not they should be placed there. This decision would be based, among other things, on whether the dictionary entry is for a field that already has an entry in the @select dictionary, and on the suitability of the dictionary item compared to others for the same field. Work during the third phase of the project will repeat the mapping of the data tables using the tools present in OCV. The information recorded during this phase will also be used in the report for comparison against other methods. This, being the second time through the process, should give us an idea not only of the time required to document using Unity, but also an improved estimate of the time to implementation with an experienced user. The fourth phase is a more difficult one: during this phase we need to implement a set of programs to increase the data available for the creation of OCV. The fifth phase of this project will involve modifying OCV to generate RETRIEVE query statements that will execute on the host system rather than through a database. This will be useful because of the multi-valued nature of the host system.

3.1 Tools and resources - HTML / CSS - JavaScript - PHP programming - Server (WAMP, XAMPP, a Linux server, or IIS on Windows)

3.2 Functions of OCVS It will serve as a quick and more reliable means of verifying an institution's original certificates. 
This will save time, energy, and some of the inconvenience experienced by institutions or organizations trying to verify results.

3.3 Design specification of OCVS We now discuss the technicalities of the OCVS design architecture, including the top-down structure and database environment.

3.4 Top-down structure OCVS is made up of four major subsystems which interact with users by means of GUI screens, namely the Online User Registration System (OURS), the Online Certificate Verification System (OCVS), the User Statistics System (USS) and the OCVS Database Manager (ODM), as explained below: - **Online User Registration System (OURS):** OURS is the subsystem responsible for managing the user registration process. It provides interfaces where users can enter their records as well as update them. - **Online Certificate Verification System (OCVS):** OCVS manages all online certificates. It provides interactive interfaces for registered users to access their desired certificates. OCVS works alongside ODM to deny access to any certificate to people who do not have a valid certificate number. - **User Statistics System (USS):** USS is the subsystem responsible for automatically displaying user statistics, whether users are online or offline, and for recording the activities and interactions a user had with the system, including times of login and logout. Alongside OCVS and ODM, USS provides a user-friendly interface where an institution's original certificates are displayed. It also provides facilities for printing hard copies of certificates. - **OCVS Database Manager (ODM):** ODM has an important role to play in the operation of OCVS, in that it is the part that stores information such as user registration details and the certificates of students. 
Online Certificate Verification System database environment: - **OCVS entities:** The OCVS database comprises four major entities, namely User_Registration, User_Log_Book, Certificate_Details and Certificate_Image. The User_Registration entity or table gives information about registered users. It contains fields or attributes such as the username and password (in hashed format), name, address and others. The User_Log_Book keeps records of the date and time a registered user logs into OCVS. It has attributes such as username, date, time and so on. The Certificate_Image and Certificate_Details entities contain information about the students who own the certificates being verified. - **Queries:** In a database environment, a query is the object which facilitates retrieval, insertion, update and deletion of records. ODM makes use of various queries in order to maintain system data. Some of these include Retrieve_User_Record (which retrieves the details of a registered user from the User_Registration table), Update_User_Record (which updates the records of a user), Users_from_a_Region (which displays the details of all registered users from a given region in the country or world), Certificate_Image (which displays a copy of the original certificate that was awarded to the student at the time of graduation), and Login_Status_of_Users (which displays the list of users who have already registered in the system and may have had some interactions with it). - **Entity-relationship diagram:** The relationship between the various entities that make up a system is an important aspect of any database design. This relationship can be of three kinds: one-to-one, one-to-many and many-to-many. The entity-relationship diagram of the OCVS ODM is an important part of the online certificate verification design.

4. Installation of a database management system (DBMS) A DBMS called the OCV Database Manager (ODM) would be installed to manage system data. 
ODM would allow certificates to be retrieved, added, updated, deleted, sorted and traversed easily and quickly. ODM would also store data regarding the certificates and other information relevant to the system. A graphical user interface (GUI) application has been developed to allow prospective students' certificate-authentication details to be added and verified online. The application also provides functionality that allows only the school authorities in charge to add, remove or modify certificate details when required.

4.1 Implementation of controls to avoid calling up a different result instead of the one requested User identification and access-control functionalities are built into the system so as to guard against sniffing and unnecessary downloading of students' certificates. These features would also help prevent anyone without the proper authorization from gaining access to the system. This means nobody can print or view certificates indiscriminately.

4.2 Implementation of controls to stop invalid numbers accessing certificates The design architecture of the system does not allow a user to enter just any number and view a certificate. For example, on entering an invalid number, an error message is displayed: “invalid certificate number”. - **Generation of reports:** The system would allow the preparation and printing of various reports, ranging from logged-in users to the history of visitors and what they did on the portal.

4.3 Provision of a personalized portal to all registered users Each registered user would be able to interact with the system, upon login, as if it were made personally for him. This function is designed to make the system interactive as well as user friendly. For example, a registered user, "Okoroji Hope", upon logging in would meet a welcome screen with the greeting message “Welcome Okoroji Hope”. 
4.4 Verifying a certificate A prospective user needs to register at the designated page, obtaining his or her unique username and password to access the portal, which will be used each time he or she wants to verify one of the institution's certificates. The person is expected to provide a password, known only to him or her, during registration. The username and password are then entered into the system to give the user access to the verification system. If any of the data is wrong, the user is denied access. Registered users who forget their username or password may contact the school webmaster for help, or use the “forgot password” link on the page to recover it. The user can view a certificate using the certificate number, but cannot make changes to the certificate.

4.5 System design The package required to run the verification process was written with PHP and JavaScript, then installed onto the system to be used for the execution of the process; it was also tested, and run with different certificates, to establish its limits and level of accuracy. To obtain the best possible result, the certificate and details must conform to the certificate number used. The parallel approach was adopted in the implementation of the design: the parallel approach is one whereby the computer-based system is allowed to run alongside the existing manual method. After a given time, depending on the configuration and the speed of the system used in the implementation of the verification, a report from the system is displayed on the screen, which can be evaluated; if the results are acceptable, the system can be fully adopted. 
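The login check described above, where the stored password is kept in hashed format and compared against the hash of the supplied password, can be sketched as follows. This is an illustrative sketch, not the OCVS code: `std::hash` is only a placeholder, since a real deployment would use a dedicated password-hashing scheme such as bcrypt or Argon2, and all names here are hypothetical.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Toy model of the User_Registration table: username -> hashed password.
class UserRegistry {
public:
    void registerUser(const std::string& user, const std::string& pass) {
        // Store only the hash, never the plain-text password.
        hashes_[user] = std::hash<std::string>{}(pass);
    }

    // Access is granted only when the hash of the supplied password
    // matches the stored hash; otherwise the user is denied access.
    bool login(const std::string& user, const std::string& pass) const {
        auto it = hashes_.find(user);
        return it != hashes_.end() &&
               it->second == std::hash<std::string>{}(pass);
    }

private:
    std::unordered_map<std::string, std::size_t> hashes_;
};
```

A successful check would then route the user to the personalized welcome screen; a failed check corresponds to the denied-access path in the text.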
4.6 Sample code used in the design - **Database connection**

```php
<?php
$server = "localhost";
$user   = "Idumuesah";
$pass   = "Aliloje";
$db     = "tutorials";

// connect to mysql
mysql_connect($server, $user, $pass)
    or die("Sorry, can't connect to the mysql.");

// select the db
mysql_select_db($db)
    or die("Sorry, can't select the database.");
?>
```

4.7 The output of the design: the implementation window [Fig.: screenshot of the Online Certificate Verification System] An online certificate verification system offers convenience to users and considerable ease to employers, as they can obtain the original certificate from a school more easily and more quickly than with conventional methods of manual verification. Even though computerized verification systems have a number of advantages over manual ones, there are a few challenges that must be overcome before their benefits may be fully realized. First of all, there is the high initial cost of implementing such a system, as is typical of all computerized systems (this is often prohibitive for developing countries like Nigeria). Secondly, there is the challenge of user education: the public needs to be educated adequately in the use of an online verification system. A third concern is verification-system security: how the system could be attacked by hackers and fraudsters, including system administrators (who may be in favour of some particular students). Finally, the verification system's security should be open to scrutiny by experts in systems analysis and design, computer programming, and computer system security. This is the only way to ensure public confidence in such systems. REFERENCES
INTEL® HPC DEVELOPER CONFERENCE FUEL YOUR INSIGHT IMPROVE VECTORIZATION EFFICIENCY USING INTEL SIMD DATA LAYOUT TEMPLATE (INTEL SDLT) Alex M Wells, Anoop Madhusoodhanan Prabha, Intel Corporation, November 2016

Long Story Short.. - Object-oriented code involves modeling collections as Arrays of Structures (AoS). - The AoS data layout leads to lower vectorization efficiency. - Intel SIMD Data Layout Template (Intel SDLT) helps developers stick to their object-oriented design but still get better vectorization efficiency.

Agenda - Performance problems posed by object-oriented design - Why Intel SDLT? - Components of Intel SDLT - How to use Intel SDLT in your application? - Q&A

QuantLib performance data: C++ classes meeting the SDLT recipe, comprising POD data types. Comparison of data optimization methods (higher is better): changing the data layout using SDLT versus a hand-optimized C version with individual arrays for each class member.

Performance problems posed by object-oriented design - Applications are designed as interactions between objects (object-oriented design). - Every real-world entity is modeled as an object (a user-defined data type like `struct` or `class`). - A collection of such entities becomes an Array of Structures (AoS), whether stored in a vector or a heap array:

```cpp
struct YourStruct {
    float x;
    float y;
    float z;
};

// As a vector:
typedef std::vector<YourStruct> Container;
Container input(count);
Container result(count);

// Or as a heap array:
YourStruct * input = new YourStruct[count];
YourStruct * result = new YourStruct[count];
```

What’s wrong with AoS? SIMD - SIMD: “Single Instruction Multiple Data”. - Vectorization with an AoS in-memory data layout requires multiple load/shuffle/insert or gather instructions. - An increase in vector width demands more instructions for vector construction. - The SIMD improvement might not overcome the reduced CPU frequency in SIMD mode, compared to scalar code operating at a higher frequency. 
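The stride difference behind this problem can be shown with a minimal sketch in plain C++ (no SDLT involved): in the AoS layout consecutive `x` values are `sizeof(YourStruct)` bytes apart, while in the SoA layout they are adjacent in memory, which is the unit-stride pattern SIMD loads want.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct YourStructAoS { float x, y, z; };

// SoA layout: one array per member.
struct YourStructSoA {
    std::vector<float> x, y, z;
    explicit YourStructSoA(std::size_t n) : x(n), y(n), z(n) {}
};

float sumX_aos(const std::vector<YourStructAoS>& v) {
    float s = 0.f;
    for (const auto& e : v) s += e.x;   // stride = sizeof(YourStructAoS) = 12 bytes
    return s;
}

float sumX_soa(const YourStructSoA& v) {
    float s = 0.f;
    for (float e : v.x) s += e;         // stride = sizeof(float) = 4 bytes (unit stride)
    return s;
}
```

Both functions compute the same value; only the memory-access pattern differs, and the unit-stride loop is the one a compiler can vectorize with plain aligned loads instead of gathers.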
SIMD is effective with Unit-Stride Access
- If the memory layout has multiple instances of a data member adjacent in memory, aligned on a byte boundary matching the vector register width:
  - A single load/store instruction moves the data into or out of a vector register.
  - Many SIMD operations can reference an aligned unit-stride memory access as part of the instruction, avoiding a separate load/store instruction altogether.
- A properly aligned Structure of Arrays (SoA) in-memory data layout provides SIMD-compatible unit-stride memory accesses.
- SIMD efficiency and speedup can be restored.

Issues with SoA integration
- Requires changing the data structures, deviating from the Object-Oriented design.
- Requires changing the C++ algorithms.
- Allocation/freeing of the SoA arrays must be handled explicitly, and the arrays must be kept aligned.

What is Intel SDLT?
- A C++11 template library providing the concepts of Containers, Accessors, and Offsets.
- Containers encapsulate the in-memory data layout of an array of "Plain Old Data" objects.
- SIMD loops use accessors with an array subscript operator (just like C++ arrays) to read from or write to the objects in the Containers.
- Offsets can be embedded in accessors or applied to an Index passed to the accessor's array subscript operator.
- Since these concepts are abstracted out, multiple concrete versions can exist and can encapsulate best known methods, thus avoiding common pitfalls in generating efficient SIMD code.

Why Intel SDLT?
- SDLT preserves the Array of Structures (AoS) interface for developers but lays out the data in the Structure of Arrays (SoA) format, which is more SIMD friendly and increases vectorization efficiency.
- SDLT provides 1D containers with the same interface as std::vector, making integration and interoperability easy:
  - `push_back`, `resize`, etc.
  - iterator support: `begin()`, `end()`, etc.
  - Works well with all STL algorithms: `for_each`, `find`, `search`, etc.
- Multi-dimensional container support
  - Enabled in SDLT version 2, shipped with Intel C++ Compiler version 17.0.

SDLT Containers
- What if std::vector could store data SoA internally while exposing an AoS view to the programmer? The primary goal of SDLT Containers is to meet that requirement.
- SDLT Containers abstract the in-memory data layout:
  - AoS (Array of Structures)
  - SoA (Structure of Arrays)

SDLT 1D container
- **Intent**: data is kept in an SoA or ASA container the entire time instead of being converted from AoS.
- SDLT's container internally stores the members of YourStruct in a one-dimensional "Structure of Arrays" (SoA) layout.
- Places the aligned arrays inside a single allocated buffer rather than a separate allocation per array.
- Just like std::vector, the Containers own the array data, and their scope controls the lifetime of that data.

```cpp
typedef sdlt::soa1d_container<YourStruct> Container;
Container inputContainer(count);
Container resultContainer(count);
```

SDLT Primitives
- How do the Containers discover the data members of your struct?
- C++ lacks compile-time reflection, so the user must provide SDLT with some information on the layout of YourStruct.
- This is easily done with the SDLT_PRIMITIVE helper macro, which accepts a struct type followed by a list of its data members.
- A struct must be declared as a primitive before it is used as a template parameter to a Container.

```cpp
struct YourStruct {
    float x;
    float y;
    float z;
};

struct AABB {
    YourStruct topLeft;
    YourStruct bottomRight;
};

SDLT_PRIMITIVE(YourStruct, x, y, z)
SDLT_PRIMITIVE(AABB, topLeft, bottomRight)
```

SDLT Accessors
- To separate data ownership semantics from data access, a separate class called an accessor is used to access the transformed data owned by the Container. Use the C++11 keyword `auto` to let the compiler deduce the type.

```cpp
Container::const_accessor<> input = inputContainer.const_access();
Container::accessor<> result = resultContainer.access();
```

SDLT Accessors contd..
Embedded Offset

```cpp
auto input = inputContainer.const_access();
auto input2 = inputContainer.const_access(256);
auto input3 = inputContainer.const_access(sdlt::aligned<8>(256));
auto input4 = inputContainer.const_access(sdlt::fixed<256>());
```

Subscript operator

```cpp
void setAllValuesTo(
    Container::accessor<> iValues,
    YourStruct &iDefaultValue)
{
    for(int i = 0; i < iValues.get_size_d1(); ++i) {
        iValues[i] = iDefaultValue;
    }
}
```

SDLT Accessors contd..
- The subscript operator[index] returns a Proxy Object.
- The main use of the Proxy Objects is to import/export data to/from a local variable.
- Can assign a local stack instance of YourStruct to the Proxy:

```c
YourStruct result = ... // compute result
iValues[index] = result;
```

- Can access an individual member, or retrieve YourStruct from the Proxy into a local stack object:

```c
iValues[index].y() = new_y_value;
YourStruct local = iValues[index];
```

- SDLT's design makes use of local objects and the compiler's dead-code-elimination features.
- Overloaded `+=`, `-=`, `*=`, etc. operators.

Multiple Dimensions with sdlt::n_container

```cpp
using namespace sdlt;

auto shape = n_extents[128][256][512];
typedef n_container<YourStruct, layout::soa, decltype(shape)> Container3d;
Container3d input(shape), output(shape);
auto inputs = input.const_access();
auto outputs = output.access();

for(int z = 0; z < 128; ++z) {
    for(int y = 0; y < 256; ++y) {
        #pragma omp simd
        for(int x = 0; x < 512; ++x) {
            YourStruct val = inputs[z][y][x];
            YourStruct result = ... // compute result
            outputs[z][y][x] = result;
        }
    }
}
```

- The shape is described with the sdlt::n_extents generator object.
- Use accessors with multiple array subscript operators, just like multi-dimensional C arrays.

Issues with Large Arrays with SOA

Memory layout of a 3d SOA
- For any given element: `a[z_index][y_index][x_index]`
- Distance between data members: ≥ 64 MB ≈ 128 × 256 × 512 × sizeof(data_member)
- Each data member:
  - Possibly in different virtual memory pages.
  - Appears as a separate data stream to hardware prefetchers.
- As the number of data members or hyper-threads increases, so does:
  - DTLB pressure
  - Hardware prefetcher pressure

SOA Per Row Memory Layout
- For any given element: `a[z_index][y_index][x_index]`
- Distance between data members: ≥ 2 KB ≈ 512 × sizeof(data_member)
- Each data member:
  - Likely in the same virtual memory page.
  - Likely appears as a separate data stream to hardware prefetchers.
- As the number of data members or hyper-threads increases:
  - Hardware prefetcher pressure

```cpp
n_container<YourStruct, layout::soa_per_row, decltype(n_extents[128][256][512])>
```

Adding an Extra Blocking Dimension
- For any given element: `a[z_index][y_index][x_index/8][x_index%8]`
- Distance between data members: ≥ 32 B ≈ 8 × sizeof(data_member)
- Each data member:
  - Is in the same virtual memory page.
  - Appears as a segment of a single linear data stream to hardware prefetchers.
- Hardware prefetcher friendly.

```cpp
constexpr int vec_width = 8;
n_container<YourStruct, layout::soa_per_row,
            decltype(n_extents[128][256][512/vec_width][vec_width])>
```

By Hand, Combine AoS with Fixed Size SoA

```c
constexpr int lane_count = 8;
struct SimdYourStruct
{
    float x[lane_count];
    float y[lane_count];
    float z[lane_count];
} __attribute__((aligned(32)));
```

<table>
<thead>
<tr>
<th>ASA[0]</th>
<th>ASA[...]</th>
<th>ASA[511]</th>
</tr>
</thead>
<tbody>
<tr>
<td>X[0]</td>
<td>X[...]</td>
<td>X[7]</td>
</tr>
<tr>
<td>Y[0]</td>
<td>Y[...]</td>
<td>Y[7]</td>
</tr>
<tr>
<td>Z[0]</td>
<td>Z[...]</td>
<td>Z[7]</td>
</tr>
<tr>
<td>X[0]</td>
<td>X[...]</td>
<td>X[7]</td>
</tr>
<tr>
<td>Y[0]</td>
<td>Y[...]</td>
<td>Y[7]</td>
</tr>
<tr>
<td>Z[0]</td>
<td>Z[...]</td>
<td>Z[7]</td>
</tr>
</tbody>
</table>

```c
int count = 4096;
int structCount = count / lane_count;
SimdYourStruct inputASA[structCount];
SimdYourStruct outputASA[structCount];
```

```c
for(int structIndex = 0; structIndex < structCount;
    ++structIndex) {
    #pragma omp simd
    for(int laneIndex = 0; laneIndex < lane_count; ++laneIndex) {
        YourStruct val;
        val.x = inputASA[structIndex].x[laneIndex];
        val.y = inputASA[structIndex].y[laneIndex];
        val.z = inputASA[structIndex].z[laneIndex];

        YourStruct result = ... // compute result from val

        outputASA[structIndex].x[laneIndex] = result.x;
        outputASA[structIndex].y[laneIndex] = result.y;
        outputASA[structIndex].z[laneIndex] = result.z;
    }
}
```

With SDLT, Combine AoS with Fixed Size SoA

```cpp
constexpr int lane_count = 8;
int count = 4096;
auto shape = n_extents[count/lane_count][fixed<lane_count>()];
typedef n_container<YourStruct, layout::soa_per_row, decltype(shape)> Container;
Container inputASA(shape), outputASA(shape);
auto inputs = inputASA.const_access();
auto outputs = outputASA.access();

for(int structIndex = 0; structIndex < extent_d<0>(inputs); ++structIndex) {
    #pragma omp simd
    for(int laneIndex = 0; laneIndex < extent_d<1>(inputs); ++laneIndex) {
        YourStruct val = inputs[structIndex][laneIndex];
        YourStruct result = ... // compute result
        outputs[structIndex][laneIndex] = result;
    }
}
```

- `sdlt::fixed<int>` represents an integral constant known at compile time.
- The template function `sdlt::extent_d<int>` determines the extent of a dimension of a multi-dimensional object.

Before Intel SDLT enabling

```cpp
class CartesianPoint {
    float x, y, z;
public:
    CartesianPoint() { x = y = z = 0.0f; }
    explicit CartesianPoint(float x1, float y1, float z1) {
        x = x1; y = y1; z = z1;
    }
    CartesianPoint(const CartesianPoint& other)
        : x(other.x), y(other.y), z(other.z) { }
    CartesianPoint& operator=(float n) {
        x = y = z = n;
        return *this;
    }
};

class CartesianPointImage {
public:
    vector<CartesianPoint> cartpoints;
    int num_of_points() const { return static_cast<int>(cartpoints.size()); }
    CartesianPointImage(size_t num_of_elements) {
        cartpoints.resize(num_of_elements);
        for (unsigned int i = 0; i < num_of_elements; i++)
            cartpoints[i] = CartesianPoint(i, i, i);
    }
};
```

After Intel SDLT enabling

Vector Advisor's Recommendation for Intel SDLT

**Issue:** Inefficient memory access patterns present
- There is a high percentage of memory instructions with irregular (variable or random) stride accesses. Improve performance by investigating and handling them accordingly.

**Recommendation:** Use SoA instead of AoS
- An array is the most common type of data structure, containing a contiguous collection of data items that can be accessed by an ordinal index. You can organize this data as an array of structures (AoS) or as a structure of arrays (SoA). While AoS organization is excellent for encapsulation, it can hinder effective vector processing. To fix, rewrite the code to organize the data using SoA instead of AoS.

**Read More:**
- Programming Guidelines for Vectorization
- Case study: Comparing Arrays of Structures and Structures of Arrays Data Layouts for a Compute-Intensive Loop
- Vectorization Resources for Intel® Advisor Users

**Recommendation:** Use Intel SDLT
- The cost of rewriting code to organize data using SoA instead of AoS may outweigh the benefit.
To fix: Use Intel SIMD Data Layout Templates (Intel SDLT), introduced in version 16.1 of the compiler, to mitigate the cost. Intel SDLT is a C++11 template library that may reduce code rewrites to just a few lines. **Read More:** - Introduction to the Intel® SIMD Data Layout Templates (Intel® SDLT) - Vectorization Resources for Intel® Advisor Users Case Study - Efficient SIMD in Animation with SDLT and Data preconditioning - DreamWorks Animation (DWA): How We Achieved a 4x Speedup of Skin Deformation with SIMD Resources Intel SIMD Data Layout Template Info - Introducing the Intel SIMD Data Layout Template (Intel SDLT) to boost efficiency in your vectorized C++ code - Introduction to the Intel SDLT - Averaging Filter with Intel SDLT - Boosting the performance of Cartesian to Spherical co-ordinates conversion using Intel SDLT Code Modernization Links - Modern Code Developer Community - [https://software.intel.com/modern-code](https://software.intel.com/modern-code) - Intel Code Modernization Enablement Program - [https://software.intel.com/code-modernization-enablement](https://software.intel.com/code-modernization-enablement) - Intel Parallel Computing Centers - [https://software.intel.com/ipcc](https://software.intel.com/ipcc) - Technical Webinar Series Registration - Intel Parallel Universe Magazine Questions? INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS". NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. 
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Copyright © 2016, Intel Corporation. All rights reserved. Intel, Pentium, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the United States and other countries.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804

THANK YOU FOR YOUR TIME
Alex M Wells & Anoop Madhusoodhanan Prabha
www.intel.com/hpcdevcon

Possible Overhead When Storing Objects
• We are assigning a new `Boundary` object to a container.
• We only want to change the "y" component of the bounds, but:
  – Because we can only import entire objects, we must initialize a new `Point3ds` and transfer the entire object into the container.
  – This will include the "x" and "z" components despite the fact that they haven't changed.
  – Because it's an assignment, the compiler can't figure out that the values haven't changed.
  – Perhaps another thread had changed the values, and we are reassigning them back.
• The point is that the compiler won't eliminate the assignments to "x" and "z" inside the container.

```cpp
for(int i = 0; i < count; ++i) {
    const Point3ds point = points[i];
    const Point3ds boundary = bounds[i];
    if( point.y > boundary.y ) {
        Point3ds newBoundary(boundary.x, point.y, boundary.z);
        bounds[i] = newBoundary;
    }
}
```

SDLT Proxy Objects Provide an Interface to Data Members
- The proxy objects SDLT returns from the `[i]` operator provide an interface to access the individual data members of the primitive.
- The interface provides a method named after each data member, returning a proxy to that data member for element `[i]` inside the Container.
- Now only the "y" component is updated, and the loop is much more efficient.
A NOVEL APPROACH FOR HOTEL MANAGEMENT SYSTEM USING CASSANDRA

Hima S¹, Varalakshmi P² and Surekha Mariam Varghese³
Department of Computer Science and Engineering, M.A. College of Engineering, Kothamangalam, Kerala, India

ABSTRACT

Apache Cassandra is a distributed storage system for managing very large amounts of structured data. Cassandra provides a highly available service with no single point of failure. Cassandra aims to run on top of an infrastructure of hundreds of nodes, possibly spread across different data centers, where small and large components fail continuously. Cassandra manages persistent state in the face of these failures, which drives the reliability and scalability of the software systems built on it. Cassandra does not support a full relational data model, although in many ways it resembles a traditional database and shares many design and implementation strategies with one. In this paper, we discuss an implementation of Cassandra in a Hotel Management System application. Cassandra was designed to run on cheap commodity hardware and provides high write throughput and read efficiency.

KEYWORDS

Cassandra, Data model.

1. INTRODUCTION

Apache Cassandra is an open source, distributed, highly available, decentralized, elastically scalable, fault-tolerant, consistent, column-oriented database. Cassandra's distribution design is based on Amazon's Dynamo and its data model on Google's Bigtable. Cassandra was introduced at Facebook; it is now used at some of the most popular sites on the Web [1]. Apache Cassandra is a type of NoSQL database designed to handle large amounts of data across many servers. This database provides high availability and no single point of failure.
Some of the important points about Apache Cassandra:
(1) It is scalable, consistent and fault-tolerant.
(2) It is a key-value as well as a column-oriented database.
(3) Its data model is based on Google's Bigtable and its distribution design on Amazon's Dynamo.
(4) Introduced at Facebook, it differs sharply from relational database management systems.
(5) Cassandra implements a Dynamo-style replication model and adds a more powerful "column family" data model.
(6) Cassandra is being used by some of the biggest companies, such as Facebook, Twitter, Cisco, Rackspace, eBay, Netflix, and more.

Cassandra has become popular because of its outstanding technical features. Given below are some of the features of Cassandra:

- **Elastic scalability**: Cassandra allows adding more hardware to accommodate more customers and more data as required.
- **Always-on architecture**: Cassandra is continuously available for critical business applications that cannot afford a single point of failure.
- **Fast linear-scale performance**: Cassandra increases throughput as the number of nodes in the cluster is increased, and therefore provides quick response times.
- **Flexible data storage**: Cassandra handles all possible data formats, including structured, semi-structured, and unstructured, and can accommodate changes to data structures according to user needs.
- **Easy data distribution**: Cassandra provides the flexibility to distribute data where the user needs it by replicating data across multiple data centers.
- **Transaction support**: Cassandra supports properties like Atomicity, Consistency, Isolation, and Durability (ACID).
- **Fast writes**: Cassandra was designed to run on cheap commodity hardware. It performs fast writes and can store hundreds of terabytes of data without sacrificing read efficiency.

The rest of this paper is organized as follows. Section 2 reviews the drawbacks of existing relational databases. Section 3 discusses NoSQL databases. Section 4 presents the Cassandra architecture. Section 5 describes the data model of Cassandra. Section 6 describes the implementation details of the Hotel Management System, and the conclusion follows.

2. EXISTING RELATIONAL DATABASE

Relational databases are as popular as NoSQL databases, but they have various drawbacks at scale. These problems are typically addressed in one or more of the following ways, often in this order:

Throw hardware at the problem by adding more memory, adding faster processors, and upgrading disks. This is known as vertical scaling. When the problems arise again, the answer appears to be similar: now that one box is maxed out, you add hardware in the form of additional boxes in a database cluster. Now the problems become data replication and consistency during regular usage and in failover scenarios.

Next, update the configuration of the database management system. This might mean optimizing the channels the database uses to write to the underlying filesystem, or turning off logging or journaling, which frequently is not a desirable (or, depending on the situation, legal) option.

Having given that attention to the database system, attention turns to the application: try to improve the indexes and optimize the queries. But presumably a team operating at this scale was not wholly ignorant of index and query optimization, and already had them in pretty good shape. So this becomes a painful process of picking through the data access code to find any opportunities for fine-tuning. This might include reducing or reorganizing joins, or throwing out resource-intensive features such as XML processing within a stored procedure. Of course, the XML processing was presumably being done for a reason, so if it must be done somewhere, it moves to the application layer, in the hope of solving it there without breaking something else in the meantime.

Employ a caching layer. For larger systems, this might include distributed caches such as memcached, EHCache, Oracle Coherence, or other related products.
Now we have a consistency problem between updates in the cache and updates in the database, which is exacerbated over a cluster.

It is also possible to duplicate some of the data to make it look more like the queries that access it. This process, called denormalization, is antithetical to the five normal forms that characterize the relational model and violates Codd's twelve rules for relational data.

Relational databases, like Cassandra, also support ACID properties. ACID is an acronym for Atomic, Consistent, Isolated, Durable, which are the gauges we can use to assess that a transaction has executed properly and that it was successful:

**Atomic** Atomic means "all or nothing": when a statement is executed, every update within the transaction must succeed in order for the transaction to be called successful; there is no partial failure where one update succeeded and another related update failed. The common example here is a monetary transfer at an ATM: the transfer requires subtracting money from one account and adding it to another account. This operation cannot be subdivided; both steps must succeed.

**Consistent** Consistent means that data moves from one correct state to another correct state, with no possibility that readers could view different values that don't make sense together. For example, if a transaction attempts to delete a Customer and her Order history, it cannot leave Order rows that reference the deleted customer's primary key; this is an inconsistent state that would cause errors if someone tried to read those Order records.

**Isolated** Isolated means that transactions executing concurrently will not become entangled with each other; they each execute in their own space. That is, if two different transactions attempt to modify the same data at the same time, then one of them will have to wait for the other to complete.

**Durable** Once a transaction has succeeded, the changes will not be lost.
This doesn't imply that another transaction won't later modify the same data; it just means that writers can be confident that the changes are available for the next transaction to work with as necessary.

3. NOSQL DATABASE

A NoSQL database (also called Not Only SQL) is a database that provides a mechanism to store and retrieve data other than through the tabular relations used in relational databases. These databases are schema-free, support easy replication, have simple APIs, are eventually consistent, and can handle huge amounts of data. The primary objectives of a NoSQL database are:

- simplicity of design,
- horizontal scaling, and
- finer control over availability.

NoSQL databases use different data structures compared to relational databases, which makes some operations faster. The suitability of a given NoSQL database depends on the problem it must solve.

4. CASSANDRA ARCHITECTURE

The design goal of Cassandra is to handle big-data workloads across multiple nodes without any single point of failure. Cassandra has a peer-to-peer distributed architecture, and data is distributed among all the nodes in a cluster [2].

- All the nodes in a cluster play the same role. Each node is independent and at the same time interconnected with the other nodes.
- Each node in a cluster can accept read and write requests, regardless of where the data is actually located in the cluster.
- When a node goes down, read/write requests can be served from other nodes in the network.

4.1. Data Replication in Cassandra

In Cassandra, one or more of the nodes in a cluster act as replicas for a given piece of data. If it is detected that some of the nodes responded with an out-of-date value, Cassandra returns the most recent value to the client. After returning the most recent value, Cassandra performs a read repair in the background to update the stale values.
Figure 1 shows a schematic view of how Cassandra uses data replication among the nodes in a cluster to ensure no single point of failure. Cassandra uses the Gossip Protocol to allow the nodes to communicate with each other and detect any faulty nodes in the cluster.

4.2. Components of Cassandra

The key components of Cassandra are as follows:

- **Node**: The place where data is stored.
- **Data center**: A collection of related nodes.
- **Cluster**: A component that contains one or more data centers.
- **Commit log**: The commit log is a crash-recovery mechanism in Cassandra. Every write operation is written to the commit log.
- **Mem-table**: A mem-table is a memory-resident data structure. After the commit log, the data is written to the mem-table. Sometimes, for a single column family, there will be multiple mem-tables.
- **SSTable**: A disk file to which the data is flushed from the mem-table when its contents reach a threshold value.
- **Bloom filter**: Quick, nondeterministic algorithms for testing whether an element is a member of a set; a special kind of cache. Bloom filters are consulted on every query.

4.3. Cassandra Query Language

Users can access Cassandra through its nodes using the Cassandra Query Language (CQL). CQL treats the database (keyspace) as a container of tables. Programmers use cqlsh, an interactive prompt for working with CQL, or separate application-language drivers.

4.4. Write Operations

Every write is first captured in the commit log of the node. The data is then stored in the mem-table. Whenever the mem-table is full, the data is written to the SSTable data file. All writes are automatically partitioned and replicated throughout the cluster. Cassandra periodically consolidates the SSTables, discarding unnecessary data.

4.5. Read Operations

During read operations, Cassandra gets values from the mem-table.
It also checks the bloom filter to find the appropriate SSTable that holds the required data.

5. DATA MODEL

The data model of Cassandra is significantly different from that of a normal RDBMS [2].

5.1. Cluster

The Cassandra database is distributed over several machines that operate together [3]. The outermost container is known as the Cluster. For failure handling, every node contains a replica; in case of a failure, the replica takes charge. Cassandra arranges the nodes in a cluster in a ring and assigns data to them.

5.2. Keyspace

A keyspace is the outermost container for data in Cassandra. The basic attributes of a keyspace in Cassandra are:

- **Replication factor**: The number of machines in the cluster that will receive copies of the same data.
- **Replica placement strategy**: The strategy used to place replicas in the ring. The available strategies are the simple strategy (rack-unaware), the old network topology strategy (rack-aware), and the network topology strategy (datacenter-aware).
- **Column families**: A keyspace is a container for a list of one or more column families. A column family is a container for a collection of rows, and each row contains ordered columns. Column families represent the structure of the data. Each keyspace has at least one and often many column families.

6. IMPLEMENTATION DETAILS

The implementation of Apache Cassandra includes installing and configuring Cassandra. Initially, download Cassandra from cassandra.apache.org and unpack the folder named cassandra. Open the cassandra.yaml file, which is available in the conf folder of the Cassandra folder. Verify the following configurations:
- **data_file_directories**: "/var/lib/cassandra/data"
- **commitlog_directory**: "/var/lib/cassandra/commitlog"
- **saved_caches_directory**: "/var/lib/cassandra/saved_caches"

**Setting the path**

Set the path, for example CASSANDRA_HOME=C:\apache-cassandra-1.2.19

**Starting Cassandra**

$ cd $CASSANDRA_HOME
$ ./bin/cassandra -f

**Starting cqlsh**

Start cqlsh using the command cqlsh as shown below. It gives the Cassandra cqlsh prompt as output.

$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3]
cqlsh>

An application of a Cassandra implementation is a Hotel Management System (HMS) [5]. The Cassandra database is chosen for this application because of its increasing throughput as the number of nodes increases, its continuous availability for critical business applications, and its elastic scalability. Moreover, Cassandra handles all possible data formats and distributes data by replicating it across multiple data centres. Cassandra supports ACID properties and works on cheap commodity hardware.

In the keyspace of the Hotel Management System (Figure 2) we have the following column families: Hotel, HotelByCity, Guest, Reservation, PointOfInterest, Room, and RoomAvailability. In this design, we transferred some of the tables, such as Hotel and Guest, to column families. Other tables, such as PointOfInterest, have been denormalized into a super column family. We have created an index in the form of the HotelByCity column family, and we have combined room and amenities into a single column family, Room. Columns such as type and rate have corresponding values; other columns, such as hot tub, just use the presence of the column name itself as the value and are otherwise empty.
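Before defining the column families above, the HMS keyspace itself must be created with the replication attributes described in Section 5.2. A minimal Python sketch that composes the corresponding CQL statement (the keyspace name and factor are illustrative, and the exact CREATE KEYSPACE syntax varies slightly across Cassandra versions):

```python
def create_keyspace_cql(name, replication_factor, strategy="SimpleStrategy"):
    """Compose the CQL statement that creates a keyspace with a given
    replica placement strategy and replication factor (Section 5.2)."""
    return (
        f"CREATE KEYSPACE {name} WITH replication = "
        f"{{'class': '{strategy}', 'replication_factor': {replication_factor}}};"
    )

# Keyspace for the Hotel Management System, replicated on 3 nodes.
print(create_keyspace_cql("hms", 3))
# CREATE KEYSPACE hms WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```

The resulting statement can be pasted into cqlsh or executed through an application-language driver.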
The Hotel Management System includes details about different hotels, the guests who stay in the hotels, the availability of rooms for each hotel, and a record of each reservation, which is a certain guest in a certain room for a certain period of time (called the "stay"). Hotels typically also maintain a collection of "points of interest," which are shopping galleries, monuments, museums, parks, or other places near the hotel that guests might like to visit during their stay.

Figure 2 shows the column families of the HMS keyspace and their columns:

- **Hotel**: hotelID, name, phone, address, city, state, zip
- **HotelByCity**: city, state, hotel1, hotel2, ...
- **PointOfInterest**: hotelID, ...
- **Guest**: phone, fname, lname, email
- **RoomAvailability**: roomID, date, ...
- **Reservation**: resID, ...
- **Room**: type, rate, coffee, tv, hottub, ...

Figure. 2 Hotel Management System

Our Hotel Management System application, designed with Cassandra, includes the following characteristics:

- Find hotels in a given area.
- Find information about a specific hotel, such as its name, location, room availability, etc.
- Find interesting locations near a given hotel.
- Find the availability of rooms in a given date range.
- Find the amenities and rate for a room.
- Book the selected rooms by entering guest information.

The database in Cassandra is created using a keyspace. A keyspace in Cassandra is a namespace that defines data replication on nodes. A cluster contains one keyspace per node. The application we're building will do the following things:

1. Create the database structure.
2. Populate the database with hotel and point of interest data.
The hotels are stored in standard column families, and the points of interest are in super column families.
3. Search for a list of hotels in a given city. This uses a secondary index.
4. Select one of the hotels returned in the search, and then search for a list of points of interest near the chosen hotel.
5. Book the hotel by doing an insert into the Reservation column family; this should be straightforward at this point, and is left to the reader.

6.1. Table Operations

Tables are created with the CREATE TABLE command; the tables required for the Hotel Management System application can be created using this command. The syntax is

CREATE (TABLE | COLUMNFAMILY) <tablename> (<column-definition>, <column-definition>, ...)

The primary key uniquely identifies a row and may consist of one or more columns of a table; defining a primary key is therefore mandatory when creating a table [4].

6.2. CRUD Operations

Data is created in a table with the INSERT command. The syntax is

INSERT INTO <tablename> (<column1 name>, <column2 name>, ...) VALUES (<value1>, <value2>, ...)

UPDATE is the command used to update data in a table. Its syntax is

UPDATE <tablename> SET <column name> = <new value>, <column name> = <value>, ... WHERE <condition>

Data is read from a table using the SELECT clause. Using this clause we can read a whole table, a single column, or a particular cell. Its syntax is

SELECT <columns | *> FROM <table name> WHERE <condition>

Data is deleted from a table with the DELETE command. Its syntax is

DELETE FROM <identifier> WHERE <condition>

6.3. Performance Evaluation

One of the hallmarks of Cassandra is its high performance, for both read and write operations. Cassandra scales linearly as new nodes are added to a cluster.
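The linear scaling noted above comes from consistent hashing: each node owns a range of tokens on a ring, a key is placed on the ring by hashing it, and its replicas are the next nodes clockwise. A toy Python sketch of this placement (node names and the hash are illustrative; Cassandra uses its own partitioners, not MD5 modulo 2^32):

```python
import bisect
import hashlib

def token(key):
    # Map a key or node name onto a 0 .. 2**32-1 ring position.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 2**32

class Ring:
    def __init__(self, nodes, replication_factor=3):
        self.rf = replication_factor
        # Nodes sorted by their token position on the ring.
        self.ring = sorted((token(n), n) for n in nodes)

    def replicas(self, key):
        """Walk clockwise from the key's token, collecting rf nodes."""
        tokens = [t for t, _ in self.ring]
        start = bisect.bisect(tokens, token(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.rf, len(self.ring)))]

ring = Ring(["node1", "node2", "node3", "node4"], replication_factor=3)
print(ring.replicas("hotel:AZC_043"))  # three distinct nodes, deterministic
```

Adding a node inserts one more token into the ring, so only the keys in the adjacent range move, which is what makes scaling out cheap.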
The performance of the Hotel Management System application was evaluated on the following hardware: an Intel Core CPU @ 1.80 GHz, a 64-bit operating system, an x64-based processor, and 4.00 GB RAM. The software specification is Apache Cassandra version 1.2.19. Figure 3 shows the performance of Cassandra operations. In the performance-evaluation graph, the X axis represents throughput in ops/sec and the Y axis represents average latency in ms. Three operations (update, insert and read) are evaluated for performance. The graph shows that the update operation achieves very high throughput at low latency. The insert operation likewise achieves high throughput [7] at low latency, although its latency is higher than that of the update operation. The read operation has low throughput and high latency.

**Figure. 3 Performance Evaluation**

7. CONCLUSION

Cassandra, a NoSQL database, has been built, implemented, and operated as a scalable storage system providing high performance and wide applicability. We demonstrated that Cassandra can support a very high update throughput while delivering low latency, and that it is very efficient compared with other databases.

REFERENCES

Authors

**Hima S.** is currently pursuing an M.Tech in Computer Science and Engineering at Mar Athanasius College of Engineering. She completed her B.Tech at Mohandas College of Engineering and Technology, Thiruvananthapuram. Her areas of research are Image Processing, Databases and Data Mining.

**Varalakshmi P.** is currently pursuing an M.Tech in Computer Science and Engineering at Mar Athanasius College of Engineering. She completed her B.Tech at P.R.S. College of Engineering and Technology, Thiruvananthapuram. Her areas of research are Data Mining, Databases and Image Processing.

**Surekha Mariam Varghese** is currently heading the Department of Computer Science and Engineering, M.A.
College of Engineering, Kothamangalam, Kerala, India. She received her B.Tech degree in Computer Science and Engineering in 1990 from the College of Engineering, Trivandrum, affiliated to Kerala University, and her M.Tech in Computer and Information Sciences from Cochin University of Science and Technology, Kochi, in 1996. She obtained a Ph.D. in Computer Security from Cochin University of Science and Technology, Kochi, in 2009. She has around 25 years of teaching and research experience at various institutions in India. Her research interests include Network Security, Database Management, Data Structures and Algorithms, Operating Systems, Distributed Computing, and Machine Learning. She has published 17 papers in international journals and international conference proceedings. She has served as chair for many international conferences and journals.
Mashups in Second Life @ VU

Anton Eliëns, FEW, VU University Amsterdam, eliens@cs.vu.nl
Frans Feldberg, FEWEB, VU University Amsterdam, jfelfberg@feweb.vu.nl
Elly Konijn, FSW, VU University Amsterdam, ea.konijn@fsw.vu.nl
Egon Compter, Communicatie, VU University Amsterdam, e.compter@dienst.vu.nl

ABSTRACT

In this paper we explore how to enhance our presence in Second Life by utilizing Web Services in meaningful compositions (mashups). After discussing the technical requirements that must be met, we discuss possible applications of mashups in Second Life, including serious games, and delineate a behavioral model that allows for tracking the behavior of visitors to our world. Taking our requirements analysis and envisioned design, which essentially includes the 3D nature of Second Life worlds, as a starting point, the paper provides an overview of research and development(s) that may contribute to the realization of meaningful mashups and serious games in Second Life.

Keywords and phrases: Second Life, Web Services, Mashups, Virtual Economy, Serious Games, User Tracking

Introduction

Second Life seems to be overtaking the world. In the whole range of community-building platforms, Second Life stands out as an immersive 3D world with almost stunning adoption by individuals, companies and institutions alike, followed attentively by the press. Not entirely without an understanding of the value of press coverage, the VU University Amsterdam decided to create a presence in Second Life, by creating a virtual campus, to realize a (virtual) community of learners, Eliens et al. (2007). And, indeed, we succeeded in being the first university in The Netherlands with a presence in Second Life and, as hoped, this was covered in the 8 o'clock nation-wide TV news.
More substantial than getting into a nation-wide television broadcast, however, is our aim to communicate our institutional goals, creating a community of learners, by creating a virtual campus in Second Life, offering an information portal as well as a meeting point, in a media platform that is widely adopted by our target community. Virtual presence in Second Life, obviously, is not enough. The relatively long history of virtual worlds has shown that a lack of interesting content and functionality easily leads to boredom, disinterest, and hence churn: users dropping off. As a consequence, there is a need for sustainable functionality that both motivates people to come back and participate and, otherwise why choose Second Life, makes essential use of the 3D immersive environment offered by Second Life. In this paper, we will explore how to use web services in meaningful compositions or mashups to enhance our presence in Second Life, and create a community where visitors actively participate in both education and research.

Structure

The structure of this paper is as follows. First, we will briefly describe the construction of our virtual campus and discuss the likely success factors of Second Life. Then, we will outline our ideas on how to use Second Life as a platform for interaction and serious games, based on what we envision as targets for education and research. We will investigate what technological support is needed to create mashups in Second Life and, after that, whether Second Life offers the functionality needed to incorporate web services. Further, we will characterize applications based on web services that make essential use of the 3D environment and fit within the virtual economy as it exists in Second Life, in particular serious games and (corporate) awareness systems.
And, finally, before giving our conclusions, we will sketch a behavioral model to characterize user interaction in Second Life, which may contribute to the realization of meaningful mashups and serious games in Second Life.

VU @ Second Life – Presence in a Participatory Culture

What has been characterized as a shift of culture, from a media-consumer culture to a participatory culture, Jenkins (2006), where users also actively contribute content, is for our institution one of the decisive reasons to create a presence in Second Life: to build a virtual platform that may embody our so-called community of learners, where both staff and students cooperate in contributing content, content related to our sciences, that is. In December 2006, we discussed the idea of creating presence in Second Life. Our initial targets were to build a first prototype, to explore content creation in Second Life, to create tutorials for further content creation, and to analyze technical requirements and opportunities for deployment in education and research. Two and a half months later, we were online with a virtual campus that contains a lecture room, a telehub from which teleports are possible to other places in the building, billboards containing snapshots of our university's website from which visitors can access the actual website, a botanical garden mimicking the VU Hortus, and even a white-walled experimentation room suggesting a 'real' scientific laboratory. All building and scripting were done by a group of four students from all the faculties involved, with a weekly walkthrough in our 'builders meeting' to re-assess our goals and solve technical and design issues. The overall style is realistic, although not in every detail. Most important was to create a visual impression of resemblance and to offer the opportunity to present relevant information in easily accessible, yet immersive, ways. Cf. Bolter & Grusin (2000).
Creating a Platform for Interaction – Virtual versus Real

The first idea that comes to mind for deploying Second Life as a platform for education is, naturally, to offer courses online in our virtual lecture room. But, although we do have plans to give lectures (college) on law, probably including the enactment of a particular case, we consider this approach rather naive and, frankly, we see no reason to include what may be considered an outdated paradigm of learning in our virtual campus when there might be more appealing alternatives. Similarly, using the virtual laboratory for experiments might not be the best way to offer courses, although, again, we do intend to provide a model of a living cell, allowing students to study the structure, functionality and behavior of organic cells in virtual space.

Is decision-making in a virtual environment the same as, or similar to, decision-making in the real world? And what about investments, and starting a new company? From a research perspective, the Second Life economy, powered by Linden dollars and governed by the Lindex exchange, provides an interesting platform to study decision-making behaviors, for example with a group of students in a course about decision-support systems. However, instead of studying patterns of communication, it seems more interesting at this stage to explore how to enhance communication in a shared virtual world by actually deploying virtual objects, instead of relying on chatting and textual information, and to design tasks that require cooperation in an essential manner. One way to establish a relation with reality is to provide a virtual context for objects and activities existing in actual reality. For example, in a cultural heritage application, we explored transparent overlays in 3D to relate paintings to the world they depict, Rutledge et al. (2000).
In addition, we would like to deploy Second Life as a platform for serious games, such as service management games, Eliens & Chang (2007). Serious corporate games can then be located in virtual replicas of the actual offices in Second Life, thus promoting (corporate) identification with their real counterparts. The goal of this paper is to explore the use of web services for, among other things, the creation of such games, and to provide a behavioral model that allows us to give an interpretation to users' behavior patterns, and may perhaps even help to guide users' behavior in Second Life by providing appropriate recommendations.

Fig 1. (a) visitors outside (b) visitors inside

Our virtual campus is meant to serve as an information portal and as a meeting ground, where students, staff and visitors can meet and communicate, as well as a place where teachers and researchers can conduct experiments aimed at discovering new ways of teaching and doing research. The 1st of March 2007, we went live. See fig. 1. After the TV news broadcast, which resulted in a dramatic increase in visitors, we were even surprised by the invasion of an army of Mario Brothers. Hilarious and, fortunately, non-harmful. All in all, the results exceeded our expectations, and the students were praised for the results of their building efforts, but as a team we must continue to think about how to deploy Second Life as a platform for education and research projects.

As we argue in Eliens et al. (2007), the surprising success and appeal of Second Life may be attributed to an optimal combination of avatar modification options, gesture animations, in-game construction tools, and facilities for communication and social networking, such as chatting and instant messaging. Incorporating elements of community formation, and very likely also the built-in physics and the inclusion of elementary economic principles, seem to be the prime distinguishing factors responsible for the success of Second Life. In addition, the possibility of recording collaborative enacted stories, using built-in machinima\(^1\), certainly may contribute to its appeal.

\(^1\)www.machinima.org

Web Services and Mashups – a brief overview

By now the phrase Web 2.0, as well as applications representing it such as Flickr and YouTube, is well established and enjoyed by a wide community. Each day new items are added to the growing list of mashups and the number of web services that constitute the building blocks of mashups also shows a steady growth. Mashups seem to be the easy way to start up a company, since the technology is relatively easy and, making use of appropriate services, initial investment costs can be low. C.f. Shanahan (2007). What Web 2.0 stands for, from a technical perspective, is succinctly expressed in Dorai's Learnlog: XML Is The Fabric Of Web 2.0 Applications:

- The client side is AJAX (Asynchronous Javascript and XML)
- The server application typically exposes data through XML
- The interaction model is web services
- Mashups combine multiple web services to create new types of applications

And even though many alternative representations, such as JSON (Javascript Object Notation), are increasingly being used, all in all XML may be regarded as the interlingua of the Web 2.0.
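The pattern sketched in the list above, a service exposing data as XML that the client picks apart, can be illustrated with Python's standard library (the feed fragment is invented for the purpose of the example):

```python
import xml.etree.ElementTree as ET

# A made-up XML fragment in the shape a Web 2.0 photo service might return.
response = """
<photos>
  <photo id="1" title="campus" url="http://example.org/1.jpg"/>
  <photo id="2" title="hortus" url="http://example.org/2.jpg"/>
</photos>
"""

# The client side of a mashup: parse the XML and extract what it needs.
root = ET.fromstring(response)
titles = [p.get("title") for p in root.findall("photo")]
print(titles)  # ['campus', 'hortus']
```

A JSON-returning service would be consumed the same way with `json.loads`, which is exactly why JSON competes with XML as the interlingua.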
Before taking a closer look at the communication protocol(s) underlying Web 2.0 and de-constructing the tight link of AJAX to HTML in-page formatting, it is worthwhile, following Shanahan (2007), to give an overview of a selected number of services that may be used to create mashups:

- google – code.google.com/
- yahoo – developer.yahoo.com/
- del.icio.us – del.icio.us/help/api/
- flickr – www.flickr.com/services/
- bbc – www0.rdthdo.bbc.co.uk/services/
- youtube – www.youtube.com/dev

Although mashups featuring google maps seem to be the dominant mashup type, other services, such as those offered by del.icio.us, Flickr and the BBC, might prove to be more worthwhile for 'serious' applications. For example, for developing e-commerce applications Amazon offers services for product operations, such as item search and similarity lookup; remote shopping carts, to create and manage purchase collections; customer content, to access information contributed by customers; and third-party listings, to find related resellers. It is important to note that many of these services, such as the shopping cart services, may be used independently of the commercial offerings of Amazon!

Most of the service providers and services mentioned above are accessible using a choice of protocols, including WSDL, SOAP, XML-RPC and the REST protocol. The REST protocol seems to be the most widespread and, as we will discuss in the next section, it seems to be the most appropriate protocol in Second Life. REST stands for Representational State Transfer. In essence, the REST protocol uses the URL as a command line for stateless RPC invocations, which allows services to be executed by typing in the address box of a web browser. A great tutorial on the REST protocol can be found in Joe Gregorio's column, The Restful Web. As fully explained in Van der Vlist et al.
(2007), the phrases representation, state and transfer, respectively, stand for:

- representation – encoding in a particular format
- state – data encapsulated in an object
- transfer – using HTTP methods

In practice, the use of REST means that the state associated with a resource or service must be managed by the client. Together with mechanisms such as content negotiation and URL rewriting, REST provides a simple, yet powerful, method to invoke services using HTTP requests.

A common misunderstanding is that AJAX is intimately tied to web browsers and in-page HTML formatting. This misunderstanding is due to the fact that AJAX is often used to improve the user experience of web pages by emulating RIAs (Rich Internet Applications) using DHTML and CSS. However, the real meaning of AJAX, in our view, is that AJAX allows for asynchronous client-controlled server requests that are executed without an immediately visible effect for the user.

We explored the use of AJAX and web services in an X3D/VRML implementation of PANORAMA, a system meant to support social awareness in a work environment, Vyas et al. (2007), Vyas et al. (2007b). PANORAMA represents casual encounters in the work environment and displays self-reflections, that is postcards and other items contributed by employees, on a large display in a central public space. In section 6 we will discuss how to realize PANORAMA in Second Life, from both technical and aesthetic perspectives. Fig. 2, taken from Si & Eliens (2007), illustrates the architecture of an Ajax-based web implementation of PANORAMA, which includes facilities for game playing as occasional battle(s), using the google GWT toolkit to allow users to contribute their image material, and a PHP server for storage and retrieval of images.
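The REST idea above, with the URL as the command line of a stateless invocation, can be made concrete with Python's standard library; the endpoint and parameters below are invented for illustration:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build a stateless REST-style invocation: everything the server needs
# travels in the URL itself, nothing lives in server-side session state.
base = "http://example.org/services/rest/"
params = {"method": "photos.search", "tag": "campus", "per_page": 5}
url = base + "?" + urlencode(params)
print(url)

# The server side recovers the same parameters from the query string.
query = parse_qs(urlparse(url).query)
print(query["method"][0], query["tag"][0])  # photos.search campus
```

Because the whole call is a plain URL, the same request can be issued from a browser address box, a script, or, as the next section shows, from inside a virtual world.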
---
2www.programmableweb.com/mashuplist/
3dorai.wordpress.com/tag/mashups/
4www.json.org/
5aws.amazon.com
6www.xml.com/pub/a/2004/12/01/restful-web.html

The Web 2.0 offers a lively arena for consumers and developers alike, with a multitude of blogs discussing the future of the web. For example, in Dion Hinchcliffe's rebuttal of Jeffrey Zeldman's Web 3.0 blog, entitled Is Web 2.0 Entering "The Trough of Disillusionment"?, it is suggested that our services could be even more powerful by creating semantic mashups. Although the notion of semantic web technology is widely known and accepted, for reference we include a quote from Nova Spivack, taken from Dan Farber and Larry Dignan's blog Web 2.0 isn't dead, but Web 3.0 is bubbling up:

The Semantic Web is a set of technologies which are designed to enable a particular vision for the future of the Web, a future in which all knowledge exists on the Web in a format that software applications can understand and reason about. By making knowledge more accessible to software, software will essentially become able to understand knowledge, think about knowledge, and create new knowledge. In other words, software will be able to be more intelligent, not as intelligent as humans perhaps, but more intelligent than, say, your word processor is today.

But even in the semantic web community the discussion whether to go for folksonomies or formal ontologies rages, Mika (2007), and it is not clear at this stage what will prove to be more powerful: HTML scraping, tags, microformats, or full ontologies. Anyway, from our perspective, incorporating web services in Second Life, we are not in a position to enter this discussion, but simply explore what is useful in realizing meaningful mashups in 3D virtual environments.
To conclude this brief overview of web services and mashups, we wish to give another quote from Dorai's Learnlog, this time from Jon Udell, in his blog post on his move to Microsoft:

The most powerful mashups don't just mix code and data, they mix cultures.

which provides a challenge that transcends all issues of mere technological correctness.

Technological Infrastructure – using Web Services in Second Life

Second Life offers an advanced scripting language with a C-like syntax and an extensive library of built-in functionality. Although it has support for objects, LSL (the Linden Scripting Language) is not object-oriented. Cf. Eliens (2000). Scripts in Second Life are server-based; that is, all scripts are executed at the server, to allow sharing between visitors. Characteristic of LSL are the notions of state and event handler, which react to events in the environment. Among the built-in functions there are functions to connect to a (web) server and obtain a response, in particular (with reference to their wiki pages):

- request – wiki.secondlife.com/wiki/LlHTTPRequest
- escape – wiki.secondlife.com/wiki/LlEscapeURL
- response – wiki.secondlife.com/wiki/Http_response

Other functions to connect to the world include sensors, for example to detect the presence of (visitors') avatars, and chat and instant messaging functions to communicate with other avatars from scripts. In addition, LSL offers functions to control the behavior and appearance of objects, including functions to make objects react to physical laws, to apply force to objects, to activate objects attached to an avatar (such as the phantom Mario sprites mentioned earlier), and functions to animate textures, which can be used to present slide shows in Second Life. On the Mashable Social Networking News site a brief overview is given of the use of web services in Second Life, entitled Second Life + Web 2.0 = Virtual World Mashups.
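Since an LSL script can only fetch a URL and receive the response body as a plain string, the actual mashup logic typically lives on an external web server that the in-world object calls via llHTTPRequest. A minimal Python sketch of such a server-side dispatcher (the paths, parameters and responses are illustrative, not any particular deployed service):

```python
from urllib.parse import urlparse, parse_qs

# In-memory stand-in for the server's per-avatar database.
tags_by_user = {}

def handle(url):
    """Dispatch a REST-style request issued from an in-world script.
    Returns the plain string the LSL http_response handler will see."""
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    user = query.get("user", ["unknown"])[0]
    if parsed.path == "/seen":
        return f"recorded presence of {user}"
    if parsed.path == "/set_tag":
        tags_by_user[user] = query["tag"][0]
        return f"tag stored for {user}"
    if parsed.path == "/touched":
        tag = tags_by_user.get(user, "default")
        # A real server would now call an external image service with this
        # tag and return the URL of a matching photograph.
        return f"http://example.org/photo?tag={tag}"
    return "unknown resource"

print(handle("/set_tag?user=SomeAvatar&tag=campus"))
print(handle("/touched?user=SomeAvatar"))  # http://example.org/photo?tag=campus
```

Keeping the server's replies as simple delimited strings plays to LSL's strengths, since the script can split them into a list without any XML parsing.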
To access Second Life from the outside in (that is, from a web browser), so-called slurls may be used, for example to reach VU Second Life11, and all slurls listed in del.icio.us under slurlmarker12 may also be used to activate in-world teleporting using scraping techniques. As remarked in the hackdiary of Matt Biddulph13, Second Life (currently) lacks the ability to parse XML or JSON, so the best way to incorporate web services is to set up a web server with adequate resources. As Matt Biddulph indicates, to access Flickr photographs for a particular user (avatar), a web server may contain the following resources:

- /seen?user=SomeAvatar – records the presence of SomeAvatar
- /touched?user=SomeAvatar – invokes the Flickr API with the user's tag
- /set_tag?user=SomeAvatar&tag=FavoriteTag – records SomeAvatar's favourite tag

---
7 web2.sys-con.com/read/172417.htm
8 www.web2journal.com/read/361294.htm
9 blogs.zdnet.com/BTL/?p=3934
10 mashable.com/2006/05/30/second-life-web-20-virtual-world-mashups/
11 slurl.com/secondlife/VU%20University%20NL/29/151
12 del.icio.us/tag/slurlmarker
13 www.hackdiary.com/archives/000085.html

For example, in response to a 'touch' event, the touched resource consults the database for the user's tag and asks the Flickr API for a random photo with that tag. It then returns a string containing the URL of a particular photograph. LSL functions used in this application include sensors, to check for presence, listen functions, to respond to spoken commands, and touch events, for the physical interface. In addition to supporting strings and lists, LSL provides a perl-like split function to convert a string into a list of strings, thus allowing for the processing of multiple items in response to a server request. Another example of using web services in Second Life is writing blogs15 from within Second Life using the BlogHUD16 developed by Koz Farina, who is also reported to have found a flash hack that allows for reading RSS feeds.
As explained by Koz Farina:

> Quicktime supports Flash, but only up to Flash version 5. We're up to version 9 on that now! Luckily, I have been dabbling with Flash since the early days, so already knew how to do this 'the old way'... So, Flash is doing all the work. No LSL at all... I heart feeds. Did I say 'I heart feeds?'

The RSS display uses the ability to stream Quicktime video in Second Life, and again the mashup is not created in Second Life but by appropriate server support. In a similar vein we may incorporate live streaming video19, for example by using WireCast17 to capture and organize live camera input, possibly together with the screen output of other applications such as powerpoint, which must then be sent to a streaming server supporting Quicktime, such as Apple's Darwin18, which may then be accessed from Second Life to texture a display object. Finally, as another Web 2.0 to Web 3D phenomenon, announced in New World Notes15, we may mention the use of Twitter20 messages, which allow residents to send and receive messages about ongoing activities. A similar service is reported to exist for jaiku21 messages.

Living in a Virtual Economy – Applications

Mashups on the Web are interesting representatives of what one may call a virtual economy, with a business model that is not grounded in traditional production and trade values, but rather consists of value-added services with an indirect, albeit substantial, financial spin-off due to recommendations and referrals. The basic mechanisms in a recommender economy are, according to Kassel et al. (2007):

- cross sale – users who bought A also bought B
- up sale – if you buy A and B together ...

Where the principles underlying this virtual economy have definitely proven their value in the first (ordinary) life economy, what are the chances that these principles are also valid in Second Life?
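The cross-sale mechanism ("users who bought A also bought B") reduces to counting how often items co-occur across purchase baskets. A toy Python sketch with invented in-world purchase data:

```python
from collections import Counter
from itertools import combinations

# Invented purchase baskets, one set of items per user.
baskets = [
    {"guidebook", "map", "teleport-pass"},
    {"guidebook", "map"},
    {"map", "teleport-pass"},
]

# Count how often each pair of items was bought together.
co_bought = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1

def also_bought(item):
    """Cross-sale: items most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common()]

print(also_bought("guidebook"))  # ['map', 'teleport-pass']
```

Real recommender economies add normalization and up-sale bundling on top, but the co-occurrence count is the core signal.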
According to the media companies selling their services to assist the creation of presence in Second Life, there are plenty of new media opportunities in the online world Second Life22, to a possibly even greater extent, as they boldly claim, than in what they call the predecessor of Second Life, the World Wide Web. To assess the role web services, including semantic web services, may play in Second Life, it seems worthwhile to investigate to what extent web services can be deployed to deliver more traditional media, such as digital TV. To support the business model of digital TV, which in outline may be summarized as providing additional information, game playing and video on demand, with an appropriate payment scheme, Daskalova & Atanasova (2007) argue in favor of the use of a SOA (Service Oriented Architecture), to allow for a unified, well-maintainable approach to managing collections of audio-visual objects. Such services would include meta-data annotation, water-marking for intellectual property protection, and search facilities for the end-user. Atanasova et al. (2007) even propose to wrap each individual audio-visual object in a (semantic) web service and to provide compound services based on semantic web technologies such as OWL-S (OWL-based Web Service Ontology) and WSMO (Web Service Modelling Ontology), using semi-automatic methods together with appropriate semantic web tools for the description and composition of such services. Obviously, there is a great technical challenge in creating such self-adjusting service environments. With respect to the application of web services in Second Life (a far more modest aim), it seems that the business model associated with the delivery of media items through digital TV channels may nevertheless profitably be used in Second Life, and the idea of wrapping media items in web services also has, in some way, an immediate appeal.
Leaving the economic issues aside, we will briefly characterize two applications that we envisage realizing within the virtual campus of VU @ Second Life. The first application is a game meant to give information about climate change. More specifically, as a submission for the Dutch Contest for the Communication of Science, we have developed, in close collaboration with the VU Climate Center, the Clima Futura game, which invites players to manage resources related to climate change over a period of 100 years, and also allows the player to play minigames to gain access to audio-visual information about scientific developments as well as political issues related to the inconvenient truth of climate change. --- 14 nwn.blogs.com/nwn/2006/10/really_simple_s.html 15 bloghud.com/ 16 blogs.electricsheepcompany.com/chris/?p=206 17 www.varasoftware.com/products/wirecast/ 18 developer.apple.com/opensource/server/streaming/ 19 nwn.blogs.com/nwn/2007/03/post_1.html 20 twitter.com/ 21 devku.org/docs 22 www.youtube.com/watch?v=8NOHJRJB9uyI 23 www.daml.org/services/owl-s/ 24 www.wsmo.org/ 25 composing-the-semantic-web.blogspot.com/ Although we do not intend to realize the whole game in Second Life (we actually use Flash to reach as wide an audience as possible), as a pilot, parts of the game could fruitfully be realized in the VU virtual campus, in particular the search for knowledge, that is, looking for an expert in a particular area of (climate-related) research. A similar quest was implemented in our Half Life 2 based game VULife, Eliens & Blikharie (2006), where the player had to visit nine information spots, which resulted in displaying, in a nine-square HUD matrix, the location of a hidden treasure, which then actually was the power to use arms.
Technical issues in realizing Clima Futura in Second Life are support for ranking, as well as meta-information with respect to locations where relevant information can be found, which may be realized with the techniques indicated in section 5. Another issue is giving flexible access to video material related to specific topics in climate change. Another application we wish to discuss is PANORAMA, already mentioned in section 5. In developing PANORAMA we proceeded from the assumption that people somehow like to have a feel for what is going on in the workspace, although not in any detail, and also like to see items of personal interest, including (their own) birth announcements and sports trophies. The original version of PANORAMA was developed in DirectX, and was meant to be displayed on a large screen, with a static viewpoint on a dynamically changing scene, reflecting the activity in the workspace and the self-reflective contributions in an artful way. Moving from a central large screen in first life to a visualisation embedded in Second Life, which is moreover subject to first-person viewpoint dynamics, is quite a challenge. The simplest solution would be to project the large PANORAMA screen onto a display object in Second Life using live video streams, but this would leave that challenge unanswered. Embedding PANORAMA in Second Life would allow us to observe, in more detail than in a previous user study, the behavior of users, that is, to be more precise, their proximity to particular objects of interest, the duration of their presence, and, using the mechanisms of recommendation, their interest in related items. Technically, such monitoring can be achieved using the sensors and listeners described in section 5. To make sense of such data, however, we need some model that allows for an interpretation that is more meaningful than the mere registration of presence.
Models of Behavior – Tracking User Interaction Our virtual campus in Second Life already allows for performing simple statistics, by recording the presence of users at particular spots in the virtual world, using sensors and listeners installed in 3D objects. Since the LSL script-based counters appear to be rather volatile, tracking data are sent to a web server and stored in a database. This mechanism can easily be extended to a more encompassing form of user tracking, recording for a particular user not only presence at particular spots, but also the duration of presence, the actual proximity to objects, and the proximity to other users, as well as explicitly spoken comments or actions such as the donation of (Linden) money. In Eliens & Wang (2007), we introduced the notion of a serial recommender, which generates guided tours in 3D digital dossiers, Eliens et al. (2007), based on (expert) user tracking. To incrementally refine such tours for individual users, we used a behavioral model originally developed in Oard et al. (2006) to cope with privacy issues in distributed recommender systems. In this section we will briefly sketch this model and discuss how it applies to generating annotations for objects in Second Life, and how these annotations may be used to provide recommendations to other users. The model distinguishes between: \[ U = \text{user}, I = \text{item}, B = \text{behavior}, R = \text{recommendation}, F = \text{feature} \] Using these notions, we may characterize observations about users’ behaviors, and relate these behaviors to features of the objects involved. Based on observed behavior we may then, taking the relevant feature space into account, indicate ratings for both features and objects. For example, observing that a user spends a particular amount of time and gives a rating \( r \), we may apply this rating to all features of the item, which will indirectly influence the rating of items with similar features.
\[ B = [\text{time} = 20\text{sec}, \text{rating} = r] \] \[ F = [\text{proximity} = 0.2, \text{type} = \text{postcard}, \text{topic} = \text{hawaii}] \] \[ R = [\text{object} = \text{postcard}, \text{topic} = \text{hawaii}, \text{rating} = r] \] Following Oard et al. (2006), we may in a somewhat loose way define the mapping \( U \times I \times B \rightarrow U \times F \), which provides recommendations from observations, either directly by applying the \( U \times F \rightarrow F \times F \) mapping, or indirectly by a mapping \( U \times F \rightarrow U \times U \rightarrow F \times F \), which uses an intermediate matrix (or product space) \( U \times U \) indicating the (preference) relation between users or user groups. As a technical aside: taken as a matrix, we may fill the entries with distance or weight values. Otherwise, when we use product spaces, we need to provide an additional mapping to the range \([0, 1]\), where distance can be taken as the dual of weight, that is, \(d = 1 - w\). As indicated above, the feature space may include attributes of objects as well as properties of the user (avatar) and the behavior related to particular objects or (user) avatars. At this stage it is not clear whether tracking should be done in an interval-based manner, at specific times, or triggered by proximity to particular objects. The latter solution seems to be the most practical and effective, and does allow for recording the proximity to other users. A special case arises when users decide to engage in a guided tour that takes them along a number of the hotspots in the virtual world, based on some criterion, by pre-programmed tele-hubs. In that case we may apply the personalisation strategy introduced in Decision, which includes a notion of distance between objects, as an expression of relative interest.
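As a toy sketch of the mapping above (all numbers, tags, and type names invented for illustration), a rating observed for one item can be spread over that item's features, after which another item is scored by feature overlap, the indirect route by which "items with similar features" gain rating:

```c
#include <assert.h>
#include <string.h>

#define NFEAT 3

/* An item described by a few feature tags, e.g. {"postcard", "hawaii"}. */
typedef struct {
    const char *features[NFEAT];
    int nfeatures;
} item_t;

/* Per-user feature-score table, built from observed behaviour
 * (the U x I x B -> U x F step of the model). */
typedef struct {
    const char *feature[16];
    double score[16];
    int n;
} profile_t;

/* Apply an observed rating to every feature of the rated item. */
static void rate_item(profile_t *p, const item_t *it, double rating)
{
    for (int i = 0; i < it->nfeatures; i++) {
        int j;
        for (j = 0; j < p->n; j++)
            if (strcmp(p->feature[j], it->features[i]) == 0)
                break;
        if (j == p->n) {           /* first time we see this feature */
            p->feature[p->n] = it->features[i];
            p->score[p->n++] = 0.0;
        }
        p->score[j] += rating;
    }
}

/* Score a candidate item by summing the scores of its features
 * (the indirect U x F -> item recommendation path). */
static double score_item(const profile_t *p, const item_t *it)
{
    double s = 0.0;
    for (int i = 0; i < it->nfeatures; i++)
        for (int j = 0; j < p->n; j++)
            if (strcmp(p->feature[j], it->features[i]) == 0)
                s += p->score[j];
    return s;
}
```

With this sketch, rating a hawaii postcard once makes any other item tagged "hawaii" score higher than an unrelated item, without the two items ever being compared directly.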
In a more formal way, for a serial recommender that provides the user with a sequence of items \(\ldots, s_{n-1}, s_n, \ldots\), and for \(s_n\) possibly alternatives \(a_1, a_2, \ldots\), we may adapt the (implied) preference of the user, when the user chooses to select alternative \(a_k\) instead of accepting \(s_n\) as provided by the recommender, by adjusting the weight of the items involved, or features thereof, which amounts, in other words, to an additional constraint on the distance measure for the objects or features involved. Put differently, when we denote by \(s_{n-1} \mapsto s_n/[a_1, a_2, \ldots]\) the presentation of item \(s_n\) with as possible alternatives \(a_1, a_2, \ldots\), we know that \(d(s_{n-1}, a_k)<d(s_{n-1}, s_n)\) for some \(k\), if the user chooses \(a_k\). For example, from observation \(B_n\) we can deduce \(R_n\): \[ B_n = \{ \text{time = 20sec, forward = } a_k \} \quad F_n = \{ \text{proximity = 0.2, type = postcard, topic = hawaii} \} \\ R_n = \{ d(s_n, a_k)<d(s_n, s_{n+1}) \} \] leaving the feature vector \(F_n\) unaffected, unless other information is available indicating like or dislike due to specific properties. Together, the collection of recommendations, or more properly revisions \(R_n\) over a sequence \(S\), can be solved as a system of linear equations to adapt or revise the (original) ratings. This does not, of course, explain how ratings come into existence, what features are considered relevant, or how guided tours should be generated. However, as we have demonstrated in Ballegooij & Eliens (2001), based on a rudimentary tagging scheme, we may, in response to a query, generate a guided tour taking the topographical constraints of the virtual world into account, for example to make a user familiar with the (virtual replica of the) actual workspace.
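As a toy numeric instance of this revision step (all numbers invented for illustration), suppose the current distance estimates are

\[
d(s_{n-1}, s_n) = 0.3, \qquad d(s_{n-1}, a_1) = 0.5,
\]

and the user selects \(a_1\) when offered \(s_{n-1} \mapsto s_n/[a_1]\). The observation contributes the constraint \(d(s_{n-1}, a_1) < d(s_{n-1}, s_n)\), which the current values violate. A revision that moves both values minimally, e.g.

\[
d'(s_{n-1}, a_1) = 0.35, \qquad d'(s_{n-1}, s_n) = 0.45,
\]

satisfies the constraint, and in terms of the dual weights \(w = 1 - d\) gives \(w'(a_1) = 0.65 > w'(s_n) = 0.55\), so subsequent tours from \(s_{n-1}\) will prefer \(a_1\).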
It seems that this approach can be generalized to one that uses alternative descriptive methods, as long as they support feature-based information retrieval.\(^{27}\) Obviously, both user tracking and recommendations may be fruitfully used in the realization of serious (corporate) games, as well as to support exploratory activity in non-serious games and (corporate) awareness systems. ### Conclusions Based on an overview of research and development(s) in web technologies and our assessment of the technical facilities offered by the Second Life platform, we may conclude that there are ample opportunities to incorporate web services and mashups in Second Life. Our intended applications, moreover, covering (corporate) game playing as well as a system for promoting (corporate) social awareness, indicate that there are clear motivations to deploy web services in Second Life, both for tracking users’ behavior and for providing additional information based on recommendations that may be derived from taking record of users’ behavior patterns. Although we have sketched a first behavioral model that allows us to assign meaning to behavior and interaction, it is clear that this model must be further refined and that we need to gain experience in developing mashups in virtual space to arrive at effective and meaningful compositions of web services, supporting the realization of (serious) games, in Second Life. ### Acknowledgement(s) We (again) thank the students involved, Viola van Alphen (FEWEB), Tom Bestebreurtje (FEW), Elbert-Jan Hennipman (FEW), and last but not least Bjorn de Boer (FSW), for their enthusiasm, creativity, and of course their hard work, which led to an amazing result. ### References Daskalova H. and Atanasova T. (2007), Web Services and Tools for their Composition considering Aspects of Digital TV Workflow, Workshop on Digital Television, Proc. EUROMEDIA 2007, Delft, Netherlands --- \(^{27}\) www.cs.vu.nl/~eliens/research/rif.html
Vectorization

Aaron Birkland, Consultant, Cornell CAC
With contributions from TACC training materials.
Parallel Computing on Stampede, October 30, 2013.

What is Vectorization? - **Hardware Perspective:** Specialized instructions, registers, or functional units to allow in-core parallelism for operations on arrays (vectors) of data. - **Compiler Perspective:** Determine how and when it is possible to express computations in terms of vector instructions. - **User Perspective:** Determine how to write code in a manner that allows the compiler to deduce that vectorization is possible. Vectorization: Hardware - Goal: parallelize computations over vector arrays - SIMD: Single Instruction Multiple Data - Many instances of a single operation executing simultaneously - Late ‘90s – present, commodity CPUs (x86, x64, PowerPC, etc.) - Small vectors, few cycles per instruction - Newer CPUs (Sandy Bridge) can pipeline some SIMD instructions as well – best of both worlds. Vectorization via SIMD: Motivation • CPU speeds reach a plateau – Power limitations! – Many “slow” transistors more efficient than fewer “fast” transistors • Process improvements make physical space cheap • Moore’s law, 2x every 18-24 months • Easy to add more “stuff” • One solution: More cores – First dual core Intel CPUs appear in 2005 – Increasing in number rapidly (e.g. 8 in Stampede, 60+ on MIC) • Another solution: More FPU units per core – vector operations – First appeared on a Pentium with MMX in 1996 – Increasing in vector width rapidly (e.g.
512-bit [8 doubles]) on MIC Vectorization via SIMD: History <table> <thead> <tr> <th>Year</th> <th>Registers</th> <th>Instruction Set</th> </tr> </thead> <tbody> <tr> <td>~1997</td> <td>80-bit</td> <td>MMX Integer SIMD (in x87 registers)</td> </tr> <tr> <td>~1999</td> <td>128-bit</td> <td>SSE1 SP FP SIMD (xMM0-8)</td> </tr> <tr> <td>~2001</td> <td>128-bit</td> <td>SSE2 DP FP SIMD (xMM0-8)</td> </tr> <tr> <td>---</td> <td>128-bit</td> <td>SSEx</td> </tr> <tr> <td>~2010</td> <td>256-bit</td> <td>AVX DP FP SIMD (yMM0-16)</td> </tr> <tr> <td>~2012</td> <td>512-bit</td> <td>IMCI (MIC)</td> </tr> <tr> <td>~2014</td> <td>512-bit</td> <td>AVX-512 (Xeon)</td> </tr> </tbody> </table> Vector Registers (lanes per register for floating point) <table> <thead> <tr> <th>Registers</th> <th>Width</th> <th>64-bit (double)</th> <th>32-bit (single)</th> </tr> </thead> <tbody> <tr> <td>xmm (SSE/AVX-128)</td> <td>128-bit</td> <td>2</td> <td>4</td> </tr> <tr> <td>ymm (AVX-256)</td> <td>256-bit</td> <td>4</td> <td>8</td> </tr> <tr> <td>zmm (MIC-512)</td> <td>512-bit</td> <td>8</td> <td>16</td> </tr> </tbody> </table> Speed - True SIMD parallelism – typically 1 cycle per floating point computation - Exception: Slow operations like division, square roots - Speedup (compared to no vector) proportional to vector width - 128-bit SSE – 2x double, 4x single - 256-bit AVX – 4x double, 8x single - 512-bit MIC – 8x double, 16x single - Hypothetical AVX example: 8 cores/CPU * 4 doubles/vector * 2.0 GHz = 64 Gflops/CPU DP Speed - Clearly memory bandwidth is a potential issue; we’ll explore this later - Poor cache utilization, alignment, memory latency all detract from ideal - SIMD is parallel, so Amdahl’s law is in effect! - Serial/scalar portions of code or CPU are limiting factors - Theoretical speedup is only a ceiling User Perspective Let’s take a step back – how can we leverage this power? • Program in assembly – Ultimate performance potential, but only for the brave • Program in intrinsics – Step up from assembly, useful but risky • Let the compiler figure it out – Relatively “easy” for user, “challenging” for compiler – Less expressive languages like C make compiler’s job more difficult – Compiler may need some hand holding. • Link to an optimized library that does the actual work – e.g.
Intel MKL, written by people who know all the tricks. – Get benefits “for free” when running on supported platform Vector-aware coding - Know what makes code vectorizable at all - “for” loops (in C) or “do” loops (in Fortran) that meet certain constraints - Know where vectorization will help - Evaluate compiler output - Is it really vectorizing where you think it should? - Evaluate execution performance - Compare to theoretical speedup - Know data access patterns to maximize efficiency - Implement fixes: directives, compilation flags, and code changes - Remove constructs that make vectorization impossible/impractical - Encourage/force vectorization when the compiler doesn’t, but should - Better memory access patterns Writing Vector Loops - Basic requirements of vectorizable loops: - Countable at runtime - Number of loop iterations is known before the loop executes - No conditional termination (break statements) - Have single control flow - No switch statements - ‘if’ statements are allowable when they can be implemented as masked assignments - Must be the innermost loop if nested - Compiler may reverse loop order as an optimization! - No function calls - Basic math is allowed: pow(), sqrt(), sin(), etc. - Some inline functions are allowed Conceptualizing Compiler Vectorization - Think of vectorization in terms of loop unrolling - Unroll N iterations of the loop, where N elements of the data array fit into a vector register ```c for (i=0; i<N; i++) { a[i] = b[i] + c[i]; } ``` ```c for (i=0; i<N; i+=4) { a[i+0] = b[i+0] + c[i+0]; a[i+1] = b[i+1] + c[i+1]; a[i+2] = b[i+2] + c[i+2]; a[i+3] = b[i+3] + c[i+3]; } ``` Load b(i..i+3), Load c(i..i+3), Operate b+c->a, Store a Compiling Vector loops • Intel Compiler: – Vectorization starts at optimization level \(-O2\) – Will default to SSE instructions and 128-bit vector width • use \(-xAVX\) to use AVX and 256-bit vector width.
Only runs on newer CPUs – Can embed SSE and AVX instructions in the same binary with \(-axAVX\) • Will run AVX on CPUs with AVX support, SSE otherwise – \(-\text{vec-report}=<n>\) for a vectorization report • GCC – Vectorization is disabled by default, regardless of optimization level – Need the \(-f\text{tree-vectorize}\) flag, combined with optimization > \(-O2\) – SSE by default, \(-mavx\ \-march=corei7-avx\) for AVX – \(-f\text{tree-vectorizer-verbose}\) for a vectorization report Lab: Simple Vectorization In this lab you will • Use the Intel compiler to create vectorized and non-vectorized code • Compare the performance of vectorized vs non-vectorized code • Compare performance with different vector widths. • Take an initial look at compiler vectorization reports • Bonus: What is the vector efficiency (% vector instructions) of the test code? Using Amdahl’s law \[ P = \frac{\left(\frac{1}{S} - 1\right)}{\left(\frac{1}{n} - 1\right)} \] where P is % parallel (e.g. % vectorized), S is speedup, n is vector length in number of floats/doubles ## Lab: Simple Vectorization <table> <thead> <tr> <th>Compile Options</th> <th>Time</th> <th>Speedup</th> </tr> </thead> <tbody> <tr> <td>-no-vec –O3</td> <td>.67s</td> <td>1x</td> </tr> <tr> <td>-O3</td> <td>.37s</td> <td>1.8x</td> </tr> <tr> <td>-O3 -xAVX</td> <td>.25s</td> <td>2.7x</td> </tr> </tbody> </table> **Host CPU** <table> <thead> <tr> <th>Compile Options</th> <th>Time</th> <th>Speedup</th> </tr> </thead> <tbody> <tr> <td>-no-vec -mmic -O3</td> <td>13.22s</td> <td>1x</td> </tr> <tr> <td>-mmic -O3</td> <td>2.78s</td> <td>4.8x</td> </tr> </tbody> </table> **MIC** Notes: - One MIC thread can only use 50% of a core - Amdahl’s law for 90% vectorized predicts (1x, 1.8x, 3x, 4.7x) Challenge: Loop Dependencies - Vectorization changes the order of computation compared to the sequential case - Compiler must be able to prove that vectorization will produce a correct result.
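Before turning to dependencies, a minimal self-contained sketch (function name and fill values invented) of a loop that meets all the requirements listed above: countable, single control flow, innermost, no function calls, and no cross-iteration dependency.

```c
#include <stddef.h>

/* A textbook vectorizable loop: iterations are fully independent,
 * so the compiler may unroll it by the vector width (2, 4, or 8
 * doubles) without changing the result. */
void add_arrays(const double *b, const double *c, double *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}
```

Compiled with the flags from the slides (e.g. `icc -O3 -vec-report=2` or `gcc -O3 -ftree-vectorize -ftree-vectorizer-verbose=2`), the report should show this loop as vectorized; the result is identical for any vector width, which is exactly the property the compiler must prove.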
- Need to consider independence of *unrolled* loop operations – depends on vector width - Compiler performs dependency analysis Loop Dependencies: Read After Write Consider the loop: \[ \begin{align*} \text{for}( & i=1; \ i<N; \ i++) \\ & a[i] = a[i-1] + b[i]; \end{align*} \] with \[ \begin{align*} \text{a} &= \{0,1,2,3,4\} \\ \text{b} &= \{5,6,7,8,9\} \end{align*} \] Applying each operation sequentially: \[ \begin{align*} \text{a}[1] &= \text{a}[0] + \text{b}[1] \quad \rightarrow \quad \text{a}[1] = 0 + 6 \quad \rightarrow \quad \text{a}[1] = 6 \\ \text{a}[2] &= \text{a}[1] + \text{b}[2] \quad \rightarrow \quad \text{a}[2] = 6 + 7 \quad \rightarrow \quad \text{a}[2] = 13 \\ \text{a}[3] &= \text{a}[2] + \text{b}[3] \quad \rightarrow \quad \text{a}[3] = 13 + 8 \quad \rightarrow \quad \text{a}[3] = 21 \\ \text{a}[4] &= \text{a}[3] + \text{b}[4] \quad \rightarrow \quad \text{a}[4] = 21 + 9 \quad \rightarrow \quad \text{a}[4] = 30 \end{align*} \] \[ \text{a} = \{0, 6, 13, 21, 30\} \] Loop Dependencies: Read After Write Now let's try vector operations: a = {0,1,2,3,4} b = {5,6,7,8,9} Applying vector operations, i={1,2,3,4}: \begin{align*} a[i-1] &= \{0,1,2,3\} \quad \text{(load)} \\ b[i] &= \{6,7,8,9\} \quad \text{(load)} \\ \{0,1,2,3\} + \{6,7,8,9\} &= \{6, 8, 10, 12\} \quad \text{(operate)} \\ a[i] &= \{6, 8, 10, 12\} \quad \text{(store)} \end{align*} \begin{align*} a &= \{0,
6, 8, 10, 12\} \neq \{0, 6, 13, 21, 30\} \quad \text{NOT VECTORIZABLE} \end{align*} \text{for} (i=1; i<N; i++) \begin{align*} a[i] &= a[i-1] + b[i]; \end{align*} Loop Dependencies: Write after Read Consider the loop: \[ \text{for}( i=0; \ i<N-1; \ i++) \quad a[i] = a[i+1] + b[i]; \] with \[ a = \{0,1,2,3,4\} \quad b = \{5,6,7,8,9\} \] Applying each operation sequentially: \[ \begin{align*} a[0] &= a[1] + b[0] \quad \rightarrow \quad a[0] = 1 + 5 \quad \rightarrow \quad a[0] = 6 \\ a[1] &= a[2] + b[1] \quad \rightarrow \quad a[1] = 2 + 6 \quad \rightarrow \quad a[1] = 8 \\ a[2] &= a[3] + b[2] \quad \rightarrow \quad a[2] = 3 + 7 \quad \rightarrow \quad a[2] = 10 \\ a[3] &= a[4] + b[3] \quad \rightarrow \quad a[3] = 4 + 8 \quad \rightarrow \quad a[3] = 12 \end{align*} \] \[ a = \{6, 8, 10, 12, 4\} \] Loop Dependencies: Write after Read Now let’s try vector operations: \[ a = \{0,1,2,3,4\} \] \[ b = \{5,6,7,8,9\} \] Applying vector operations, \( i=\{0,1,2,3\} \): \[ a[i+1] = \{1,2,3,4\} \text{ (load)} \] \[ b[i] = \{5,6,7,8\} \text{ (load)} \] \[ \{1,2,3,4\} + \{5,6,7,8\} = \{6, 8, 10, 12\} \text{ (operate)} \] \[ a[i] = \{6, 8, 10, 12\} \text{ (store)} \] \[ a = \{6, 8, 10, 12, 4\} = \{6, 8, 10, 12, 4\} \text{ VECTORIZABLE} \] \[ \text{for}(\ i=0; \ i<N-1; \ i++) \] \[ \quad a[i] = a[i+1] + b[i]; \] Loop Dependencies - **Read After Write** - Also called “flow” dependency - Variable written first, then read - Not vectorizable ```c for( i=1; i<N; i++ ) a[i] = a[i-1] + b[i]; ``` - **Write after Read** - Also called “anti” dependency - Variable read first, then written - Vectorizable ```c for( i=0; i<N-1; i++ ) a[i] = a[i+1] + b[i]; ``` Loop Dependencies • Read after Read – Not really a dependency – Vectorizable \[ \text{for } (i=0; i<N; i++) \\ \quad a[i] = b[i\%2] + c[i]; \] • Write after Write – a.k.a. “output” dependency – Variable written, then re-written – Not vectorizable \[ \text{for } (i=0; i<N; i++) \\ \quad a[i\%2] = b[i] + c[i]; \] Loop Dependencies: Aliasing • In C, pointers can hide data dependencies! – Memory regions they point to may overlap • Is this safe?: ```c void compute(double *a, double *b, double *c) { for (i=1; i<N; i++) { a[i]=b[i]+c[i]; } } ``` – ..
Not if we give it the arguments `compute(a+1, a, c);` • Effectively, b is really a[i-1] → Read after Write dependency • Compilers can usually cope by adding runtime overlap checks (overhead) Vectorization Reports • Shows which loops are or are not vectorized, and why • Intel: -vec-report=<n> – 0: None – 1: Lists vectorized loops – 2: Lists loops not vectorized, with explanation – 3: Outputs additional dependency information – 4: Lists loops not vectorized, without explanation – 5: Lists loops not vectorized, with dependency information • Reports are essential for determining where the compiler finds a dependency • The compiler is conservative; you need to go back and verify that there really is a dependency. Loop Dependencies: Vectorization Hints - Compiler must prove there is no data dependency that will affect correctness of the result - Sometimes, this is impossible - e.g. unknown index offset, complicated use of pointers - Intel compiler solution: IVDEP (Ignore Vector DEPendencies) hint. - Tells the compiler “Assume there are no dependencies” ```fortran subroutine vec1(s1,M,N,x) ... !DEC$ IVDEP do i = 1,N x(i) = x(i+M) + s1 end do ``` ```c void vec1(double s1, int M, int N, double *x) { ... #pragma ivdep for(i=0;i<N;i++) x[i]=x[i+M]+s1; } ``` Compiler hints affecting vectorization - For Intel compiler only - Affect whether a loop is vectorized or not - `#pragma ivdep` - Assume no dependencies. - Compiler may vectorize loops that it would otherwise think are not vectorizable - `#pragma vector always` - Always vectorize if technically possible to do so. - Overrides compiler’s decision to not vectorize based upon cost - `#pragma novector` - Do not vectorize Loop Dependencies: Language Constructs - C99 introduced the ‘restrict’ keyword to the language - Instructs the compiler to assume addresses will not overlap, ever ```c void compute(double * restrict a, double * restrict b, double * restrict c) { for (i=1; i<N; i++) { a[i]=b[i]+c[i]; } } ``` - May need compiler flags to use, e.g.
`-restrict, -std=c99` Cache and Alignment \[ \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ \vdots \\ z_n \end{bmatrix} = \alpha \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} \] (in AVX registers: z in ymm2, x in ymm0, y in ymm1) - Optimal vectorization requires concerns beyond the SIMD unit! - Registers: Alignment of data on 128, 256 bit boundaries - Cache: Cache is fast, memory is slow - Memory: Sequential access much faster than random/strided Strided access - Fastest usage pattern is “stride 1”: perfectly sequential - Best performance when CPU can load L1 cache from memory in bulk, sequential manner - Stride 1 constructs: - Iterating structs of arrays vs. arrays of structs - Multi-dimensional arrays: - Fortran: stride 1 on “inner” dimension - C/C++: stride 1 on “outer” dimension ```c for(j=0; j<n; j++) for(i=0; i<n; i++) a[j][i]=b[j][i]*s; ``` Strided access - Striding through memory reduces effective memory bandwidth! - For DP, roughly 1-stride/8 - Worse than non-aligned access. Lots of memory operations to populate a cache line, vector register ```fortran do i = 1, 4000000*istride, istride a(i) = b(i) + c(i) * sfactor enddo ``` Diagnosing Cache and Memory deficiencies • Obviously bad stride patterns may prevent vectorization at all: – In vector report: "vectorization possible but seems inefficient" • Otherwise, may be difficult to detect – No obvious assembly instructions, other than a proliferation of loads and stores – Vectorization performance farther away from ideal than expected • Profiling tools can help – PerfExpert (available at TACC) – Visualize CPU cycle waste spent in data access (L1 cache misses, TLB misses, etc.) Conclusion • Vectorization occurs in tight loops “automatically” by the compiler. • Need to know where vectorization should occur, and verify that the compiler is doing that.
• Need to know if a compiler’s failure to vectorize is legitimate – Fix code if so, use #pragma if not. • Need to be aware of caching and data access issues – Very fast vector units need to be well fed.
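Picking up the aliasing discussion: a sketch (signature adapted with an explicit length parameter, which the slides leave implicit as a global N) of how the C99 `restrict` qualifier removes the overlap ambiguity without resorting to `#pragma ivdep`:

```c
#include <stddef.h>

/* 'restrict' promises the compiler that a, b and c never alias,
 * so no runtime overlap checks are needed and the loop can be
 * vectorized unconditionally. */
void compute(double *restrict a, const double *restrict b,
             const double *restrict c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}
```

Calling this with overlapping pointers, as in the aliasing example above, is now undefined behavior: the promise, and the responsibility, moves from the compiler to the programmer. Compile as C99, e.g. `gcc -std=c99 -O3`.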
olmocr_science_pdfs
2024-11-24
2024-11-24
2d03f96e7002f7d511ee567929767a415997ac15
# Algorithm Analysis with Big Oh

*Data Structures and Design with Java and JUnit*, Chapter 12. ©Rick Mercer

## Algorithm Analysis: Objectives

- Analyze the efficiency of algorithms
- Analyze a few classic algorithms: Linear Search, Binary Search, Selection Sort
- Know the differences between $O(1)$, $O(n)$, $O(\log n)$, and $O(n^2)$
- Visualize runtime differences with experiments

## Algorithms, continued

- Computer scientists focus on questions such as:
  - How fast do algorithms run?
  - How much memory does the process require?
- Example applications:
  - Making the Internet run faster: Pink-Degemark's routing algorithms
  - Gene Myers determined the sequence of the human genome using his whole-genome shotgun algorithm

## Analysis of Algorithms

- We have ways to compare algorithms
- Generally, the larger the problem, the longer it takes the algorithm to complete
  - Sorting 100,000 elements can take much more time than sorting 1,000 elements, and more than 10 times longer
  - The variable $n$ suggests the "number of things"
- If an algorithm requires $0.025n^2 + 0.012n + 0.0005$ seconds, just plug in a value for $n$

## A Computational Model

To summarize algorithm runtimes, we can use a computer-independent model:
- Instructions are executed sequentially
- Count all assignments, comparisons, and increments
- There is infinite memory
- Every simple instruction takes one unit of time

## Simple Instructions

- Count the simple instructions: assignments have a cost of 1, and comparisons have a cost of 1
- Let's count all parts of the loop `for (int j = 0; j < n; j++)`:
  - `j = 0` has a cost of 1, `j < n` executes $n + 1$ times, and `j++` executes $n$ times, for a total cost of $2n + 2$
- Each statement in the repeated part of a loop has a cost equal to the number of iterations

## Examples

```java
sum = 0;                         // 1
sum = sum + next;                // 1
                                 // Total cost: 2

for (int i = 1; i <= n; i++)     // 1 + (n+1) + n = 2n + 2
    sum += i;                    // n
                                 // Total cost: 3n + 2

k = 0;                           // 1
for (int i = 0; i < n; i++)      // 2n + 2
    for (int j = 0; j < n; j++)  // n(2n + 2) = 2n² + 2n
        k++;                     // n²
                                 // Total cost: 3n² + 4n + 3
```

## Total Cost of Sequential Search

```java
for (int index = 0; index < n; index++)  // 2n + 2
    if (searchID.equals(names[index]))   // n
        return index;                    // 0 or 1
return -1; // if not found               // 0 or 1
```

Total cost = $3n + 3$

## Different Cases

- The total cost of sequential search is $3n + 3$
- But is it always exactly $3n + 3$ instructions?
  - The last assignment does not always execute
  - But does one assignment really matter?
- How many times will the loop actually execute? That depends:
  - If searchID is found at index 0: 1 iteration, the best case
  - If searchID is found at index $n - 1$: $n$ iterations, the worst case

## Typical Case of Sequential (Linear) Search

- The average describes the more typical case
- First, let the entire cost be simplified to $n$
- Assume the target has a 50/50 chance of being in the array: the worst case of $n$ comparisons occurs half the time
- If the target is in the array, assume it is as likely to be at one index as another, so on average $n/2$ comparisons are made

\[
\frac{1}{2}n + \frac{1}{2} \times \frac{n}{2} = \frac{n}{2} + \frac{n}{4} = \frac{3}{4}n
\]

Half the time it is $n$ comparisons; the other half it is $n/2$ comparisons. So the typical case is $\frac{3}{4}n$ comparisons.

## The Essence of Linear Search

Plot the function: as $n$ increases, runtime forms a line. This is why sequential search is also called linear search.

## Linear Search Continued

- The cost $3n + 3$ is a polynomial
- The fastest-growing term is the high-order term
- The high-order term (which could be $n^2$ or $n^3$) represents the most important part of the analysis function
- We refer to the rate of growth as the order of magnitude

## Rate of Growth

Imagine two functions:

\[ f(n) = 100n \qquad g(n) = n^2 + n \]

- When $n$ is small, which is the bigger function?
- When $n$ is big, which is the bigger function?
- We can say: $g(n)$ grows faster than $f(n)$

## Rate of Growth, Another View

Function growth, and the weight of each term as a percentage of all terms, as $n$ increases for

$$f(n) = n^2 + 80n + 500$$

<table>
<thead>
<tr> <th>$n$</th> <th>$f(n)$</th> <th>$n^2$</th> <th>$80n$</th> <th>500</th> </tr>
</thead>
<tbody>
<tr> <td>10</td> <td>1,400</td> <td>100 (7%)</td> <td>800 (57%)</td> <td>500 (36%)</td> </tr>
<tr> <td>100</td> <td>18,500</td> <td>10,000 (54%)</td> <td>8,000 (43%)</td> <td>500 (3%)</td> </tr>
<tr> <td>1000</td> <td>1,080,500</td> <td>1,000,000 (93%)</td> <td>80,000 (7%)</td> <td>500 (0%)</td> </tr>
<tr> <td>10000</td> <td>100,800,500</td> <td>100,000,000 (99%)</td> <td>800,000 (1%)</td> <td>500 (0%)</td> </tr>
</tbody>
</table>

Conclusion: keep the highest-order term with its coefficient dropped, and drop all lower-order terms.

## Definition

- The *asymptotic growth* of an algorithm describes its relative growth as $n$ gets very large
- With speed and memory capacity doubling every two years, the asymptotic efficiency *where $n$ is very large* is the thing to consider
- There are many sorting algorithms that are "on the order of" $n^2$ (roughly $n \times n$ instructions are executed)
- Other algorithms are "on the order of" $n \times \log_2 n$, and this is a huge difference when $n$ is very large

## Constant Function

- Some functions don't grow with $n$
- If the sorting program initializes a few variables first, the time required does not change when $n$ increases
- These statements run in *constant time*, e.g. constructing an empty List with capacity 20
- The amount of time can be described as a constant function $f(n) = k$, where $k$ is a constant
- It takes ~0.0003 seconds no matter the size of $n$

## Big O

- Linear search is "on the order of $n$", written $O(n)$, which describes an upper bound on the number of operations
- This is called big $O$ notation
- Orders of magnitude:
  - $O(1)$ constant (the size of $n$ has no effect)
  - $O(n)$ linear
  - $O(\log n)$ logarithmic
  - $O(n \log n)$ no other way to say it (John K's license plate)
  - $O(n^2)$ quadratic
  - $O(n^3)$ cubic
  - $O(2^n)$ exponential

## Binary Search

- Binary search can be a more efficient algorithm for searching a sorted array:
  - If the element in the middle is the target, report that the target was found and the search is done
  - If the key is smaller, search the part of the array to the left
  - Otherwise, search the part of the array to the right
- This process repeats until the target is found or there is nothing left to search
- Each comparison narrows the search by half

<table>
<thead>
<tr> <th>Data</th> <th>reference</th> <th>loop 1</th> <th>loop 2</th> </tr>
</thead>
<tbody>
<tr> <td>Bob</td> <td>a[0]</td> <td>← left</td> <td></td> </tr>
<tr> <td>Carl</td> <td>a[1]</td> <td></td> <td></td> </tr>
<tr> <td>Debbie</td> <td>a[2]</td> <td></td> <td></td> </tr>
<tr> <td>Evan</td> <td>a[3]</td> <td></td> <td></td> </tr>
<tr> <td>Froggie</td> <td>a[4]</td> <td>← mid</td> <td></td> </tr>
<tr> <td>Gene</td> <td>a[5]</td> <td></td> <td>← left</td> </tr>
<tr> <td>Harry</td> <td>a[6]</td> <td></td> <td>← mid</td> </tr>
<tr> <td>Igor</td> <td>a[7]</td> <td></td> <td></td> </tr>
<tr> <td>Jose</td> <td>a[8]</td> <td>← right</td> <td>← right</td> </tr>
</tbody>
</table>

## How Fast Is Binary Search?

- **Best case:** 1 comparison
- **Worst case:** when the target is not in the array
- At each pass, the "live" portion of the array is narrowed to half the previous size
- The series proceeds like this: $n, \frac{n}{2}, \frac{n}{4}, \frac{n}{8}, \ldots$
- Each term in the series represents one comparison; how long does it take to get to 1?
- That count will be the number of comparisons

## Binary Search (con.)

- Could start at 1 and double until we get to $n$:

\[ 1, 2, 4, 8, 16, \ldots, k \geq n \quad \text{or} \quad 2^0, 2^1, 2^2, 2^3, 2^4, \ldots, 2^c \geq n \]

- The length of this series is $c + 1$
- The question is: 2 to what power $c$ is greater than or equal to $n$?
  - if $n$ is 8, $c$ is 3
  - if $n$ is 1024, $c$ is 10
  - if $n$ is 16,777,216, $c$ is 24
- Binary search runs in $O(\log n)$, *logarithmic*, time

## Comparing $O(n)$ to $O(\log n)$

Rates of growth of logarithmic functions:

<table>
<thead>
<tr> <th>Power of 2</th> <th>n</th> <th>$\log_2 n$</th> </tr>
</thead>
<tbody>
<tr> <td>$2^4$</td> <td>16</td> <td>4</td> </tr>
<tr> <td>$2^8$</td> <td>256</td> <td>8</td> </tr>
<tr> <td>$2^{12}$</td> <td>4,096</td> <td>12</td> </tr>
<tr> <td>$2^{24}$</td> <td>16,777,216</td> <td>24</td> </tr>
</tbody>
</table>

*(Figure: graph illustrating the relative growth of $n$, $\log n$, and $n^2$.)*

## Other Logarithm Examples

The guessing game: guess a number from 1 to 100.
- Try the middle; you could be right
- If your guess is too high, check near the middle of 1..49
- If it is too low, check near the middle of 51..100
- You should find the answer in a maximum of 7 tries
- If the range is 1..250: $2^c \geq 250$ gives a maximum of $c = 8$ tries
- If 1..500: $2^c \geq 500$ gives $c = 9$
- If 1..1000: $2^c \geq 1000$ gives $c = 10$

## Logarithmic Explosion

- Assume an infinitely large piece of paper that can be cut in half, layered, and cut in half again as often as you wish
- How many times do you need to cut and layer until the paper thickness reaches the moon?
- Assumptions:
  - the paper is 0.002 inches thick
  - the distance to the moon is 240,000 miles
  - 240,000 miles × 5,280 feet per mile × 12 inches per foot = 15,206,400,000 inches to the moon

## Examples of Logarithmic Explosion

- The number of bits required to store a binary number is logarithmic
  - Add 1 bit to store much larger ints:
    - 8 bits store 256 values: $\log_2 256 = 8$
    - $\log_2 2,147,483,648 = 31$
- The inventor of chess asked the Emperor to be paid like this: 1 grain of rice on the first square, 2 on the next, doubling the grains on each successive square, so the last of the 64 squares holds $2^{63}$ grains

## Compare Sequential and Binary Search

Output from CompareSearches.java (1995), searching for 20000 objects:

**Binary Search**
- #Comparisons: 267248
- Average: 13
- Run time: 20 ms

**Sequential Search**
- #Comparisons: 200010000
- Average: 10000
- Run time: 9930 ms

Difference in comparisons: 199742752. Difference in milliseconds: 9910.

## $O(n^2)$: Quadratic

- $O(n^2)$ reads "on the order of $n$ squared", or quadratic
- When $n$ is small, rates of growth don't matter
- Quadratic algorithms are greatly affected by increases in $n$
- Consider the selection sort algorithm: find the largest, $n - 1$ times

## Actual Observed Data for an $O(n^2)$ Sort

*(Figure: time in seconds required to sort an array of size $n$, with $n$ in thousands.)*

## Two $O(n^2)$ Algorithms

- Many known sorting algorithms are $O(n^2)$
- Given $n$ points, find the pair that are closest:
  - Compare $p_1$ with $p_2, p_3, p_4, p_5$ (4 comparisons)
  - Compare $p_2$ with $p_3, p_4, p_5$ (3 comparisons)
  - Compare $p_3$ with $p_4, p_5$ (2 comparisons)
  - Compare $p_4$ with $p_5$ (1 comparison)
- When $n$ is 5, make 10 comparisons
- In general, the number of comparisons is

\[ \frac{n(n-1)}{2} = \frac{n^2}{2} - \frac{n}{2} \]

- The highest-order term is $n^2$; drop the coefficient $\frac{1}{2}$ and the runtime is $O(n^2)$

## $O(n^3)$ Algorithms

Naïve multiplication of square matrices `m` and `m2` into `result`:

```java
for (int i = 0; i < m.length; i++) {
    for (int j = 0; j < m2[0].length; j++) {
        for (int k = 0; k < m2.length; k++) {
            result[i][j] += m[i][k] * m2[k][j];
        }
    }
}
```

## Big O and Style Guidelines

- Big O is similar to saying the runtime is less than or equal to the stated bound
- $O(f)$ is an upper bound
- Don't use constants or lower-order terms (these are no-nos for now; you will use coefficients in C Sc 345):
  - $O(n^2 + n)$ should be written $O(n^2)$
  - $O(5500n)$ should be written $O(n)$
  - $O(2.5n)$ should be written $O(n)$

## Properties of Big-O

Summarizing two main properties:
- If $f(n)$ is a sum of several terms, the one with the largest growth rate is kept, and all others are omitted
- If $f(n)$ is a product of several factors, any constants (factors that do not depend on $n$) are omitted, which means you can drop coefficients

We can drop coefficients. Example: if $f(n) = 100n$, then $f(n)$ is $O(n)$.

**Summation of same orders.** This property is useful when an algorithm contains several loops of the same order. Example: if $f_1(n)$ is $O(n)$ and $f_2(n)$ is $O(n)$, then $f_1(n) + f_2(n)$ is $O(n) + O(n)$, which is $O(n)$.

**Summation of different orders.** This property works because we are only concerned with the term of highest growth rate. Example: if $f_1(n)$ is $O(n^2)$ and $f_2(n)$ is $O(n)$, then $f_1(n) + f_2(n) = n^2 + n$, which is $O(n^2)$.

**Product.** This property is useful for analyzing segments of an algorithm with nested loops. Example: if $f_1(n)$ is $O(n^2)$ and $f_2(n)$ is $O(n)$, then $f_1(n) \times f_2(n)$ is $O(n^2) \times O(n)$, which is $O(n^3)$.

## Limitations of Big-Oh Analysis

- Constants sometimes make a difference: $n \log n$ may be faster than $10000n$
- It doesn't differentiate between data in cache memory, in main memory, and on disk; there is a huge time difference to access disk data, thousands of times slower
- The worst case doesn't happen often; it's an overestimate

## Quick Analysis

- Can be less detailed
- The running time of nested loops is the product of each loop's number of iterations
- For several consecutive loops, take the longest-running loop ($3n$ is $O(n)$, after all)

## Runtimes with for Loops

```java
int n = 1000;
int[] x = new int[n];

// O(n)
for (int j = 0; j < n; j++)
    x[j] = 0;

// O(n²)
int sum = 0;
for (int j = 0; j < n; j++)
    for (int k = 0; k < n; k++)
        sum += j * k;
```

- $O(n^3)$:

```java
for (int j = 0; j < n; j++)
    for (int k = 0; k < n; k++)
        for (int l = 0; l < n; l++)
            sum += j * k * l;
```

- $O(n)$:

```java
for (int j = 0; j < n; j++)
    sum++;
for (int j = 0; j < n; j++)
    sum--;
```

- $O(\log n)$:

```java
for (int j = 1; j < n; j = 2 * j)
    sum += j;
```

## Analyze This

```java
public void swap(int[] a, int left, int right) {
    int temp = a[left];
    a[left] = a[right];
    a[right] = temp;
}
```

## Analyze That

```java
for (int j = 0; j < n; j++)
    sum += j;
for (int k = 0; k < n; k++)
    sum += k;
for (int l = 0; l < n; l++)
    sum += l;
```

## Analyze That

```java
for (int j = 0; j < n; j++)
    for (int k = 0; k < n; k++)
        sum += j + k;
for (int l = 0; l < n; l++)
    sum += l;
```

## Analyze This

```java
for (int top = 0; top < n - 1; top++) {
    int smallestIndex = top;
    for (int index = top; index < n; index++) {
        if (a[index] < a[smallestIndex]) {
            smallestIndex = index;
        }
    }
    // Swap smallest to the top index
    swap(a, top, smallestIndex);
}
```
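The comparison counts reported on the CompareSearches slide can be reproduced with a small program that instruments both searches. A minimal sketch (the class and method names here are illustrative, not taken from the book's CompareSearches.java):

```java
// Counts element comparisons to make the O(n) vs O(log n) gap concrete.
public class SearchCompare {
    static long comparisons; // reset before each search, bumped once per element comparison

    // Sequential (linear) search: up to n comparisons, so O(n)
    static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;
            if (a[i] == target)
                return i;
        }
        return -1; // not found
    }

    // Binary search over a sorted array: at most floor(log2 n) + 1 probes, so O(log n)
    static int binarySearch(int[] a, int target) {
        int left = 0, right = a.length - 1;
        while (left <= right) {
            int mid = left + (right - left) / 2; // avoids int overflow for huge arrays
            comparisons++;
            if (a[mid] == target)
                return mid;
            if (target < a[mid])
                right = mid - 1; // live portion is the left half
            else
                left = mid + 1;  // live portion is the right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int n = 1 << 20; // about one million sorted elements
        int[] a = new int[n];
        for (int i = 0; i < n; i++)
            a[i] = i;

        comparisons = 0;
        linearSearch(a, n - 1); // worst case for linear search: exactly n comparisons
        System.out.println("linear comparisons: " + comparisons);

        comparisons = 0;
        binarySearch(a, n - 1); // at most floor(log2 n) + 1 = 21 probes for n = 2^20
        System.out.println("binary comparisons: " + comparisons);
    }
}
```

Run with a larger or smaller `n` to watch the linear count scale with the array while the binary count barely moves.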
Drilling for Data

ABBYY, Agile Upstream and Alta Mesa automate lease management in the Oil & Gas Upstream market.

Sue Feldman, CEO, Synthexis

**Executive Summary**

The oil and gas industry runs on information. Quick access to the right information, and better, faster analysis of documents, make all the difference in gaining a competitive edge. It makes sense, then, to streamline the data pipeline: from entering the data into a land system to analysis, alerting, and decision-making.

Land management is something of a labyrinth. Automating it requires extensive knowledge of the industry and an in-depth understanding of acquisition and divestiture processes. There are significant hurdles to be leaped in the journey to automation. Buying and selling assets is a complex process, and contracts are rarely straightforward. Contract stipulations can be dense and unpredictable. Multiple documents, sometimes with conflicting rights, apply to a single property. They may go back decades or more. For that reason, oil and gas companies must make their way through a welter of what is usually aging paper. Little of it is digital. The analysts who manage this complexity daily must understand and retrieve not only the original contract but also the related documents and additions.

To create an effective, well-designed application that fits the needs of a specific task like this one, technology vendors, industry experts and practitioners must work together. In this case study, ABBYY supplied the language-based technology foundation to power advanced information extraction. They constructed a specialized knowledge base for the oil and gas industry that their system used to accurately understand and retrieve information from leases. Agile Upstream was ABBYY's development partner, designing the user interaction and supplying the process knowledge. They provided the big picture of the role that lease and document management plays in the oil and gas industry.
Together, they developed a highly usable application. They also brought in Alta Mesa Holdings, their client: the right test case to help develop a practical working application.

"At the end of the day, the oil and gas business is an information business. You need to keep your house in order from a data standpoint to succeed." --Alta Mesa

**The Story**

In 2015, ABBYY was seeking a significant project to demonstrate the value of their new semantic platform for information management. They realized that the oil and gas industry, overwhelmed by data, was ripe for this kind of highly integrated approach to managing information. At the same time, Agile Upstream was looking for a technology design partner to help them co-develop a Land Insights system. Their customer, Alta Mesa, an upstream oil and gas company, wanted to streamline its land lease management process to tackle these problems.

Alta Mesa Holdings, LP is a mid-sized oil and gas company engaged in onshore oil and natural gas exploration and production. It has approximately 230 employees and operations in Texas, Oklahoma and Louisiana. The Land Department, run by David Murrell, VP of Land and Business Development, acquires oil and gas leases to gain access to minerals. The process of deciding which leases to buy and where to drill is highly sensitive and proprietary. Alta Mesa depends on 3D seismic and other geological and technical data, and tends to look for under-developed or overlooked areas to which they can apply new geological techniques in order to extract more oil and gas. In their quest to find these overlooked drilling locations, they are adept at analyzing various data sources and at acquiring and analyzing oil and gas leases.
Lease analysis supports their acquisition strategy. The volume varies with the level of activity, from a handful to upwards of several hundred leases in a given week, from which they expect to extract key facts like the description, the date of expiration, and provisions that trigger dollar compensation. They need to verify leases, to identify conflicts among related documents such as LPRs and plats, and to understand all the provisions. In the case of an acquisition, they can receive multiple physical boxes as well as virtual collections of leases and related documents. It takes a lot of personnel and time to verify and integrate a set of new files.

Alta Mesa recognized that it would be beneficial to improve its efficiencies and thereby get a faster handle on acquisition and divestiture documentation. When a timely conversion was not available, their analysts had to sort, classify and enter data into their land management system, a process that takes time on top of day-to-day activities. They wished to streamline acquisition processing: to image and analyze their boxes of paper leases quickly, and to integrate that information into the system they already have in order to keep on top of expiring drilling rights and ownership specifications. Alta Mesa was interested in new approaches to integrating oil and gas lease information into its current system and database.

Agile Upstream is a technology integrator for the oil and gas industry. They design process-based software that integrates data and applications to streamline specific industry workflows such as capital budget management, accounts payable, well management, or joint interest billing. Agile creates portals and data rooms that allow buyers and sellers to search, collate and "slice and dice" documents quickly and accurately.
Land lease management processes depend today on stove-piped applications in which the same data is entered multiple times, sapping productivity and leading to errors and inconsistencies. Agile realized that too much information was getting lost in the cracks between the applications. This was apparent to them as they tackled the same problems in customer after customer. They realized that the need was industry-wide and that a common infrastructure and document standards would enable companies to exchange documents, perform due diligence, and speed up mergers and acquisitions. Agile was looking for a new approach to knit the applications together and to minimize data entry. They needed a well-integrated technology platform that could ingest, manage and analyze paper and digital documents more intelligently. Their goal was to create a Land Insights system that bundled together features from content and records management, retention and governance, analytics, extraction monitoring and reporting. Because regulations vary by locality, the system had to be smart enough to apply the right regulations to each property. A Land Insights system combining cutting edge cognitive computing capabilities with domain specific knowledge in oil and gas could transform the way oil and gas leases are managed - significantly increasing operational efficiencies and driving massive value to acquisitions and divestitures. ABBYY is a technology and solutions company, founded in 1989. Their software tools run the gamut from OCR and recognition to document analysis, text analytics, classification and even inferencing. They needed a showcase for the new ABBYY InfoExtractor (AIE) SDK that can identify and extract entities, facts and relationships from complex documents. AIE is designed as a platform for information-intensive Smart Business Process Applications, to provide content-based intelligence to improve analysis and accelerate decision-making. 
ABBYY InfoExtractor incorporates Compreno, ABBYY's linguistic technology foundation, which supplies text analytic capabilities at every software level. Compreno's advanced semantic analysis extracts concepts and terms, reconstructing facts, events and storylines and storing them in a graph database for pattern matching and reasoning. These insights enhance information discovery and search accuracy, and are the basis for visualizing collections of information effectively. The ideal of this highly integrated platform is to drive a process-driven document pipeline that can support information-dependent processes like the oil and gas industry's appetite for acquisitions and divestitures.

The implications of this approach are myriad. Certainly, by deploying this kind of information platform an oil company can streamline its acquisition and divestiture process as well as increase accuracy and productivity. However, there is also potential upside for the entire industry, and its impact grows as other players standardize on this approach. Using a shared domain-specific terminology, or ontology, to extract facts, events and storylines from complex leases will provide much greater information transparency for all parties involved in M&A transactions, accelerating deal making while controlling risk and optimizing results across the board.

**The Challenge**

To develop an application that will support property acquisition and divestiture in the oil and gas industry. The system must create a single accessible collection from both paper and digital documents, then uniformly extract facts such as lease terms, locations, and drilling rights no matter how they are phrased. It must find relationships among leases and related documents.
Furthermore, the system must: - Support ad hoc querying and analysis without relying on schemas or specialized query languages - Connect to standard land management and document management systems to update databases and keep all systems in sync - Provide a process-based visual interface to support all land lease management processes - Automate acquisition and divestiture processes, but with manual override as necessary - Be easy to use for non-technical users - Create an integrated pipeline from paper document to acquisition/divestiture analysis - Adapt and evolve as terminology and the business change Property acquisition and divestiture are two sides of the same coin. In both cases, all documents relating to a property must be located, and then scrutinized to determine the value of the property to the current business. The information must be accurate, displayed so that it is quickly understood, and show changes in ownership over time. The ability to trace the system’s interpretation of a term back to the original wording is a requirement. The system must be able to adapt to and integrate new information and terminology. It must evolve as situations change. The machine and the human must interact smoothly and complement each other, with the system managing predictable and repetitive tasks, and the human overseeing the process and making corrections that can improve the machine learning process. Mining Oil and Gas Leases Like any contract, oil and gas leases are long and convoluted. No two are alike in wording or format. Extracting the salient names, locations, terms and conditions for entry into lease management systems has been a manual operation that is time consuming and error prone. Streamlining this process will save time and money. More, it may make a significant difference in the profits or losses of the company. 
We have the technology today to build a software solution that automatically extracts names, lease terms, time, and locations, as Agile Upstream has done with Agile Land Insights and the underlying ABBYY Compreno technology. The color-coded example below is taken from a lease that is 3,400 words long. It shows some of the features that Alta Mesa needed to identify and store in its Land Lease Management system: **Contract Extract** THIS AGREEMENT made this 22nd day of October 2014, between GEORGE COSTANZA and wife, SUSAN ROSS, lessor, whose address is 1560 SHOTWELL LANE, ROUND TOWN, TX 72345 and VANDELY ENERGY RESOURCES, LLC, 12345 KATY FREEWAY, SUITE 100, HOUSTON, TX 77777, lessee. 1. Lessor, in consideration of Ten dollars … does hereby grant, lease and let unto lessee the land … with the exclusive right of exploring, drilling, mining and operating for, producing and owning oil, gas (including carbon dioxide), sulphur and all other minerals, … right to make surveys …, lay pipe lines, establish and utilize facilities for surface or subsurface disposal of salt water, construct roads and bridges, dig canals, build tanks, power stations, power lines, telephone lines, employee houses and other structures on said land. The land is located in the County of CLARKE, State of ALABAMA, and is described as follows. TOWNSHIP 5 NORTH, RANGE 2 EAST SECTION 3; … 400 feet South and 500 feet West of the NE corner of the SE 1/4 of the NE 1/4 of Section 3; thence South 600 feet; thence West 700 feet; thence North 800 feet; thence North 80 degrees East 800 feet; thence East 100 feet to the point of beginning, less and except the 80 foot right-of-way of the Kramer & David Road and being a part of the SE 1/4 of the NE 1/4 of Section 3. Lessor does hereby let and lease to Lessee all their mineral interest owned or claimed in Section 24 Township 6 North, Range 2 East, … said land shall be deemed to contain 2.80 acres. 
Lessee and agrees: (a) To deliver to the credit of lessor, one-eighth part of all oil produced and saved by lessee … to bear one-eighth of the cost of treating oil to render it marketable pipe line oil; (b) To pay lessee on gas and casinghead gas produced … one-eighth of the amount realized by lessee; (c) To pay lessee on all other minerals mined and marketed or utilized by lessee from said land, one-tenth either in kind or value at the well or mine at lessee’s election, except that on sulphur mined and marketed the royalty shall be one dollar ($1.00) per ton long ton … this lease shall continue in force as though operations were being conducted … after the expiration of the primary term, subject then to the terms of the royalty clause, as long as operations are conducted on said land; then at or before the expiration of said ninety day period, unless then paying the royalty … one dollar ($1.00) for each acre of land. … not more than 80 surface acres plus 10% acreage tolerance … 6. This is a PAID-UP LEASE … Lessee shall not be obligated … to commence or continue any operations during the primary term. … No well shall be drilled nearer than 200 feet to the house or barn … Lessee shall pay for damages caused by its operations to growing crops and timber on said land. Obligations of Lessee - [If] there are no operations on said land, then at or before the expiration of said ninety day period, no royalty shall be paid. - No well shall be drilled nearer than 200 feet to the house or barn … Lessee shall pay for damages caused by its operations to growing crops and timber on said land. Rights Granted by Lessee - right of exploring, drilling, mining and operating for, producing and owning oil, gas (including carbon dioxide), sulphur and all other minerals. - Lessee does hereby let and lease to Lessee all their mineral interest owned or claimed in Section 24 Township 6 North, Range 2 East, … said land shall be deemed to contain 2.80 acres. 
Monetary Considerations - Ten dollars to deliver to the credit of lessee … one-eighth part of all oil produced and saved by lessee … to bear one-eighth of the cost of treating oil to render it marketable pipe line oil; (b) To pay lessee on gas and casinghead gas produced … one-eighth of the amount realized by lessee; (c) To pay lessee on all other minerals mined and owned or claimed in Section 24 Township 6 North, Range 2 East, … said land shall be deemed to contain 2.80 acres. Other Numeric Terms - 2.80 acres. - not more than 80 surface acres plus 10% acreage tolerance. The Technology ABBYY InfoExtractor is “taught” to identify each of these elements of meaning with a set of examples. The system must be able to identify multiple expressions of the same idea, as a human would. Any landman can tell you that a “Pugh clause” and a “freestone rider” are synonyms, but this is not easy for a machine to do. However, given the right technology, it’s possible to “teach” a computer how to understand text. Machines learn to use a variety of clues to help them understand what a word means, just as people do. They rely on what they already know about a topic, as well as on how the word is used, in order to triangulate on what a word means within a given sentence or paragraph. For high accuracy, good information access systems today build a dictionary of specialized terms and their synonyms—a “knowledge base.” More advanced knowledge bases take advantage of ontologies that define classes of entities, their attributes, and relationships within a specific industry use case. These knowledge bases, along with sample documents, are used to train the system to understand industry concepts. In order to meet the requirements for land lease management, the underlying technology has to understand language on multiple levels. Specifically, it needs to: - Find concepts, no matter how they are worded. This includes recognizing all the variations in phrasing for specialized terms and concepts.
- Merge similar concepts across structured databases and text documents, no matter how they are expressed. - Recognize documents that are related in some way, even if the similarities are approximate rather than exact. For instance, a piece of property might be described as an address or a set of coordinates. Or the property may have changed ownership. - Adapt to changes in how leases are worded and add new terms as they arise. A completely rule-based system is too rigid to support work in a changing and dynamic industry. - Understand the natural language that a user might choose in querying a system so that users don’t have to learn a specialized query language. - Scale to ingest paper and digital documents when they arrive in large batches, as well as singly. Systems use text analytics to identify the important elements in a text. For instance, they may: - Extract names—of places, people, events, objects or organizations - Extract expressions of time (e.g., dates for deadlines but also lengths of time such as “two weeks”) and also construct timelines for a sequence of documents or to determine which provisions are the most current - Extract geographic locations in a variety of formats and relate them all to a location on a map - Extract details such as amounts, costs, or contingencies - Extract properties for each class of entity (e.g., places have locations and addresses; events have speakers, who have names and affiliations) - Extract the type of relationship that exists between all the entities and events (e.g., acquired by, owned by, defaulted, sold, etc.) - Extract the entire fact: a computable “who did what to whom” and store it for analysis **Fact of Acquisition** “Exxon acquired Mobil for $80 billion.” ![Diagram of Fact of Acquisition] **ABBYY InfoExtractor** examines the relationship of one word to another. It extracts and stores “triples” (or “facts”) of subject (Bill), verb (hit) and object (Bob).
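The subject-verb-object “triple” idea can be illustrated with a deliberately naive sketch. This is not ABBYY’s implementation (InfoExtractor relies on full semantic parsing); the heuristic below merely guesses the verb as the first word ending in “-ed”, with everything before it as the subject and the following word as the object:

```python
def extract_triple(sentence):
    """Toy subject-verb-object splitter. Real systems such as ABBYY
    InfoExtractor use full semantic analysis; this heuristic only
    handles trivially simple active-voice sentences."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words[1:], start=1):
        if w.endswith("ed") and i + 1 < len(words):
            return (" ".join(words[:i]), w, words[i + 1])
    return None

print(extract_triple("Exxon acquired Mobil for $80 billion."))
# → ('Exxon', 'acquired', 'Mobil')
```

Storing such triples in a graph database, as Compreno does, is what later makes “who did what to whom” questions computable.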
Unlike a traditional search engine, a semantic system must be able to differentiate a company that was acquired from the company that acquired it. ABBYY’s use of linguistic technology throughout its stack, from data capture to fact extraction, makes it possible to find these meaning linkages across documents and decades. The metadata that is extracted helps users explore a large collection to find hidden or unknown relationships. Strong workflow and visual interfaces hide the underlying complexity and invite simple navigation that steps the user through a task. With a strong linguistic foundation, it’s possible to ask complex questions like, “Which Wolfcamp prospects do I have to drill in the next 6 months to retain the lease?” This new, adaptive, interactive type of application brings us squarely into the realm of cognitive computing. It is meaning-based, and designed to solve complex problems. The Design The goal of any software application is to solve a problem by making it easy to accomplish a specific task. Any successful application is rooted in good technology. But surfacing the features of that technology platform so that they are apparent to the user, and so that they fit conveniently within the task’s workflow, is both an art and a technical challenge. Agile melded their knowledge of the land lease management process with the features of ABBYY’s Compreno technology to design an interface that will boost productivity and accuracy. In the screenshot below from Agile Land Insights, you can see that they have taken the elements identified by ABBYY InfoExtractor—names of the lessor and lessee, date, locations, contract terms—and used them to help analysts navigate a contract. The expert is also able to validate the accuracy of the system by comparing the OCR text and extractions with an image of the original lease.
This design fits the normal processes the user follows in the many tasks that the underlying information will be used to support, from finding the terms of a specific lease to understanding the company’s holdings in a larger geographic area. It supports answering specific questions with high precision, as well as the larger overviews that are necessary for the company to get an analytic and/or visual understanding of its holdings. Once the elements needed to navigate the information were established, the next step for Agile was to analyze and codify the processes that are germane to land lease management. For example, as shown in the diagram below, the document ingestion and access process digitizes a document, uses semantic data capture to extract the text, and stores the extracted terms and relationships in both the land management system of record and the linguistic knowledge base. **Agile Land Insights** Users can then access the system to answer questions such as: - Find all documents related to Property X - What leases will expire in July? - What do we already own that is near Property Y, and what properties might we acquire that are available? Working with ABBYY, they connected the information sources that will feed information to the system and normalized across data types. They also created standardized outputs to feed the data into Alta Mesa’s Land Management system, replacing much of the manual data entry now required. Finally, Agile designed and tested the user interface, based on their analysis of business processes such as acquisition and divestiture. The resulting application encapsulates process knowledge and documents in a set of tools that support land lease management, taking boxes of paper documents and turning them into a digital information foundation for the business.
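Once facts like these land in a structured store, a question such as “What leases will expire in July?” reduces to a simple filter. A hypothetical sketch, with invented field names and records (Agile’s actual schema is not described in this paper):

```python
from datetime import date

# Hypothetical records as they might look after extraction into a
# land-management store; fields and values are illustrative only.
leases = [
    {"id": "L-001", "lessor": "George Costanza", "county": "Clarke",
     "state": "AL", "expires": date(2024, 7, 15)},
    {"id": "L-002", "lessor": "Susan Ross", "county": "Clarke",
     "state": "AL", "expires": date(2024, 9, 1)},
]

def expiring_in(records, year, month):
    """Answer a question like 'What leases will expire in July?'"""
    return [r["id"] for r in records
            if r["expires"].year == year and r["expires"].month == month]

print(expiring_in(leases, 2024, 7))  # → ['L-001']
```

The point of the extraction pipeline is precisely that such queries become trivial once the lease terms exist as data rather than prose.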
Conclusion Far from the frenzy of Internet IPOs and tech bubbles, companies like ABBYY, Agile Upstream, and Alta Mesa are solving practical problems that increase revenue, streamline processes and improve productivity. Measurably. Agile Land Insights, powered by ABBYY InfoExtractor, gives oil and gas upstream companies like Alta Mesa a significant information advantage over their competitors because it enables them to analyze more leases at greater depth and speed. A quicker understanding of their current holdings will give them greater agility to react to the fluctuations of oil prices in the marketplace. They should also be able to optimize production for each property by being alerted to lease terms—drilling depths, ownership, or expirations—in a timely manner. Both Alta Mesa and Agile Upstream believe that **systems like Agile Land Insights will produce significant return on investment for the companies.** Automating lease management will reduce the data entry error rate. Even more importantly, a system such as Agile Land Insights, based on a specific oil and gas ontology developed using ABBYY InfoExtractor, will increase the information transparency of transactions. Transaction parties will be able to work together more efficiently and optimize their respective interests faster, streamlining the all-important merger, acquisition, and divestiture processes that lie at the heart of the oil and gas industry. That should be a win across the entire industry. We believe that Agile Land Insights represents a significant trend in the technology industry. As advanced semantic platforms are embedded within process-oriented, task-based applications, the technology is hidden, and the user is able to simply work, without having to become a technical wizard first. These early cognitive systems adapt to new information and will help, not hinder, users in getting their work done better and faster.
About Synthexis Synthexis is a research-focused investigative and consulting firm working in the area of emerging software technologies. Synthexis provides business advisory services to vendors and buyers of cognitive computing, search and text analytics technologies. We maintain active research programs in these areas, write about these topics, speak at industry conferences, craft marketing messages, and analyze strategy and positioning for our clients. Our analysts represent over 50 years in the IT and online industries. Synthexis and its affiliates advise vendors on market trends, potential alliances and competitors. We guide their customers in setting information strategies, measuring ROI, and understanding how current and future market trends and demand patterns will shape the implementation of tomorrow’s systems. The Synthexis Blog can be found at www.synthexis.com. About the Author Sue Feldman is founder and CEO of Synthexis. She speaks frequently at industry events on topics such as trends in computing, conversational systems, big data technologies, and the hidden costs of information work. She wrote the chapter on search engines for the Encyclopedia of Library and Information Science, and was the first editor of the IEEE Computer Society’s Digital Library News. In her book, The Answer Machine (Morgan & Claypool, 2012), Sue discusses the technologies behind information seeking and analysis, and their central role in the future of computing. Before founding Synthexis, Sue was Vice President for Search and Discovery Technologies at IDC (International Data Corporation), where she directed research on the technologies and markets for search, text analytics, categorization, translation, mobile and rich media search. Prior to coming to IDC, Ms. Feldman was founder and president of Datasearch, an independent technology advisory firm, where she consulted on usability and on information retrieval technologies.
She is a founder and former president of the Association of Independent Information Professionals, a member of the Association for Computing Machinery, and has won numerous research and writing awards. Ms. Feldman holds degrees from Cornell University in linguistics and from the University of Michigan in information science. Synthexis white papers place a product or service within the context of our independent market and technology research. All opinions are our own.
Abstract—Linked open data (LOD), as provided by a quickly growing number of sources constitutes a wealth of easily accessible information. However, this data is not easy to understand. It is usually provided as a set of (RDF) triples, often enough in the form of enormous files covering many domains. What is more, the data usually has a loose structure when it is derived from end-user generated sources, such as Wikipedia. Finally, the quality of the actual data is also worrisome, because it may be incomplete, poorly formatted, inconsistent, etc. To understand and profile such linked open data, traditional data profiling methods do not suffice. With ProLOD, we propose a suite of methods ranging from the domain level (clustering, labeling), via the schema level (matching, disambiguation), to the data level (data type detection, pattern detection, value distribution). Packaged into an interactive, web-based tool, they allow iterative exploration and discovery of new LOD sources. Thus, users can quickly gauge the relevance of the source for the problem at hand (e.g., some integration task), focus on and explore the relevant subset. I. PROFILING LINKED OPEN DATA Data profiling comprises a well established set of basic operations, which analyze a (relational) dataset and create metadata that is useful to understand the data and to detect irregularities. Profiling is mostly performed in a column-by-column manner, for instance to detect frequent value patterns or the uniqueness of column values. Common profiling methods and tools have the underlying assumption of a well-defined semantics of the column and mostly regular data. These assumptions do not hold for linked open data (LOD) published on the web. Such data emerge from different sources, such as open source communities (e.g., Wikipedia) or projects dedicated to a specific topic (e.g., DrugBank [1]). 
These diverse origins cause a diversity in how information is expressed as data values and how these values are structured. Nevertheless, these datasets interlink each other. The overall LOD vision is to enable the generation of new knowledge based on a wealth of widely available interlinked data. However, leveraging the variety of such open data requires (i) an initial understanding of each single dataset and (ii) an overview of the available data as a whole. Only then can data analysts focus on the required subset of LOD for the problem at hand. Classical profiling techniques are, to the best of our knowledge, not appropriate to deal with these new massive sets of open (and thus heterogeneous) data. We propose a new iterative and interactive methodology for profiling LOD. We envision a process that allows a user to divide data into groups, review simple statistics or sophisticated mining results on a group level, and then rethink grouping decisions in order to revise them for refining the profiling result. In this paper, we report on ProLOD, an initial prototype we developed to step towards this vision. As a proof of concept, we concentrate on the infobox (without ontology mappings) and short abstract data subset of DBpedia 3.2 [2]. DBpedia is generated from Wikipedia and comprises 34.2 million triples. II. RELATED WORK Technologies for dealing with large unstructured datasets are currently under development in two distinct communities, namely the Semantic Web community as well as the database community, where relevant techniques emerge from the DataSpace area. Such large datasets are, on the one hand, published on the Web according to a set of best practices [3], whereas, on the other hand, the data stems from heterogeneous sources within a specific scope, such as enterprises or scientific collaborations [4]. Currently, several projects attempt to visualize heterogeneous data.
We briefly discuss two examples: Tabulator [5] is an RDF browser that is designed to provoke interest in the Semantic Web for new users as well as to support instant feedback for developers. The authors promote two different views – Explore and Analyze. The first allows the traversal of an RDF graph, while the latter enables the analysis of aggregated data that match a query. The Vispedia project [6] is an approach to interactively visualize Wikipedia infoboxes. It allows a user to select an infobox and define a keyword query, which the system evaluates on the semantic graph of Wikipedia to extract supplemental information. The result is presented on a map, a timeline, or in a graph. Some database research also deals with huge amounts of triples, since triples are a generic way of representing data. Here, the literature deals with indexing [7], or techniques for property and path analysis with limited schema information [8]. It also discusses storage systems [9] as well as query and data models [8], [10]. As for data profiling in general, there is a wealth of approaches that can be used to grasp a given dataset, e.g., functional dependency discovery [6], and join path exploration [11]. The Bellman project [12] integrates a set of techniques to address poorly structured and dirty data. Additionally, there are numerous tools from commercial vendors [13], [14]. However, these tools are inappropriate for unstructured and heterogeneous datasets, such as DBpedia: (1) Analysis focuses on (one or more) columns, which are not present here. (2) Mining a dataset’s structure is not possible. (3) Analyses assume some sort of clean data, which a data analyst aims to understand – not a dataset that needs to be structured, cleansed, and understood at the same time. III. CLUSTERING AND LABELING Clustering is a preliminary requirement when dealing with large and heterogeneous datasets, because millions of triples as a whole do not allow one to derive meaningful results.
A predicate, such as length, could appear in a large variety of contexts and domains. Calculating value distributions over all its values can be meaningless. We argue that a good clustering can divide the data into semantically correlated (sub)datasets, for which meaningful and insightful profiling results are possible. As a side effect, clusters partition data into smaller subsets, better enabling on-the-fly profiling. Difficulties include the unknown number of clusters, the size of the dataset, and the meaningful labeling of discovered clusters. Our prototype provides an initial pre-calculated hierarchical clustering of the dataset to profile, which can be refined interactively at runtime. The root cluster represents the complete dataset and each cluster can itself be divided into several subclusters. In our prototype, the clustering component is the only part that performs computation on the entire dataset – namely while computing the initial clusters. All further profiling steps are implicitly performed only on entities of the cluster in focus. In order to deal with the potentially enormous number of triples during clustering, we employ several strategies. (1) The initial clustering is computed in a pre-processing step. (2) Clustering is done using a schema-based similarity measure. Thus, it only needs to take into account the existence of predicates of an entity, not their actual values. (3) During clustering, our prototype employs several heuristics; for example, it uses a cluster-internal dissimilarity threshold to decide whether to further split a cluster into subclusters. Clustering is based on the well-known K-Means algorithm, which is fast and has been shown to yield near-optimal results in most cases. Since the number of clusters is unknown, we have implemented a version that iteratively increments the number of clusters, as well as one version that recursively invokes itself on clusters in order to create a cluster hierarchy.
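The schema-based clustering step can be sketched as follows. This is a simplified stand-in for the prototype, not its actual code: it omits the heuristics described above (dissimilarity threshold, recursive splitting), uses deterministic farthest-first seeding, and the entity names and predicate columns are illustrative:

```python
def sqdist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=10):
    """Plain K-Means over binary predicate-presence vectors:
    only the existence of predicates matters, not their values."""
    # Farthest-first seeding keeps this toy version deterministic.
    centroids = [vectors[0]]
    while len(centroids) < k:
        centroids.append(max(vectors,
            key=lambda v: min(sqdist(v, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k),
                key=lambda i: sqdist(v, centroids[i]))].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [[sum(col) / len(cl) for col in zip(*cl)] if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return clusters

# Entities reduced to which predicates they use
# (columns: birthDate, party, starring, director -- assumed names):
entities = {
    "Angela_Merkel":  [1, 1, 0, 0],
    "Ted_Kennedy":    [1, 1, 0, 0],
    "Titanic_(film)": [0, 0, 1, 1],
    "Metropolis":     [0, 0, 1, 1],
}
clusters = kmeans(list(entities.values()), k=2)
print([len(c) for c in clusters])  # → [2, 2]
```

The politician and film entities separate cleanly because their predicate sets barely overlap, which is exactly the property the schema-based similarity measure exploits.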
For each cluster, we compute a mean schema: It consists of a set of predicates that together form a schema that is distinctive for its cluster and, therefore, constitutes a brief summary of the structure of its entities. Currently, the mean schema is computed during clustering, where the n most frequently occurring predicates of a cluster are selected for its mean schema. We determine n, the size of the mean schema, as the average schema size of all entities in that cluster. To give users more insightful feedback about the content of the clusters, a labeling is required. Because all entities of a cluster are semantically correlated, they should have some common characteristics. In general, many entities have some textual description such as rdfs:comment or an equivalent property. Since we do not want to concentrate on a predefined setting, we treat all strings of length l as texts describing an entity. Here, manual reviews have shown that we get good results for l = 100. We compute tf-idf values to determine the m most important terms of each cluster and use them as the cluster label. Table I shows some examples. Alternatively, if no textual description could be found, the top m properties of the cluster mean schema are used as the label. <table> <thead> <tr> <th>Label</th> <th>Sample subjects</th> </tr> </thead> <tbody> <tr> <td>minister, politician, mayor</td> <td>Angela_Merkel</td> </tr> <tr> <td></td> <td>Ted_Kennedy</td> </tr> <tr> <td></td> <td>Presidency_of_George_Washington</td> </tr> <tr> <td>film directed, starring</td> <td>Titanic_(1997_film)</td> </tr> <tr> <td></td> <td>Metropolis_(film)</td> </tr> <tr> <td></td> <td>Frankenstein_(1910_film)</td> </tr> <tr> <td>club football league</td> <td>FC_Bayern_Munich</td> </tr> <tr> <td></td> <td>Liverpool_F.C.</td> </tr> <tr> <td></td> <td>Los_Angeles_Galaxy</td> </tr> </tbody> </table> TABLE I EXEMPLARY CLUSTER LABELS AND CORRESPONDING SUBJECTS FOR m = 3
IV. SCHEMA DISCOVERY After having clustered the data into semantically correlated subsets, a natural next step is to perform more detailed schema discovery for each of these clusters, beyond the mean schema. Such schema discovery enables an initial understanding of the actual structure of the data, because a set of triples does not expose much structural information. We propose a process that includes determining the actual schema (e.g., the distinct attributes of a cluster), finding equivalent attributes (e.g., name, family name, and surname), and discovering poor attributes, i.e., those that do not contain useful values for most data entries. More sophisticated analyses discover attribute correlations, such as association rules, inverse relations, or foreign key relationships. Note that by attributes we mean predicates that have clean and unified semantics. Our tool detects positive and negative association rules that show occurrence dependencies among predicates within a cluster. For a book entity from a media cluster, for example, predicate combinations with high combined occurrence are author, isbn, and genre. To detect these predicate co-occurrences, a version of the Apriori algorithm [15] is used. Afterwards, a straightforward algorithm generates all rules that meet a desired confidence and correlation coefficient ρ. Table II shows a list of generated rules that hold in the media cluster for the aforementioned examples.
<table> <thead> <tr> <th>Rule</th> <th>Confidence</th> <th>Correlation Coefficient</th> </tr> </thead> <tbody> <tr> <td>genre, isbn ⇒ author</td> <td>0.99</td> <td>0.67</td> </tr> <tr> <td>isbn ⇒ author</td> <td>0.92</td> <td>0.66</td> </tr> <tr> <td>isbn ⇒ author, genre</td> <td>0.83</td> <td>0.66</td> </tr> <tr> <td>author, genre ⇒ isbn</td> <td>0.70</td> <td>0.66</td> </tr> <tr> <td>author ⇒ isbn</td> <td>0.64</td> <td>0.66</td> </tr> <tr> <td>author ⇒ genre, isbn</td> <td>0.58</td> <td>0.67</td> </tr> </tbody> </table> TABLE II EXAMPLE RULES WITH 10% SUPPORT, 50% CONFIDENCE, AND ρ = 0.4 A second way of using association rules is to discover negative dependencies among predicates [16]. Such negative rules hint at heterogeneity among the schemata of entities in the current cluster. Either the cluster is poorly built and contains entities from different domains, or the negatively associated predicates have equivalent semantic meanings and therefore never occur together. For instance, we discovered that the predicates author and developer are negatively associated in the media cluster. The rules author ⇒ ¬developer and developer ⇒ ¬author have a confidence above 90%. In the spirit of holistic matching [17], we could conclude semantic equivalence. In fact, ProLOD allows users to mark two attributes within a cluster as equivalent, and further mining will always combine the values from both. The negative association also holds for the predicates title and name, which may lead to the assumption that the media cluster contains different types of media, such as novels and computer games. Hence, using standard procedures to find positive and negative association rules, it is possible to combine the results of both methods and scrutinize assumptions about the schemata of entities. Furthermore, finding association rules within the entire, unclustered dataset may lead to new ideas for clusters.
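The rule generation sketched in this section, restricted to pairwise rules for brevity, filters candidate rules by confidence and a correlation coefficient; we use the phi coefficient over predicate indicator variables as a plausible choice (the paper does not name its ρ exactly). Negative rules a ⇒ ¬b could be scored analogously with confidence 1 − supp(a,b)/supp(a). Thresholds and data are invented for illustration:

```python
from itertools import combinations

def pairwise_rules(entities, min_conf=0.5, min_phi=0.4):
    """Generate pairwise positive rules x => y over entity schemata,
    keeping those that meet confidence and phi-correlation thresholds."""
    n = len(entities)
    preds = sorted({p for e in entities for p in e})
    supp = {p: sum(p in e for e in entities) / n for p in preds}
    rules = []
    for a, b in combinations(preds, 2):
        both = sum(a in e and b in e for e in entities) / n
        # phi coefficient of the two predicate indicator variables
        denom = (supp[a] * (1 - supp[a]) * supp[b] * (1 - supp[b])) ** 0.5
        phi = (both - supp[a] * supp[b]) / denom if denom else 0.0
        for x, y in ((a, b), (b, a)):  # both rule directions
            conf = both / supp[x] if supp[x] else 0.0
            if conf >= min_conf and phi >= min_phi:
                rules.append((x, y, round(conf, 2), round(phi, 2)))
    return rules

# Invented media-cluster schemata: mostly books, two other entities
cluster = ([{"author", "isbn", "genre"}] * 6
           + [{"author", "genre"}] * 2
           + [{"title"}] * 2)
for rule in pairwise_rules(cluster):
    print(rule)
```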
For example, strongly negatively associated schemata would be useful hints for selecting seeds for the K-Means algorithm (left for future work). Another approach to extracting schema information is to examine linked entities. If subject X holds a link to subject Y via predicate A (X −A→ Y), this denotes a semantic connection. A stronger form of interconnection between two potentially correlated entities are links with inverse predicates A and B: if both X −A→ Y and Y −B→ X exist, one can state that these two entities are considerably connected. The more entities are connected via predicates A and B, the higher the correlation of the inverse predicate pair. In DBpedia 3.2, there are approximately 13 million different links between entities; 4% of them form inverse predicate pairs. An example is the relation between the book entity Into the Wild and the author entity Jon Krakauer: Jon_Krakauer −debutWorks→ Into_the_Wild and Into_the_Wild −author→ Jon_Krakauer. Table III shows further examples. Note that [before, after] is the most frequent inverse link pair, yet its correlation coefficient is not as high as the others', because both predicates are also frequently used independently. Additionally, there are some occurrences of before with itself, but the correlation coefficient is negative; this may point to inconsistencies in the data.
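Counting inverse predicate pairs as described above can be sketched as follows; the triples are illustrative (modeled on the Into the Wild example), not actual DBpedia data:

```python
from collections import Counter, defaultdict

def inverse_pairs(triples):
    """For each predicate pair (a, b), count links s -a-> o that are
    answered by a reverse link o -b-> s. Each pair is counted once
    per direction, so a mutual pair contributes 2."""
    by_ends = defaultdict(set)          # (subject, object) -> predicates
    for s, p, o in triples:
        by_ends[(s, o)].add(p)
    pairs = Counter()
    for (s, o), preds in by_ends.items():
        for p in preds:
            for q in by_ends.get((o, s), ()):  # reverse direction
                pairs[tuple(sorted((p, q)))] += 1
    return pairs

triples = [
    ("Jon_Krakauer", "debutWorks", "Into_the_Wild"),
    ("Into_the_Wild", "author", "Jon_Krakauer"),
    ("A", "precededBy", "B"),
    ("B", "followedBy", "A"),
    ("C", "spouse", "D"),
    ("D", "spouse", "C"),
]
print(inverse_pairs(triples).most_common())
```

On a real dataset one would additionally relate these counts to the overall frequency of each predicate to obtain the correlation coefficients reported in Table III.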
<table> <thead> <tr> <th>Predicate A</th> <th>Predicate B</th> <th>Corr. Coef.</th> <th>Frequency</th> </tr> </thead> <tbody> <tr> <td>before</td> <td>after</td> <td>0.239</td> <td>28856</td> </tr> <tr> <td>sisterStations</td> <td>sisterStations</td> <td>0.749</td> <td>7494</td> </tr> <tr> <td>precededBy</td> <td>followedBy</td> <td>0.830</td> <td>7097</td> </tr> <tr> <td>spouse</td> <td>spouse</td> <td>0.322</td> <td>1964</td> </tr> <tr> <td>before</td> <td>before</td> <td>-0.003</td> <td>738</td> </tr> <tr> <td>star</td> <td>exoplanet</td> <td>0.895</td> <td>188</td> </tr> </tbody> </table> TABLE III EXAMPLE INVERSE PREDICATES (ORDERED BY FREQUENCY) As in the debutWorks example above, many links connect entities of different types and, accordingly, of different schemata as well as clusters. In conclusion, analyses of links and inverse predicates can lead to statistics about inter- and intra-cluster relationships, which help to perform a semantic (pre-)clustering. Additionally, the existence of inverse predicates may indicate redundant information about the same fact (e.g., followedBy and precededBy, or spouse with itself).

### V. DATA TYPES AND PATTERN STATISTICS

Having a set of clusters and some knowledge about their structure, traditional profiling methods become applicable. Here, we gather statistics about data types and pattern distributions by analyzing the object values of the RDF triples. In Wikipedia, many users modify articles and use different syntax to represent the same fact. In addition, predicates are often misused when no applicable predicate exists. Profiling statistics support the detection of such discordant values or misused predicates and make it easier to find valid formats for specific predicates. A characteristic of linked data is that an object value can be an internal link to another subject, an external link, or a literal.
Internal links are pointers to data within the local source; we consider only literal values and external links for further statistics, since computing basic statistics for internal links is not useful. Profiling internal links is left for future work. At this point, for simplicity, we also do not yet follow external links. Literals can be of different data types, so type detection using regular expressions is the first layer of profiling object values. In the next step, we determine the value distribution within these data types. Values of numeric data types are divided into ranges. For non-numeric data types, we generate patterns and determine their frequencies. To visualize huge numbers of different patterns (even for a single predicate), we introduce normalized patterns, which summarize patterns by identifying sequences of the same character and reducing each such sequence to a single character. Furthermore, ProLOD is able to drill down to the actual data values represented by a pattern or numeric range. ProLOD statistics are best illustrated by the example predicate zipcode. Usually, zip codes are sequences of digits, i.e., the data type Integer is expected. Due to DBpedia's heterogeneous data, String data was also discovered. One normalized pattern found is a-9, which represents all strings consisting of a sequence of letters and a sequence of digits separated by a sequence of hyphens. In our example, the only pattern normalized this way is AA-99999, which captures zip codes preceded by a country code; ProLOD's drill-down functionality is useful for uncovering such cases.

### VI. THE PROLOD TOOL

ProLOD is a web-based prototype. We store the triples to be profiled in a relational database in order to perform basic database operations. The profiling process is divided into two phases: pre-processing and realtime profiling.
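Pattern generation and normalization, which ProLOD applies when enriching object values during pre-processing, can be sketched as follows; the exact character classes are our assumption, guided by the AA-99999 → a-9 zipcode example above:

```python
import re

def pattern(value):
    """Map a literal to its pattern: digits -> '9', uppercase letters
    -> 'A', lowercase letters -> 'a'; other characters are kept.
    (This alphabet is our assumption; the paper does not spell it out.)"""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append("9")
        elif ch.isupper():
            out.append("A")
        elif ch.islower():
            out.append("a")
        else:
            out.append(ch)
    return "".join(out)

def normalize(pat):
    """Collapse runs of the same symbol to one symbol and case-fold,
    following the paper's AA-99999 -> a-9 reading."""
    return re.sub(r"(.)\1+", r"\1", pat).lower()

print(pattern("DE-10115"))               # AA-99999
print(normalize(pattern("DE-10115")))    # a-9
```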
The purpose of the pre-processing phase is to build additional data structures that enable realtime profiling in the second phase. During pre-processing, our prototype executes several computations on the input data: (1) the triples are transformed and stored in a normalized database schema; (2) each triple is enriched with data type and pattern information; (3) an initial clustering is computed and cluster labels are determined. Since the pre-processing phase is computationally intensive and not yet highly optimized, loading and pre-processing the DBpedia dataset takes about one day. In the realtime profiling phase, the user interacts with the web interface shown in Figure 1. The interface is divided into a cluster tree view on the left and a details view on the right. The cluster view enables users to explore the cluster tree and to select a cluster for further investigation. This view can also be used to modify the clustering by creating new clusters, merging existing ones, or re-clustering with a different k. After selecting a cluster, the user finds facts and statistics about it in the details view, which can also be used to perform schema discovery. Further, it allows cluster modification through additional filtering.

![Fig. 1. The ProLOD web interface.](image)

### VII. SUMMARY AND OUTLOOK

Data profiling, when applied to heterogeneous data, must be extended beyond the typical tasks of determining value distributions and patterns, finding outliers in columns, etc. Rather, profiling tools need to enable users to form an assessment of the data at hand. This begins at the source level, where clustering can help to identify the domain(s) of the data and their representative schemata. The need for assessment continues at the attribute level, where attributes can store data of wildly different semantics and values of the same semantics can be spread over many attributes. Finally, traditional data profiling tasks apply.
Due to the heterogeneity of the underlying data, a profiling tool can no longer offer mere reporting functionality; it must allow users to interactively gain insights into the data during subsequent profiling steps. Hence, such a tool can no longer concentrate on offline reporting, but must incorporate user feedback and perform on-the-fly profiling. With ProLOD, we have developed a corresponding prototype, concentrating on the DBpedia infobox dataset. In its current form, it is only a starting point, implementing a subset of possible tasks\(^1\). An important aspect of future work is scalability to even larger datasets. While DBpedia with its 34 million triples is already formidable, the Billion Triple Challenge\(^2\) hints at typical future data volumes. Finally, we plan to examine possible uses of ProLOD, for instance to help Wikipedia authors in writing infoboxes or to enable rapid integration of LOD sources.

\(^1\)A screencast is available at [http://tinyurl.com/prolod-01](http://tinyurl.com/prolod-01)

\(^2\)http://challenge.semanticweb.org

### Acknowledgements

This research was supported in part by the German Research Society (DFG grant no. NA 432). Additionally, we thank Christian Bizer and Georgi Kobilarov for providing DBpedia to the community and for their valuable feedback on ProLOD.

### REFERENCES
**3G (Third Generation)**: A networking standard that uses cell phone frequencies to send Internet data to handheld devices at up to 144 Kbps. Although not as fast as Wi-Fi (802.11x), it offers broader coverage. A 3G wireless system can provide packet-switched wide-area wireless Internet access to support multimedia applications. **Adobe ColdFusion**: An application server and software development framework used for the development of computer software in general, and dynamic web sites in particular. **Adobe Dreamweaver**: A WYSIWYG (What You See Is What You Get) authoring tool that allows web developers to generate HTML and JavaScript source code while viewing the site as they work. **Adobe Flash**: An authoring environment for creating animation, advertisements, and various web-page components, for integrating video into web pages, and for developing rich Internet applications. Flash Professional is an IDE (Integrated Development Environment), while Flash Player is a virtual machine used to run, or parse, Flash files. **Adobe Photoshop**: A software package for editing images and creating graphics. Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited. **Android**: A software stack for mobile devices that includes an operating system, middleware, and key applications such as contacts. It is a project proposed by the Open Handset Alliance, a group of more than 30 technology and mobile companies including Google, Inc. **Association Rule Discovery**: An association rule takes the form of "if this then that" and is used to associate events in a database. Association rule discovery can be used to find unordered correlations between items found in a set of database transactions. One example is the association between purchased items in a department store. **Automatic Indexing**: Describes a process that algorithmically examines information items to build a data structure that can be quickly searched.
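The automatic-indexing idea above is commonly realized with an inverted index; a minimal sketch (document ids and text are invented for illustration):

```python
def build_index(documents):
    """A tiny automatic indexer: map each term to the ids of the
    documents containing it (an inverted index)."""
    index = {}
    for doc_id, text in documents.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

docs = {1: "mobile web browser", 2: "mobile payment protocol"}
idx = build_index(docs)
print(sorted(idx["mobile"]))  # both documents contain "mobile"
```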
**Bluetooth**: An evolving wireless networking protocol that connects various digital devices, from mobile phones to PDAs to computers. It can transmit wireless signals over a range of about 35 feet and does not require a line-of-sight connection. Bluetooth will not itself allow users to go online, but instead is a way to wirelessly connect to printers, computers, cell phones, and other devices. **BREW (Binary Runtime Environment for Wireless)**: An application development platform created by Qualcomm for CDMA (Code Division Multiple Access)-based mobile phones. The complete BREW solution includes:

- BREW client software and porting tools for device manufacturers;
- BREW Distribution System (BDS), which is controlled and managed by operators and enables them to easily get applications from developers to market and coordinate the billing and payment process; and
- BREW SDK (software development kit) for application developers.

**CDMA (Code Division Multiple Access)**: A type of mobile phone network. Also known as spread spectrum, CDMA cellular systems utilize a single frequency band for all traffic, differentiating the individual transmissions by assigning them unique codes before transmission. This is used by Sprint in the USA and Telus in Canada, for example. **CGI (Common Gateway Interface)**: A specification providing a method of executing server-side programs from a web page. A CGI program is any program designed to accept data from the web page, process the data, and return the results, usually in an HTML format, to the web page for display. **Client-Side Handheld Computing**: The use of handheld devices to perform mobile, handheld operations that do not need the power of server-side computing. Some application examples of this type of computing include (a) address books, (b) stand-alone video games, (c) note pads, and (d) to-do lists. **CPU (Central Processing Unit):** The central unit of a computer, which includes both arithmetic and control units.
The CPU of a microcomputer is usually a microprocessor. **Crawler/Spider:** A program that automatically scans various Web sites and collects Web documents from them. It follows the links on a site to find other relevant pages and is usually used to feed pages to search engines. **Device Independent Authoring Language (DIAL):** DIAL provides a markup language for the filtering and presentation of web page content available across different delivery contexts. It uses a language profile based on existing W3C XML vocabularies and CSS modules and provides standard mechanisms for representing web page structure, presentation and form interaction. **Digital Certificate:** A document issued by a certificate authority (a trusted third party) that contains the certificate holder’s name, a serial number, expiration dates, a copy of the certificate holder’s public encryption key, and the digital signature of the certificate-issuing authority so that a recipient can verify that the certificate is genuine. **Digital Signature:** An electronic signature that can be used to authenticate the identity of the sender of a message or the signer of a document. The sender uses a private encryption key to encrypt their identification and (possibly) some additional information derived from the message that is being signed. The receiver uses a sender’s public key to recover the identification and additional information, which is used to check the integrity of the received message. **Electronic Commerce or E-Commerce:** The exchange or buying and selling of commodities, services, or information or the transfer of funds on the Internet using desktop or notebook computers. **Expansion (Card) Slot:** The opening found on handheld devices where expansion cards are inserted, allowing users to add applications such as games, memory such as flash memory, and even peripherals such as keyboards to the device. 
**Expansion Card:** A postage stamp-sized add-on memory that can be inserted into the expansion slot of a Palm device. Expansion cards can contain applications, songs, videos, pictures, and other information in a digital format. There are three types: MultiMediaCard (MMC), SD (Secure Digital), and SDIO (Secure Digital Input/Output). **Flash Memory:** Non-volatile memory that can be electrically erased and reprogrammed. It can be either non-removable, such as a built-in flash drive, or removable, such as a USB flash drive for portable data. Its capacity is usually between a few hundred kilobytes and a few gigabytes. **Flexible Screens:** The company Polymer Vision has developed a new rollable display technology that enables a screen to be rolled up into a pen-sized case. The rollable screen is equipped with various connectivity options, including Bluetooth. **Fuel Cells:** A device that turns fuel such as hydrogen into electricity via a chemical reaction. It offers a greener method of generating electricity because no fossil fuels are used, no combustion takes place, and no pollutants are emitted. **GSM/GPRS (Global System for Mobile Communications/General Packet Radio Service):** A type of mobile phone network used throughout most of the world. GPRS-enabled networks offer "always-on," higher-capacity, Internet-based content and packet-based data services. This enables services such as color Internet browsing, email on the move, powerful visual communications, multimedia messages, and location-based services. Wireless service providers on a GSM/GPRS network include Cingular Wireless and T-Mobile in the USA and Rogers Wireless and Fido in Canada. **Handheld Computing:** The use of handheld devices such as smart cellular phones and PDAs (Personal Digital Assistants) to perform wireless, mobile, handheld operations such as personal data management and making phone calls.
**Hard Drives:** Storage devices containing a read-write mechanism plus magnetic disks inside a sealed unit. At the time of writing (in 2007), a hard drive for handheld devices is capable of storing up to a couple of gigabytes of data. This compares with the capacity of a hard drive on a desktop computer, which could be several hundred gigabytes. **HTML (HyperText Markup Language):** HTML, based on SGML (Standard Generalized Markup Language), is used for publishing hypertext on the World Wide Web. Tags such as `<p>` and `</p>` are used to structure and stylize text. Refer to W3C's HyperText Markup Language (HTML) home page at http://www.w3.org/MarkUp/ for more details. **Hyperlinks:** A selectable connection from one word, phrase, picture, or information object to another. By clicking on a hyperlink, a Web user can move easily from one Web page to another. Hyperlinks are most commonly indicated by a highlighted word, phrase, or picture. **i-mode:** i-mode (http://www.nttdocomo.com/corebiz/imode/index.html) is the full-color, always-on, packet-switched Internet service for cellular phone users offered by NTT DoCoMo. With i-mode, cellular phone users are able to access tens of thousands of Internet sites, as well as specialized services such as e-mail, online shopping and banking, ticket reservations, and restaurant advice. Users are charged based on the volume of data transmitted, rather than the amount of time spent connected. **Infrared Port:** The component on a handheld device that is used to transmit and receive data between compatible infrared-equipped devices via infrared signals. On some models the infrared port can also be used to communicate with a compatible, infrared-equipped mobile phone for connecting to the Internet. **Java ME (Java Micro Edition):** An environment for applications running on consumer devices, such as mobile phones, PDAs, and TV set-top boxes, as well as on a broad range of embedded devices.
Like its counterparts for the enterprise (J2EE), desktop (J2SE), and smart card (Java Card) environments, Java ME includes Java virtual machines and a set of standard Java APIs defined through the Java Community Process by expert groups whose members include device manufacturers, software vendors, and service providers. **JavaScript:** The *de facto* standard language used to add dynamic behavior to HTML documents. JavaScript is one of the originating technologies of ECMAScript. **JDBC (Java Database Connectivity):** A standardized database interface for Java applications accessing database data. It allows a single database application to run on different platforms and interact with different SQL-compliant DBMSs. **LAN (Local Area Network):** A computer network that provides data communications to users located in a limited area such as a building or city block. **Linux:** A free, open-source, Unix-type operating system. Linux and other Linux-like OSs are developed based on the Linux kernel. **Lithium-Ion Batteries:** Replaceable, rechargeable lithium-ion batteries are the batteries most commonly used by handheld devices. Although expensive when they were first introduced, their prices have since dropped rapidly. **Micro PCs:** Mini versions of PCs that differ from handheld devices because they apply a different technology. For example, Windows OSes are for PCs and micro PCs but not for handheld devices, whereas Windows Mobile OSes are for handheld devices but not for PCs and micro PCs. **Micro/Macro-Payment:** A mobile payment of approximately $10 or less (often for mobile content such as video downloads or gaming) is called a micro-payment, while a macro-payment refers to a larger-value payment. **Microbrowsers:** Miniaturized versions of desktop browsers such as Netscape Navigator or Internet Explorer.
Due to the limited resources of handheld devices, microbrowsers differ from traditional desktop browsers in the following ways: (i) smaller windows, (ii) smaller footprints, and (iii) fewer functions and multimedia features. **Microsoft ASP.NET:** Free technology that allows programmers to create dynamic web applications. ASP.NET is part of Microsoft's .NET platform and is the successor to ASP technology. **Microsoft Expression Web:** A design tool used to create sophisticated standards-based web sites. It combines both FrontPage and Visual Studio technologies in a new user interface for creating XHTML, CSS, XML, XSLT, and ASP.NET 2.0. **Microsoft SharePoint Designer:** Enables information workers to develop applications and solutions on top of the SharePoint platform, enabling organizational agility and business process automation while benefiting from the use of Microsoft Office applications on the SharePoint platform. **Microsoft Visual Studio:** Microsoft's flagship software development product, Visual Studio lets programmers create standalone applications, web sites, web applications, and web services that run on any platform supported by Microsoft's .NET Framework. **Mobile Commerce Security:** The technological and managerial procedures applied to mobile commerce to provide security services for mobile commerce information and systems. **Mobile Commerce:** The exchange or buying and selling of commodities, services, or information, or the transfer of funds, on the Internet (wired or wireless) through the use of Internet-enabled mobile handheld devices. **Mobile Handheld Devices:** Small general-purpose, programmable, battery-powered computers that are small enough to be held in one's hand. There are two major kinds of handheld devices: (i) (voice-centric) smart cellular phones and (ii) (data-centric) PDAs (Personal Digital Assistants).
They differ from desktop or notebook computers in several important ways, namely their (i) low network bandwidth, (ii) small screen/body size, and (iii) mobility. **Mobile Payment:** The transfer of financial value and corresponding services or items between different participants in mobile commerce systems. **Navigator:** The two large, concentric buttons in the middle of the lower part of many Palm devices. Pressing the edges moves the cursor up, down, right, or left, and the center button selects items. The navigator (sometimes referred to as the five-way navigator) works differently in different applications, but is usually an easy way to operate the device with one hand. **NetBeans:** An open-source IDE (Integrated Development Environment) used to create professional cross-platform desktop, enterprise, web, and mobile applications. NetBeans is sponsored by Sun Microsystems, Inc. **OMA (Open Mobile Alliance):** A large interest group of mobile operators, device and network suppliers, information technology companies, and content and service providers, founded by consolidating the WAP Forum, Location Interoperability Forum (LIF), SyncML Initiative, MMS-IOP (Multimedia Messaging Interoperability Process), Wireless Village, Mobile Gaming Interoperability Forum (MGIF), and the Mobile Wireless Internet Forum (MWIF) (http://www.openmobilealliance.com/). The OMA develops market-driven, interoperable mobile service enabler specifications, promoting the creation of interoperable services across countries, operators, and mobile terminals that will meet mobile users' needs. **Palm Operating Systems:** An ARM-compliant, 32-bit operating system running on handheld devices, developed by PalmSource Inc. Currently, there are two versions of Palm OS: (i) Palm OS Garnet, an enhanced version of Palm OS 5, and (ii) Palm OS Cobalt, the next generation of Palm OS.
**Palm OS Developer Suite:** The official development environment and tool chain from PalmSource Inc., intended for software developers at all levels. It offers a complete IDE (Integrated Development Environment) for (i) Protein applications (all ARM®-native code) for Palm OS Cobalt and (ii) 68K applications for all shipping versions of the Palm OS. **Persistent Storage:** Persistent (non-volatile) storage is a storage medium whose contents are preserved even when the power is off. Flash memory, a kind of persistent storage, is usually used for handheld devices. Hard disks are seen in some devices, but rarely. **Public Key Encryption:** A technique that uses two different keys (a private key and a public key) for encryption. Messages encrypted with the private key can be decrypted with the public key, giving the owner of the private key a way to prove their identity to anyone who knows the public key. Messages encrypted with the public key can be decrypted with the private key, giving anyone who knows the public key a way of sending a confidential message to the owner of the private key. **QWERTY:** The standard layout for letter keys on computer, handheld, and smartphone keyboards used to type in English, modeled after the typewriter and named for the order of the first six keys on the top row. **RAM (Random Access Memory):** Stores digital information temporarily and can be changed by the user. It constitutes the basic storage element in computer terminals and has replaced magnetic core memories in mainframe computers. **RISC (Reduced Instruction Set Computing):** Used in devices where the number of instructions a microprocessor runs for a specific application is reduced from that of a general-purpose Complex Instruction Set Computing (CISC) device in order to create a more efficient operating system. **ROM (Read Only Memory):** Stores information used repeatedly, such as tables of data, characters for electronic displays, etc. Unlike RAM, ROM cannot be altered.
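The public-/private-key relationship described under Public Key Encryption can be illustrated with a textbook toy RSA example (tiny primes, thoroughly insecure, purely illustrative; requires Python 3.8+ for the modular-inverse form of `pow`):

```python
# Toy RSA with textbook-sized numbers -- insecure, for illustration only.
p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

def encrypt(m):
    """Anyone holding the public key (n, e) can encrypt."""
    return pow(m, e, n)

def decrypt(c):
    """Only the private-key holder (n, d) can decrypt."""
    return pow(c, d, n)

m = 65
print(encrypt(m))                # 2790 (the classic textbook example)
assert decrypt(encrypt(m)) == m  # round trip recovers the message
```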
**Ruby on Rails (RoR):** A full-stack framework for developing database-backed web applications using the Model-View-Controller pattern. **Server-Side Handheld Programming:** The use of handheld devices to perform wireless, mobile, handheld operations that require additional input from a server. Typical applications of this type of computing include (a) instant messages, (b) mobile Web contents, (c) online video games, and (d) wireless telephony. **SET:** The Secure Electronic Transaction (SET) protocol is a technical standard designed to provide security for payment transactions among cardholders, merchants, payment gateways, and certification authorities operating in the Internet environment. **Smart Card:** A plastic card that uses a silicon chip as its storage mechanism, rather than the conventional magnetic strip, for increased security and functionality. **SQL (Structured Query Language):** A standard interactive programming language for accessing, manipulating, and administering a database. It is both an ANSI and an ISO standard, and most relational database products support SQL. Its commands include selection, insertion, update, deletion, finding out the location of data, and so forth. **Stylus:** An input device used to write text or draw lines on a surface as input to a handheld device. **Subscriber Identity Module (SIM):** A device in the mobile handset that contains the subscriber's authentication information, such as cryptographic keys, and a unique identifier called the international mobile subscriber identity (IMSI). **Sun Java Studio:** A development platform with features such as UML modeling, instant collaboration, and application profiling. Java Studio is used to develop, debug, tune, and deploy enterprise applications, web services, and portal components based on the Java EE platform. **Symbian:** A software licensing company that develops and supplies an operating system, Symbian OS, for data-enabled mobile phones.
It is an independent, for-profit company whose mission is to establish Symbian OS as the world standard for mobile digital data systems, primarily for use in cellular telecoms.

**Symmetric Encryption:** A technique that uses one secret key for both encryption and decryption. It is used to support confidential exchanges of messages between two parties who share knowledge of the common key.

**Synchronization:** Connects handheld devices to desktop computers, notebooks, and peripherals in order to transfer or synchronize data. In addition to serial cables, many handheld devices can use either an infrared (IR) port or Bluetooth technology to send information to other devices.

**Tablet PCs:** Miniature versions of PCs; they differ from handheld devices in the technologies they use. For example, Windows OSs are for PCs and tablet PCs but not for handheld devices, whereas Windows Mobile OSs are for handheld devices but not for PCs and tablet PCs.

**TDMA (Time Division Multiple Access):** A digital transmission technology that operates in either the 800MHz or 1900MHz frequency bands. The frequency band is split into a number of channels, so that several calls or data streams can share a single channel without interfering with one another.

**Universal Connector:** A USB-enabled communications port and hardware attachment system on computers and some handheld devices that is used to connect to accessories like cables, cradles, and keyboards.

**URL (Uniform Resource Locator):** The address of an object accessible on the Internet. The object could be an HTML document, a text file, an image file, a program such as a common gateway interface application, etc. URLs are mainly used in HTML documents to specify the target of a hyperlink.

**USB (Universal Serial Bus):** A standardized communications protocol that enables data exchange between electronic devices.
It is often used to enable communications between handheld devices and computers for synchronization. USB supports data transfer rates of up to 12 megabits per second.

**User Navigation Patterns:** Interesting usage patterns that are contained in Web log data. Most algorithms use a method of sequential pattern generation, while the remaining methods tend to be rather ad hoc.

**W3C (World Wide Web Consortium):** An international industry consortium created in October 1994 to develop open, unifying standards, such as XML and the semantic Web, for the World Wide Web in order to avoid duplicate, competing standards (http://www.w3c.org/).

**WAN (Wide Area Network):** A computer network that provides data communications to more users than a local area network (LAN) can, and usually spans a greater geographical distance, such as a town or country.

**WAP (Wireless Application Protocol):** A suite of open, global specifications developed by the WAP Forum. It is used for developing applications that operate over wireless communication networks, and allows users with mobile handheld devices to easily access and interact with information and services instantly. The detailed specifications can be found at http://www.openmobilealliance.org/tech/affiliates/wap/wapindex.html.

**WAP Forum:** The official organization that developed the WAP (Wireless Application Protocol) (http://www.wapforum.org/). It has since merged with the Open Mobile Alliance (OMA) (http://www.openmobilealliance.org/) and no longer exists as an independent organization.

**WAP Gap:** A known security weakness in the Wireless Application Protocol (WAP) caused by the inclusion of the WAP gateway in a security session, such that encrypted messages sent by end systems might temporarily become clear text on the WAP gateway when messages are processed.

**Web Logs:** Records of activity information captured when a Web user submits a request to a Web server.
A log file can be located in three different places: (i) Web servers, (ii) Web proxy servers, and (iii) client browsers.

**Web Proxy Servers:** An intermediary that takes HTTP requests from users and passes them to a Web server; the proxy server then returns to users the results passed back to it by the Web server.

**Web Server:** Application software that uses the HyperText Transfer Protocol. Its major task is to serve the requests of clients (browsers) by returning Web pages.

**WEP (Wired Equivalent Privacy):** A data link-level protocol that provides security for the IEEE 802.11 WLAN standards. The encryption algorithm used in WEP is a stream cipher based on RC4.

**Wi-Fi:** Wi-Fi, or 802.11x, is a family of wireless networking protocols, with 802.11b being the most common at present. It operates in an unlicensed radio frequency band at 2.4 GHz and provides data access at 11 Mbps.

**Windows Mobile:** A compact operating system for handheld devices based on the Microsoft Win32 API. It is powered by Windows CE 5.x and uses the .NET Compact Framework 2.0. Windows Mobile includes three major kinds of software: (i) Windows Mobile-based Pocket PCs, (ii) Windows Mobile-based Portable Media Centers, and (iii) Windows Mobile-based Smartphones.

**WML (Wireless Markup Language):** A markup language that formats and stylizes Web pages for display on mobile handheld devices such as smartphones and PDAs. WML is part of the Wireless Application Protocol (WAP) and is based on XML (eXtensible Markup Language). Refer to Wireless Markup Language (WML), Version 2.0 at http://www.openmobilealliance.org/tech/affiliates/wap/wap-238-wml-20010911-a.pdf for more details.

**XHTML (eXtensible HyperText Markup Language):** Reproduces, subsets, and extends HTML 4.0, which is used to publish hypertext on the World Wide Web, and has been reformulated in XML 1.0. Refer to the W3C's HyperText Markup Language (HTML) Home Page at http://www.w3.org/MarkUp/ for more details.
**XML (eXtensible Markup Language):** Originally developed by the W3C, this is effectively a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations. Refer to Extensible Markup Language (XML) at http://www.w3.org/XML/ for more details.

**Zend Core:** The production PHP 5 stack that provides the certified, enhanced capabilities, support, and services that professionals need for PHP development and production.
## 1 Introduction

### 1.1 The game

We designed an AI to play The Wikipedia Game, a computer game played on wikipedia.org. In short, players start on a random Wikipedia article and then try to traverse the site, using only the hyperlinks in the articles, to reach some common destination article. The game can be played by humans either for speed or for fewest links used, but a key component of a human's strategy is that they never backtrack (i.e., their algorithm is entirely greedy). The input to our AI is therefore a random Wikipedia article, and the output is a path from there to the destination article, which we chose to fix at the 'Stanford University' page. Our AI navigates from the start article to the goal attempting to minimize both link-distance and time, but also attempting to minimize the number of states viewed, both to emulate the human player and because, in a real application to the web, each article would take non-trivial time to download and view.

### 1.2 Classes

This project incorporates a variety of concepts in artificial intelligence, game theory, and machine learning. Because all of the elements are used in concert and interrelated, the distinction between the subjects is difficult to make. In general, the search problem and modelling apply more specifically to the subject matter of CS 221, and the rest of the machine learning (i.e., the various ML algorithms, K-Means, and PCA) is geared toward CS 229.

### 1.3 Related work

While we were unable to find any academic articles directly investigating the link traversal which forms the bulk of our project, many groups have applied machine learning techniques to explore the structure and semantics of Wikipedia. Much of the previous research focuses on text and document categorization, and to that end various techniques have been implemented to group Wikipedia articles.
Banerjee, Somnath, et al. [1], along with a number of other groups, have attempted to create "concepts" from the articles to provide more descriptive features for them. The most successful approaches, such as those of Milne, David, et al. [2] and Yazdani, Majid, et al. [3], heavily utilized the links in each article for classification, which boded well for our link-based approach to exploring the structure of Wikipedia. Hu, Jian, et al. [4] introduced a representation of the hierarchical nature of Wikipedia which improved their text classification, in particular differentiating category and link based articles from the rest. Our technique employs unsupervised learning to infer such relationships and build features representing these broader articles to augment our base bag-of-words model. Intuitively, such higher-level articles should be useful in linking diverse concepts and thus providing minimal-path solutions in the game. Many previous papers have investigated various natural language processing methods to optimally featurize and then cluster Wikipedia pages in a manner similar to our unsupervised learning. Hu, Xia, et al. [5] proposed the use of the Porter Stemmer to most effectively stem features, a technique we inherited, but also suggested the removal of date strings as features. For our specific application, date articles turned out to be very common on minimal-path solutions, so we decided to retain them in our feature set.

## 2 Data and features

### 2.1 Dataset

The first thing we did to collect data was extensive human trials. We observed enough plays through the game to establish an oracle for human ability with confidence. A comparison of these results with results from various implementations of our AI can be found in Table 2. Our data was all downloaded directly from the Simple English version of Wikipedia, which can be found at simple.wikipedia.org. This website, like all Wikimedia websites, provides a free .xml dump of the entire site.
We downloaded this single file and parsed it into a series of pages, each with its article text attached. We then extracted all of the hyperlink information from the pages and used it to form a directed graph, with each page represented by a node and each link from one page to another as an edge in that direction. With this exceedingly simple representation we were able to implement basic graph search algorithms such as Uniform Cost Search (UCS). However, the more interesting problem began when we introduced a model to estimate the link-distance between two articles. To do this, we first had to generate training sets to train the models and testing sets to test them. We did this by running the simple (but slow and costly) exhaustive UCS algorithm to determine the actual minimum link distance \( d \) between a random given start page \( p_{\text{start}} \) and our \( p_{\text{goal}} \), 'Stanford University'. This allowed us to generate a virtually infinite amount of training and testing data in pairs \( (p_{\text{start}}, d) \) representing the input and the desired output of our predictor. Essentially, this created an arbitrary dataset \( S \equiv \{(x^{(i)}, y^{(i)})\} \) for \( i = 1, \ldots, m \), where \( x^{(i)} \) is a random article and \( y^{(i)} \) is the distance between that article and the goal.

### 2.2 Feature extraction

To extract meaningful data about each webpage for the ML algorithms, we had to featurize each article. We used a feature extractor \( \phi(\cdot) \) which could operate on any article \( x \). The page was modelled as a bag of words (i.e., we assumed that word order was irrelevant to our problem). We therefore implemented the feature extractor using the nltk package to tokenize and stem each word in the page, so that any pages with similar concepts might have features in common. We ignored commonly used words by removing any of nltk's stop words from the feature set.
Furthermore, we added features for the number of words and the number of links in a page. Therefore \( \phi(x^{(i)}) \) essentially created a sparse feature vector containing only the words in the page and a modicum of metadata about it.

## 3 Supervised Learning

### 3.1 Search

Initially we implemented a state-based model to frame the problem. Essentially, the goal of the algorithm is to find a path from the start article to the goal article via a series of other articles. The state-based model is simple: each state is based on an article and must store all of the information that the article does. This is the only value intrinsic to a state, so it is the sole component of a state. The actions which can be taken from a given state are to transfer to a state representing any article that the given state links to. Taking such an action transfers the search to the corresponding state, and the cost of the transition is uniformly 1, because each transition counts as one link used. Finally, a state is the goal state if the article it refers to is the goal article.

### 3.2 Machine Learning

This model was sufficient to use simple state-based search algorithms, namely Depth-First Search (DFS), Breadth-First Search (BFS), and UCS. These algorithms (particularly UCS, unsurprisingly) performed well and found the minimal path within a reasonable amount of time. However, we were interested in trading the guarantee of the minimal path that UCS affords for a decrease in the number of states that the algorithm has to explore. The algorithms fell short of our goal because although they all found the minimum path between \( p_{\text{start}} \) and \( p_{\text{goal}} \), in each case the number of states the algorithm had to explore was still very high. To remedy this, we applied a heuristic to UCS (thereby making it \( A^* \)) which could estimate the distance from one article to another.
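The state-based search just described can be sketched as a uniform-cost search over the parsed link graph. The `graph` dict and article titles below are toy stand-ins for the real dataset, not the actual dump:

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search over a directed link graph.

    graph maps each article title to the titles it links to; every
    transition costs 1, matching the state-based model above.
    Returns (cost, path), or (float('inf'), None) if no path exists.
    """
    frontier = [(0, start, [start])]          # (cost so far, article, path)
    explored = set()
    while frontier:
        cost, page, path = heapq.heappop(frontier)
        if page == goal:
            return cost, path
        if page in explored:
            continue
        explored.add(page)
        for nxt in graph.get(page, []):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return float("inf"), None                 # goal unreachable

# Toy graph standing in for the parsed Simple English Wikipedia dump
graph = {
    "Physics": ["Science", "Isaac Newton"],
    "Isaac Newton": ["England", "Stanford University"],
    "Science": ["Stanford University"],
}
cost, path = ucs(graph, "Physics", "Stanford University")  # cost == 2
```

With a learned estimate of the remaining distance added to each state's priority, this same loop becomes the \( A^* \) search whose heuristic is discussed next.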
This strategy was roughly based on our own as humans: when we play the Wikipedia Game, we follow a greedy algorithm, clicking on one page and never looking back. This means that we almost never find the shortest path (only an approximate one, on the order of the minimal path), but it also means that we are much more efficient and don't have to explore as many states. All of this is only possible because humans are able to incorporate knowledge of the content of the articles into their decisions about which link to follow. To implement this heuristic, we made the assumption that the link-distance between two articles could be modelled directly, and we implemented ML algorithms to find it. As discussed above, we used a series of training examples with the input of a page \( x^{(i)} \), a bag-of-words feature extractor \( \phi(\cdot) \), and the actual minimum distance from that page to the goal \( y^{(i)} \) to train different algorithms. It is important to note that the output of our algorithms \( y \) is not binary; this is a multiclass application of ML. In principle, \( y \) can assume any value \( 1, \ldots, l \) (where \( l \) is the maximum minimum path length from any article to the goal) or the value \( \infty \), representing the case where no path exists whatsoever. In practice, the range of values \( y \) could assume was limited to the range of values assumed by the training examples \( y^{(i)} \). Usually, this range was about \([1:8]\) plus \( \infty \). One common strategy to handle multiclass classification is One-vs-Rest (OVR) classification. This involves training any ML algorithm as usual, but doing so once for each of the different possible outputs. For each separate model, only the training examples pertaining to that particular outcome are marked as positive. Each model is then able to generate a score for its outcome on a new example \( x \).
Whichever of the models gives the highest score to that example (implying that its outcome is, in some sense, the most likely result according to the model) predicts the overall result to be the corresponding outcome. We used the sklearn library to implement OVR Logistic Regression; Stochastic Gradient Descent (SGD) with Hinge Loss and with Perceptron Loss; and a Support Vector Machine (SVM) with Squared Hinge Loss. None of these was the most effective algorithm, so we will not describe them here. See Table 1 for details. Given a random set of 1000 training examples and a random test set of 200 examples, Multinomial Logistic Regression was able to consistently outperform all of the above methods. Multinomial Logistic Regression actually operates in a similar fashion to OVR, but the output of each of the classifiers is a probability that that classifier's outcome is the correct one. Normalizing over the results produces a probability distribution for the outcome, and selecting the one with the highest probability allows the algorithm to select a particular value. Each of the individual classifiers in the model is implemented as a linear function of the features of the example $x$ (note that $x_0 \equiv 1$ for the intercept term). The classifier's score therefore varies with $w^\top \phi(x)$, where $w$ is the weight vector learned in the training phase. Specifically, the classifier's score is $$h_w(x) \equiv S(x) = \frac{1}{1 + e^{-w^\top x}}.$$ Typically, the parameter $w$ is taken to be the maximum likelihood estimate $$w = \arg\max_w L(w) = \arg\max_w p(\vec{y} \mid X; w) = \arg\max_w \prod_{i=1}^m p(y^{(i)} \mid x^{(i)}; w),$$ which can be found by stochastic gradient ascent via the update rule $$w_j := w_j + \alpha \left( y^{(i)} - h_w(x^{(i)}) \right) x^{(i)}_j.$$ When run to convergence, this algorithm produced a relatively powerful classifier for our data.
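A minimal sketch of such a multinomial classifier with scikit-learn, on synthetic data standing in for the \( (\phi(x^{(i)}), y^{(i)}) \) pairs (the real features were sparse bag-of-words vectors; with the default lbfgs solver, `LogisticRegression` fits a multinomial softmax model over the classes):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the UCS-generated dataset: dense 5-d feature
# vectors whose location encodes the true link distance (classes 1..3;
# the real label set also included "infinity" for unreachable articles).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 5)) for c in (1, 2, 3)])
y = np.repeat([1, 2, 3], 30)

clf = LogisticRegression(max_iter=1000)   # multinomial softmax for >2 classes
clf.fit(X, y)

# predict_proba exposes the normalized distribution over distances that an
# A*-style heuristic can consume; predict returns the most probable distance.
probs = clf.predict_proba(X[:1])[0]
pred = int(clf.predict(X[:1])[0])
```

In the report's setting, `pred` plays the role of the estimated link-distance from an article to \( p_{\text{goal}} \).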
## 4 Unsupervised Learning

### 4.1 K-Means

One method we used to improve our algorithm and to gain intuition for our feature space was K-Means clustering, which we used to develop clusters of pages under our feature extractor. Our theory was inspired by human players, who frequently maintained the initial goal of getting to articles on the topic of the goal, then focused on moving from there to the real goal. The intuition was that if the machine could also take this topical context into account, the algorithm would improve. Clustering the data allowed us to add another feature to the feature set: specifically, the identity of the cluster each page was in. K-Means is run in two steps: the assignment step, where each page is assigned to a centroid, mathematically represented by $$c^{(i)} := \arg\min_j \|x^{(i)} - \mu_j\|^2;$$ and the recentering step, where each centroid is moved to the average of its constituents' positions via $$\mu_j := \frac{\sum_{i=1}^m 1\{c^{(i)} = j\} x^{(i)}}{\sum_{i=1}^m 1\{c^{(i)} = j\}}.$$

We also projected the feature space into two dimensions with Principal Component Analysis (PCA) to visualize its structure. For instance, PCA showed us that date articles (e.g., "2007" or "12 January"), which intuitively connect sporadic ideas, proved to be a very common route to the goal for the minimum path. It can be seen in Figure 2 that certain nodes are disproportionately more valuable than others (represented by the large blue circles); these are primarily date and list nodes. Upon further investigation, we found that these types of articles are outliers in the 2D space (see Figure 3).

Figure 3: 2D PCA with Dates Highlighted

The fact that they are outliers suggests that something is different about the structure of these pages. Specifically, list and date articles have a lot of links to other pages and shockingly little text. These two types of articles, along with some of Wikipedia's administrative articles, comprised the vast majority of the outliers in the 2D PCA space.
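The assignment and recentering steps from Section 4.1 translate almost directly into NumPy; the toy 2-D blobs below stand in for the high-dimensional bag-of-words vectors:

```python
import numpy as np

def kmeans(X, k, init, iters=20):
    """Plain K-Means with explicit initial centroids."""
    mu = init.astype(float).copy()
    for _ in range(iters):
        # Assignment step: c_i = argmin_j ||x_i - mu_j||^2
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        c = d2.argmin(axis=1)
        # Recentering step: mu_j = mean of the points assigned to cluster j
        for j in range(k):
            if (c == j).any():
                mu[j] = X[c == j].mean(axis=0)
    return c, mu

# Two well-separated toy blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (10, 2)), rng.normal(5.0, 0.2, (10, 2))])
c, mu = kmeans(X, k=2, init=X[[0, 10]])   # seed one centroid in each blob

# In the report's pipeline, each page's cluster id c_i was then appended
# to phi(x_i) as an extra categorical feature.
```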
## 5 Results

We found with high confidence that from 35% of randomly generated start articles, 'Stanford University' could not be reached at all. For the other 65% of start articles, there did exist a path from \( p_{\text{start}} \) to \( p_{\text{goal}} \); on average the empirically minimal path had a cost (i.e., length) of 4.7 transitions. The really interesting results had to do with the number of states explored under various state search algorithms, which can be seen in Table 2. BFS and DFS both, by definition, explored all 210,000 states per solution. The time complexity of UCS depends inherently on the structure and convexity of the search problem. Without a heuristic for the future cost of a path (i.e., the cost from \( p_{\text{curr}} \) to \( p_{\text{goal}} \)), the transition costs for the problem are uniform. This structure allowed UCS to achieve a much better result than DFS or BFS; when UCS was used to solve the problem it found a minimal path after exploring an average of 19,000 states, a speedup of more than 1,000%. All of these algorithms are guaranteed to find a minimal-cost path when one exists. To improve UCS, we implemented ML algorithms to predict the future cost of a path. Specifically, we tried the methods in Table 1.

| Model | Accuracy | Distance |
| --- | --- | --- |
| Logistic Regression | 55% | .54 |
| SGD with Hinge Loss | 21% | 1.2 |
| SGD with Perceptron Loss | 5.8% | 1.8 |
| SVM with Squared Hinge Loss | 44% | .64 |
| Multinomial Logistic Regression | 59% | .50 |

Table 1: Classifier performance. All but the last algorithm are OVR. The 'Distance' column represents the average distance between the predicted value of \( y \) and the actual one.
It is clear that Multinomial Logistic Regression yielded both the best accuracy and the lowest distance, so that is what we implemented for the algorithm. Finally, consider the performance of \( A^* \), implemented with Multinomial Logistic Regression as a heuristic. It did not always find a minimum-cost path, because the heuristic we used was not consistent: the heuristic was incorrect less than 40% of the time on average (by an average distance of 1.2), and sometimes this error resulted in an overestimate of the actual distance. Overestimating distance contradicts the definition of consistency for heuristics, which meant that \( A^* \) achieved an average path length of 5.2, 11% more than the minimum. Furthermore, it took \( A^* \) more than 25x as long to accomplish this result because it had to run the predictor on each of the states it visited. \( A^* \) did, however, drastically improve in our desideratum, the number of states explored. The heuristic allowed the algorithm to advance to \( p_{\text{goal}} \) while examining an average of only 510 states, which is a massive improvement even over UCS. Specifically, it is roughly a thirty-seven-fold improvement over UCS and a four-hundred-fold improvement over the naive algorithms. This result was exactly as we had expected.

| Search Type | Path Length | States Explored | Time |
| --- | --- | --- | --- |
| Human | 7.8 | 7.8 | 190 s |
| DFS | 4.7 | 210,000 | 5.2 s |
| BFS | 4.7 | 210,000 | 4.6 s |
| UCS | 4.7 | 19,000 | .45 s |
| \( A^* \) | 5.2 | 510 | 12 s |

Table 2: Search performance compared with the human oracle.

## 6 Discussion

Comparing these results to our extensive human trials, we see that \( A^* \) can still do the problem 15x faster than humans can.
Furthermore, $A^*$'s average path length is within striking distance of the minimal one; humans' is nearly twice the minimum. However, the massive advantage that humans obtain in the game from their ability to actually understand the articles and the context allows them to perform a greedy search and explore exactly as many states as their selected path. Humans still explore only 1.5% of the number of states that $A^*$ does (and 0.037% of the number UCS does). In a context without the whole of Wikipedia downloaded (i.e., with the introduction of network latency, a fact of life on the internet), humans could still conceivably beat $A^*$ at the game. Nonetheless, the strides which $A^*$ exhibits demonstrate massive improvement over any non-heuristic-based algorithm, which shows that our implementation really has learned something about the goal article. One of the most impressive aspects of this learning is that it also demonstrates serious latent structure in the data (structure which it learned). The classification is incorrect by an average of only 1.2 for the incorrectly classified examples (i.e., the output is off by two or more only about 10% of the time overall). This would not be possible if adjacent categories were not inherently very similar.

## 7 Future Work

Some avenues for continued improvement stand out. Firstly, our Multinomial Logistic Regression classifier could be improved. Our solution is almost certainly overfit to the data (i.e., we have high variance) because the bag-of-words model usually errs in that direction. That model has so many features, and the training set is such a small portion of the space even in our context, that the classifier is extremely likely to overfit to certain words. Further investigation is required, but a simple path to improving the accuracy of the classifier might be to remove rare words from the feature space or to train on more of the data. In this vein, implementing other or more complicated ML algorithms might also yield better results.
Because the training set is virtually infinite and the output $y$ is multiclass, this might very well be a good space in which to apply a neural network. If the neural net could classify and predict the distance even better, we could move even further toward a perfect heuristic for $A^*$. Once we settle on an optimal ML algorithm, it would be interesting to vary the weighting of the heuristic in $A^*$. Although this will likely wreak havoc with the consistency of the estimate, it should produce interesting results: $A^*$ will likely be much less minimal in its path-finding, but could potentially also explore fewer states. Examining this tradeoff between minimality and states explored is the central theme of our problem. If the heuristic is good enough (especially among low-distance states), it is entirely possible that an implementation of weighted DFS using the heuristic to calculate edge weights might produce results similar to the ones achieved by a human's greedy search.

8 References
Use Cases

This is the responsibility of the Configuration Profiles Team and Communications Team. This is an attempt to make a grid of major use cases and major user types. Ex.: Enterprise Collaboration, Publishing for non-profits, etc. Also related: Tiki Suite use cases

This page is still in draft. Once it stabilizes, it will probably be split into smaller chunks. When it is ready, it will be publicized. Please help improve. Perhaps it should be moved to http://profiles.tiki.org? This list needs to be split into what Tiki is used for and by whom

- Introduction
- Who is reading?
- I am not a techy
  - 1.1.1. Writer / Content manager / Translator / Marketing
  - 1.1.2. Graphic artist / theme designer
  - 1.1.3. Power-user / Information architect
  - 1.1.4. Entrepreneur / Management / Lawyer
- I am techy
  - 1.1.1. Programmer / coder
  - 1.1.2. Sysadmin / Web host
  - 1.1.3. IT manager
- What type of organization?
- Small and medium-sized business (SMB)
- Enterprise / Corporate
- Government
- Education / Research
- Association / Non-Profit / Co-ops / NGO / Special Interest Group / Club / Foundation
- Personal / Family / Freelancer
- Web agency / Designer / Consultant
- Hosting company / Tech services
- International / Multilingual / Geo CMS
- How will the system be used / what is the project?
- Web Publishing / News / Blog / Personal site / family site / Portal / news site
- E-commerce / Shopping cart / Finance / Membership
- Collaboration / Project Management / Intranet / Team project / Enterprise Wiki / B2B or internal
- E-learning / Learning Management System (LMS) / classroom / course management system (CMS) / Coaching communities
- Social Networking / Community / Fan Site / E-democracy

Introduction

Tiki is the FLOSS Web Application with the most built-in features and is in general one of the most diverse / feature-full pieces of software in the world.
This is great when you need it (of course!), but it makes it difficult for
- marketing -> "How do you market something that does everything?"
- the User/Admin experience -> "It does everything, nice, but I just want X. How do I do it?"

Tiki is a Wiki / Content Management System (CMS) / Groupware. These are broad and vague terms.
- Wikis can be used for so many things. A public encyclopedia is very different from a corporate wiki.
- Groupware can be defined as "software designed to help people involved in a common task achieve their goals". This is very vague.
- Content Management System (CMS) is an especially vague term: a "system to manage content". Just about anything can fit that definition! A computer and a filing cabinet are a CMS.

Tiki enthusiasts will tell you that Tiki is a jack-of-all-trades, everything-but-the-kitchen-sink, all-in-one tool which lets you do just about "anything". It could probably power over 90% of the websites in the world (let's face it, most sites are pretty simple!). But of course, you don't want to do "anything". You have a specific objective/community/project in mind. This page will describe in more detail the different ways to use Tiki. With Tiki, you can pick & choose which features are turned on. Chances are, your project will be a combination of a few 'Use Cases'. So you can use the combined features of all appropriate use cases.

Why cater to so many use cases? A bit like speaking many languages from the same root (French, Spanish, Portuguese, Italian, Catalan, etc.), once you know 3 or 4, the 5th and 6th are easier to learn because you already have many bases. Some words are the same, or similar enough that you can get by. So each use case becomes easier and easier.

Similar to the list of features, ratings (A, B, C, D, E) are meant to compare Tiki as an overall solution for these use cases against "industry standards" in the Open Source world. So "A" means that Tiki is, for this use case, just as good as any other Open Source project.
"A" does not mean "perfect" or that it has all the features. Some features may be weak but compensated for by other strong features. The ratings are based on the latest stable version of Tiki. If you disagree with a rating, please change it and edit the Strengths/Weaknesses sections accordingly.

**ToDo**
- Review every feature of Tiki and put the most relevant ones below in the top 1 (or max 2) relevant use cases.

1. These ratings need to be split, like in features:
   - Setup/admin UI: How easy / intuitive is it to set up this feature?
   - UI end user: Once set up, how easy / intuitive is it for the end-user? (was it worth it?)
   - Stability: How many bugs are left?
   - Feature-set / power: How does this feature compare to other Open Source apps?
2. Move to a SWOT format

**Please do add links to your Tiki-powered sites which fit a profile.**

Each Use Case will eventually need an installation profile to make it easy for people to get going when they install Tiki. Format:

```
- Use Case=- Description of Use case, which will eventually be a profile (who/what/etc)
* Description of type of user / user case (clarify what we are talking about)
- Key features=- Including links to the proper documentation page. This will make it easier to evaluate/understand.
- Strengths=-
* Specific benefits/strengths of adopting Tiki (why is Tiki great for them?)
- Weaknesses=- Weaknesses of Tiki / what needs to be improved
- Potential to contribute to Tiki=-
* Potential to contribute to Tiki (to help the Tiki community prioritize efforts)
** Examples of low potential to contribute to Tiki: microsites, or one-shot projects that will be shut down after a short period. Ex.: an event organization web site that won't be re-used the following year
- Examples & related links=-
```

Some good examples of live sites (Tiki or otherwise)

Who is reading?
So info.tiki.org should start out with

I am not a techy
- I am a manager
- I am a designer / Graphic artist
- I am a Power-user (this one is in both sections because some power-users consider themselves techy, some not)
- I am a Writer / Tech Writer / Content manager

1.1.1. Writer / Content manager / Translator / Marketing

**Me:** Content is king! I have info to get out and don't want to wait for a webmaster.

**Strengths:** Tiki has a full range of features to make a great publishing & collaboration platform, with plenty of interaction tools. Please also see: SEO

**Weaknesses:**

**Potential to contribute to Tiki:** Can help with *.tiki.org content and can also help with bug reports, testing, documentation, feature requirements, etc.

1.1.2. Graphic artist / theme designer

**Me:** I am really good with CSS & HTML but I am allergic to PHP

**Strengths:** Tiki has a very flexible template system. You can do an amazing quantity of things without ever changing a single line of PHP code. Thanks to the theme control center, you can assign a different theme to different parts of your site. The whole theme system was recently re-organized. Please see: themes.tiki.org

**Weaknesses:**

**Potential to contribute to Tiki:** Theme designers often make very specific themes for customers. These themes often cover only a small number of features. Result: their work is most often not contributable.

1.1.3. Power-user / Information architect

**Me:** I am really good at testing, configuring, finding bugs, etc., and I want a powerful system to organize things efficiently

**Strengths:** Tiki has arguably more features out of the box than any other web application. Configure your Tiki to your liking through a point & click GUI. **Tiki** has a ton of goodies for you. With categories, you can create a basic document workflow.

**Weaknesses:**

**Potential to contribute to Tiki:** Can help with bug reports, testing, documentation, feature requirements, etc.

1.1.4.
Entrepreneur / Management / Lawyer

Give me the executive summary

Strengths:
- LGPL. Tiki is free as in free speech and free beer.
- Tiki has 200+ different permissions you can set for any group. Groups can be included in groups (and inherit permissions). Just about any item in Tiki can have individual permissions which override global permissions. And, to make management easier, Tiki also offers category permissions.

Weaknesses:

**Potential to contribute to Tiki:** Can help find new uses and business models for Tiki, which, at least indirectly, lead to more resources working on Tiki (ex.: a SaaS project or web community with Tiki as a back-end)

I am techy

If the person clicks techy
- I am a sysadmin
- I am a coder
- I am a front-end developer
- I am a Power-user (this one is in both sections because some power-users consider themselves techy, some not)

1.1.1. Programmer / coder

**Me:** I want to code as little as possible and I want to be able to contribute my code or use APIs so I don't have to manage a local fork.

**Strengths:** Tiki is a full-fledged Wiki / CMS / Groupware. But it can also be used as a framework to build additional functionality. Tiki has a very open development philosophy. You can add features as long as you make them optional.

**Weaknesses:**

**Potential to contribute to Tiki:** New features, bug fixes, improve release process, etc.

1.1.2. Sysadmin / Web host

**Me:** I am worried about complex updates and security issues. Apt-get is my friend

**Strengths:** Why install 2, 3 or 4 Web apps when Tiki can handle these needs? One application to manage instead of inconsistent logins & permissions throughout different systems. Tiki offers TRIM, MultiTiki & InterTiki. Many authentication options (LDAP, CAS, etc.). Tiki runs on standard PHP/MySQL. You can easily export & reuse data if need be. See: Hosting Company

**Weaknesses:**

**Potential to contribute to Tiki:** Stability, Security, Uptime, Performance, Deployability, etc.

1.1.3.
IT manager

**Me:** I am looking for a good overall solution for a web app. I am planning ahead because now my staff is asking for a wiki, but I know that later they are gonna want more. I don't want to be limited down the road.

**Strengths:** Tiki is a very active project. Hundreds of people have contributed. On one side, it is mature and stable in its LTS versions; on the other, a lot of exciting new features are in development

**Weaknesses:**

**Potential to contribute to Tiki:** Can help with funding / planning of enhancements and stability/upkeep of LTS versions.

What type of organization?

Small and medium-sized business (SMB)

**Strengths:** The big feature-set and tight integration reduce the number of apps to deploy

**Weaknesses:**

**Potential to contribute to Tiki:** suite.tiki.org

Enterprise / Corporate

- Enterprises have different needs in terms of support and release cycles. **Tiki** has evolved over the years to offer a large number of benefits for the **Enterprise**
- Enterprises tend to use Tiki for fewer features, but more intensely.

**Strengths:** Tiki is an excellent tool for a corporate wiki, with all the tools built-in: LDAP, fine-grained permissions, trackers, etc. Tiki is licensed LGPL (while most projects of this type are GPL) so you can freely integrate it with your commercial applications.

Weaknesses:

Potential to contribute to Tiki: They save money, and thus can contribute. Typically a long-term commitment.

Related links:
- enterprise

Government

Strengths:

Weaknesses:

Potential to contribute to Tiki: Large deployments and long-term projects. Could prefer to invest public money in FOSS solutions, which are thus available to all.
Education / Research

For teachers & school administration

For students

Strengths:

Weaknesses:

Potential to contribute to Tiki: Translation Student Program

Association / Non-Profit / Co-ops / NGO / Special Interest Group / Club / Foundation

Can be an association of professionals

Strengths:

Weaknesses:

Potential to contribute to Tiki: Have a culture of volunteers and good causes. Can help with fundraising and volunteer testers. If you are a philanthropist and want to help, you can put your resources to good use, with a fantastic multiplying effect. Tiki is free as in free beer. Tiki got this far with a budget of 0, but money would help to cover travel expenses for contributors for TikiFests and increase overall quality.

Personal / Family / Freelancer

Strengths:

Weaknesses:

Potential to contribute to Tiki:

Web agency / Designer / Consultant

Us: We do all kinds of web projects. Mostly web sites but sometimes some Intranets

Strengths: With Tiki, you can manage several different types of projects, all with the same back-end. And it's easy to share back so you don't have to maintain your customizations.
- Tiki for Web agencies
- FLOSS Web Application with the most built-in features

Weaknesses:

Potential to contribute to Tiki: These agencies are the ones picking the tool, and can both increase the install base and contribute back enhancements.

Hosting company / Tech services

Us: We want applications which add value to our hosting services. And it has to be easy to maintain.

Strengths: Tiki is ideal! Please see: Hosting Company

Weaknesses:

Potential to contribute to Tiki:

Links:

International / Multilingual / Geo CMS

Features: i18n, maps

Strengths: The Tiki interface is at least partially translated into 40+ languages. Good back-end architecture & storage

Weaknesses:

Potential to contribute to Tiki: Translations

Links:
- http://wiki-translation.com
- http://profiles.tiki.org/Cartograf

How will the system be used / what is the project?
Web Publishing / News / Blog / Personal site / family site / Portal / news site

"One to Many" web communication

**Features:** Menu, Newsletter, Articles, Article Generator, Blog, Banners, Themes, Kaltura videos, Comments, Spam protection, WYSIWYG, Search engine optimization, Clean URLs, RSS, Categories, etc.

**Strengths:** Themable, Smarty, can natively handle multiple blogs

**Weaknesses:** Spam prevention, not enough nice built-in themes.

**Potential to contribute to Tiki:** Web shops that do brochure-ware websites typically take Tiki as-is, have very little to develop, and most often can't recontribute the custom theme. Since sites are pretty simple, and version history / collaboration is not important, they more often change technologies after a few years. Because of a custom theme, site upgrades are likely to be 2-4 years apart, along with a design refresh. LTS is useful for them.

**Links:**
- http://profiles.tiki.org/Personal_Blog_and_Profile_9x
- http://info.tiki.org
- http://www.cunningham-lee.com
- http://www.amette.eu
- Featured Tikis
- http://tiki.org/TikiProfilesSmallBiz

E-commerce / Shopping cart / Finance / Membership

Managing transactions

**Features:** Shopping Cart, Payment, Accounting, Credit, Membership, PluginTrade, Spreadsheet

**Strengths:**

**Weaknesses:**
- Various features are not integrated, no Shopping Cart profile
- Still quite basic. Suitable if your main site needs other Tiki features and you also need basic e-commerce.
- No tax handling

**Potential to contribute to Tiki:** These projects have an incentive to focus on security. Their business model may bring additional features and a better UI.

**Links:**
- PayPalSubscription
- shopping cart -> in the works!

**Collaboration / Project Management / Intranet / Team project / Enterprise Wiki / B2B or internal**

One or many defined teams focusing on goals.
**Features:** Workspaces, Wiki, Bug/task Tracker, Calendar, Chat, Watch, Permissions, Share, Categories, Search, LDAP authentication, File Gallery

**Strengths:**
- Very customizable and powerful
- Very promising in the wiki-based project management field

**Weaknesses:**
- Missing profile for typical use case

**Potential to contribute to Tiki:** These are typically long-term commitment projects; thus, they are interested in being around for a smooth upgrade.

**Links:**
- http://profiles.tiki.org/Company_Intranet_9x
- http://dev.tiki.org
- http://en.wikipedia.org/wiki/Project_management_software
- http://en.wikipedia.org/wiki/Project_management_2.0
- http://projectmanagementwiki.org
- project management
- Wikis evolve as collaboration tools
- Wiki Tools: Who's In The Game
- Wikis At Work
- What makes an enterprise wiki?
- Intranet wikis, by Michel Buffa

E-learning / Learning Management System (LMS) / classroom / course management system (CMS) / Coaching communities

One to many teaching, or many to many learning. Support coaching activities and education communities.

Features: Quizzes, BigBlueButton webinars, Slideshow and Screencast

Strengths:

Weaknesses:
- Not yet suitable for classrooms (as a Moodle alternative)
- Grading only available through mods

Potential to contribute to Tiki:

Links:
- Enterprise E-learning roadmap
- http://profiles.tiki.org/Learning_Center

Social Networking / Community / Fan Site / E-democracy

Many to many, organic groups. Examples: Job board / Dating site / hospitality exchange / Car Pooling. People are connected around a topic.

Features: Forums, Inter-User Messages, Maps, Friendship Network, Spam Protection, Workspace UI, User profiles, Organic Groups, Wiki user pages, Calendar of events, polls

Strengths: Tiki eats its own DogFood

Weaknesses:
- "transactions" between users
- Reputation / karma system
- Tiki is missing a few things to be efficient for this use case. Tiki friends are very basic.
It would be possible but clunky to get by with trackers and forums.

Potential to contribute to Tiki:

Knowledgebase / Documentation

I am doing my PhD and I would like some people to review my work.

Features: Wiki, Screencast, Search, Structure, i18n (for translation), Flagged Revision, Tag, Spam Protection, Zotero, References, WebHelp, categories, category permissions, Watch, category watch, category transition, File galleries with check-in/check-out, WebDAV

Strengths: Excellent tracking of edits (wiki history)

Weaknesses: Printing, Offline, PDF. Doesn't handle non-Web things (OCR, etc.)

Potential to contribute to Tiki:

Customer/Contact/Constituent relationship management (CRM) / Support / Help Desk / Issue Tracker / Bug Tracker / Customer extranet

- Supporting a product or service or sales
- Typically with one-to-one support of a product and a public bug/issue tracker
- This includes software-forge type sites

There is overlap with the "Knowledgebase / Documentation" use case with respect to the product documentation. There is overlap with the "Community" use case because we want customers to help each other.

Features: Issue Tracker, File Gallery, Surveys, Polls, Live Support, User profiles, Newsletter, comments, forum

Strengths: Lots of interest: CRM

Weaknesses: Tiki can do some of this. It would need quite a bit of configuration. Too much work for a new admin. A profile would be very useful here

Potential to contribute to Tiki: Typically a long-term commitment, as they want a stable platform for their customers

Links:
- http://dev.tiki.org/CRM
- http://crm.tiki.org/
- Issue Tracker
- http://freshmeat.net/browse/1017/
- http://en.wikipedia.org/wiki/Membership_software

Office Suite

Online office productivity tools, to collaborate on various document types (all web-based)

Features: Wiki, Docs, Screencast, Spreadsheet, Slideshow, Drawings, Database builder.
For regular files, you can use the File Gallery and WebDAV

Strengths:

Weaknesses:

Potential to contribute to Tiki:

Links:

Productivity / Personal Information Management (PIM) / Personal knowledge base

Makes users more efficient

Features: Webmail, Bookmarks, Notepad, Task, User Files, Calendar, Contact

Strengths: The wiki is very strong for this.

Weaknesses: No sync with PDAs, weak webmail

Potential to contribute to Tiki:

Related links:
- http://en.wikipedia.org/wiki/Personal_information_manager

Framework / App builder / Activity Workflow / Online service / Database, forms & reports / Specialized web Application

Tiki has all these features, and you can use it as a framework to build even more! Most online services like Basecamp, Freshbooks, etc. are not using FOSS software (although they are very likely using components). Contact some of the developers to see if/how your project could fit in so you can focus on your content and community and reduce the risk/cost of feature development.

I have legacy data | I want to reuse content from another DB

http://tiki.org/PluginSql, Webservices, etc.

Features: tracker, Category Transitions can be used as a Workflow engine

Strengths: Super powerful and nicely integrated with the wiki. Includes Zend Framework, Smarty and jQuery
- When you need a generic feature (forum, etc.), it's just a click away.

Weaknesses:

Potential to contribute to Tiki:

Production / ERP

This can be built with trackers, but Tiki is not yet good here.

**Features:** Trackers, Category Transitions engine to code custom processes

**Strengths:**

**Weaknesses:**

**Potential to contribute to Tiki:**

**Related links**
- http://ck-erp.net/
- http://www.dolibarr.org/
- http://www.web-erp.org/
- https://www.ohloh.net/tags/erp

Event Management, Conference Management & planning

This is a mix of Project Management for the organizers, CRM for the attendees and publishing to promote the event.
Yet, since it's a fairly clear / common use case, we'll add a section.

**Features:** Trackers (for user registration), Wiki, Calendar

**Strengths:** Wikis in Plain English / Planning a camping trip: "Email is not good at coordinating and organizing a group's input. This is the old way. The important information is scattered across everyone's inbox. (...) There is a better way"

**Weaknesses:**
- Needs a better way to track submissions
- Needs a profile

**Potential to contribute to Tiki:**

**Related links**
- [http://americana.org/](http://americana.org/)
- [http://phpbeer.com](http://phpbeer.com) (project)
- [http://freshmeat.net/projects/ocs/](http://freshmeat.net/projects/ocs/)
- See [Citadel Rock Online Communities's EventSoup](http://www.expectnation.com/) project

**What is your infrastructure?**

**Shared hosting**

**V-server or dedicated server**

**Local computer or USB key**

**Related links**
- User type
- Major Features Missing In Tiki
- Why Wiki Syntax Is Important
- Model
- Wikis Are Now Serious Business: 10 more use cases
- tiki-slideshow.php?page=Presentation+Comm506#s2
- Good to update [https://profiles.tiki.org/Small_Organization_Web_Presence_products_include](https://profiles.tiki.org/Small_Organization_Web_Presence_products_include)
- The Word Processor being the wiki, Tiki has everything else on this top-10 list. Webmail is poor though.
- [http://bestengagingcommunities.com/2012/11/27/how-to-b2b-is-morphing-into-b2a-b2d-b2m/](http://bestengagingcommunities.com/2012/11/27/how-to-b2b-is-morphing-into-b2a-b2d-b2m/)

**Alias**
- Use case
- Who uses Tiki
- User Type
- Major Use cases
- User Types
- Types of Users
- Top-10 uses
Where shall we add completeness? `if true 1 (2, 3)` does not get stuck, but we can't type it either. Perhaps we should add this typing rule?

\[ \frac{e_1 \rightarrow^* \text{true} \quad \Gamma \vdash e_2 : \tau}{\Gamma \vdash \text{if } e_1 \; e_2 \; e_3 : \tau} \]

Not if we want to keep decidability! How about:

\[ \frac{\Gamma \vdash e_2 : \tau}{\Gamma \vdash \text{if true } e_2 \; e_3 : \tau} \]

Sound, and it adds completeness, but it is not terribly useful.

### Tradeoffs

Desirable type system properties (desiderata):

- **soundness** - exclude all programs that get stuck
- **completeness** - include all programs that don't get stuck
- **decidability** - effectively determine whether a program has a type

Our friend Turing says we can't have it all. We choose soundness and decidability, aim for "reasonable" completeness, and still reject some valid programs. (Is there any benefit to an *unsound*, complete, decidable type system?) Today: *subtype polymorphism* to start adding completeness.

### Where shall we add *useful* completeness?

Code reuse is crucial: write code once, use it in many contexts. *Polymorphism* supports code reuse and comes in several flavors:

- **ad hoc** - implementation depends on type details (e.g., `+` in ML vs. C vs. C++)
- **parametric** - implementation independent of type details (e.g., $\Gamma \vdash \lambda x.\; x : \forall \alpha.\; \alpha \rightarrow \alpha$)
- **subtype** - implementation assumes constrained types (e.g., `void makeSound(Dog d) { d.growl(); } ... makeSound(new Husky());`)

Subtyping uses a value of type $A$ as a different type $B$.

### Extending STLC with Subtyping

We know the extension recipe:

1. add new syntax
2. add new semantic rules
3. add new typing rules
4. update the type safety proof

**Guiding principle:** If $A$ is a subtype of $B$ (written $A \leq B$), then we can safely use a value of type $A$ anywhere a value of type $B$ is expected.

### Subtyping

Wait... how many types can an STLC expression have? At most one! Currently we have **no polymorphism**: if $\Gamma \vdash e : \tau_1$ and $\Gamma \vdash e : \tau_2$, then $\tau_1 = \tau_2$.

Let's fix that:

- add completeness by extending STLC with subtyping
- consider the implications for the compiler
- also touch on coercions and downcasts

For the extension recipe we are *already half done*: subtyping adds no new syntax and no new semantic rules, only new typing rules (and an updated type safety proof). Where to start adding new typing rules? First, let's focus on **records**:

- review the existing rules
- consider examples of incompleteness
- add new rules to handle the examples and improve completeness

### Records Review

\[ e ::= \cdots \mid \{ l_1 = e_1, \ldots, l_n = e_n \} \mid e.l \]
\[ \tau ::= \cdots \mid \{ l_1 : \tau_1, \ldots, l_n : \tau_n \} \]
\[ v ::= \cdots \mid \{ l_1 = v_1, \ldots, l_n = v_n \} \]

\[ \frac{\Gamma \vdash e : \{ l_1 : \tau_1, \ldots, l_n : \tau_n \} \quad 1 \leq i \leq n}{\Gamma \vdash e.l_i : \tau_i} \]

Does this type-check?

\[ (\lambda x : \{ l_1 : \text{int}, l_2 : \text{int} \}.\; x.l_1 + x.l_2) \; \{ l_1 = 3, l_2 = 4, l_3 = 5 \} \]

Not yet, although it certainly won't get stuck.
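Dynamically, the rejected program above is harmless. A quick Python analogue (a sketch of our own; Python dicts standing in for records is an assumption, not the lecture's notation) shows the extra field causes no trouble at run time:

```python
# The record {l1=3, l2=4, l3=5} as a dict; the lambda projects l1 and l2.
f = lambda x: x["l1"] + x["l2"]

print(f({"l1": 3, "l2": 4, "l3": 5}))  # prints 7; the extra field l3 is ignored
```

This is exactly the incompleteness the rules below repair: the program is rejected statically even though no execution gets stuck.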
Suggests *width subtyping*: a record with extra fields can serve as a record without them:

\[ \{ l_1 : \tau_1, \ldots, l_n : \tau_n, l : \tau \} \leq \{ l_1 : \tau_1, \ldots, l_n : \tau_n \} \]

This introduces a new judgement $\tau_1 \leq \tau_2$. We add a new typing rule to take advantage of subtyping, *subsumption*:

\[ \frac{\Gamma \vdash e : \tau' \quad \tau' \leq \tau}{\Gamma \vdash e : \tau} \]

In the example above, subsumption is instantiated with the subtyping fact

\[ \{ l_1 : \text{int}, l_2 : \text{int}, l_3 : \text{int} \} \leq \{ l_1 : \text{int}, l_2 : \text{int} \} \]

whose derivation continues using the rules for the judgement $\tau_1 \leq \tau_2$; so far we only have one subtyping axiom, so we just use that. This is a clean division of responsibility:

- *where* to use subsumption
- *how* to show two types are subtypes

### Permutation

\[ (\lambda x : \{ l_1 : \text{int}, l_2 : \text{int} \}.\; x.l_1 + x.l_2) \; \{ l_2 = 3, l_1 = 4 \} \]

Suggests *permutation subtyping*:

\[ \{ l_1 : \tau_1, \ldots, l_{i-1} : \tau_{i-1}, l_i : \tau_i, \ldots, l_n : \tau_n \} \leq \{ l_1 : \tau_1, \ldots, l_i : \tau_i, l_{i-1} : \tau_{i-1}, \ldots, l_n : \tau_n \} \]

Example with width and permutation. Show:

\[ \vdash \{ l_1 = 7, l_2 = 8, l_3 = 9 \} : \{ l_2 : \text{int}, l_1 : \text{int} \} \]

There is no longer an obvious, efficient, sound, and complete type-checking algorithm:

- sometimes such algorithms exist and sometimes they don't
- in this case, we have them

### Reflexive Transitive Closure

The subtyping principle implies reflexivity and transitivity:

\[ \tau \leq \tau \qquad \frac{\tau_1 \leq \tau_2 \quad \tau_2 \leq \tau_3}{\tau_1 \leq \tau_3} \]

(We could get the effect of transitivity with multiple subsumptions anyway.) Have we lost anything while gaining all these rules? Type-checking is no longer syntax-directed:

- there may be 0, 1, or many distinct derivations of $\Gamma \vdash e : \tau$
- there are many potential ways to show $\tau_1 \leq \tau_2$

Still decidable? We need an algorithm checking that the required labels are always a subset of those supplied, and we must prove that it "answers yes" iff there exists a derivation. Still efficient?
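A decidable check for width plus permutation subtyping on record types can be sketched in Python (our own illustration, with types represented as label-to-type-name dicts): every label the supertype requires must appear in the subtype with the same field type, and since dicts are unordered, permutation comes for free.

```python
def record_subtype(sub, sup):
    # sub <= sup by width (sub may carry extra fields) and permutation
    # (field order is irrelevant): every field sup requires must appear
    # in sub with an identical type.
    return all(lbl in sub and sub[lbl] == sup[lbl] for lbl in sup)

# {l1:int, l2:int, l3:int} <= {l2:int, l1:int}: width drops l3, permutation reorders
assert record_subtype({"l1": "int", "l2": "int", "l3": "int"},
                      {"l2": "int", "l1": "int"})
# dropping a required field is not allowed
assert not record_subtype({"l1": "int"}, {"l1": "int", "l2": "int"})
```

The check runs in time proportional to the size of the supertype, which speaks to the efficiency worry above, at least for this fragment.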
### Implementation Efficiency

Given the semantics, width and permutation subtyping are totally reasonable. How do they impact the lives of our dear friend, the compiler writer? It would be nice to compile \( e.l \) down to:

1. evaluate \( e \) to a record stored at an address \( a \)
2. load \( a \) into a register \( r_1 \)
3. load field \( l \) from a fixed offset (e.g., 4) into \( r_2 \)

Many type systems are engineered to make this easy for compiler writers. In general: if some language restriction seems odd, ask yourself what useful invariant limiting expressiveness provides the compiler.

### Getting more completeness

We added a new subtyping judgement: width, permutation, and reflexive transitive closure:

\[ \{l_1 : \tau_1, \ldots, l_n : \tau_n, l : \tau\} \leq \{l_1 : \tau_1, \ldots, l_n : \tau_n\} \qquad \tau \leq \tau \qquad \frac{\tau_1 \leq \tau_2 \quad \tau_2 \leq \tau_3}{\tau_1 \leq \tau_3} \]

\[ \{l_1 : \tau_1, \ldots, l_{i-1} : \tau_{i-1}, l_i : \tau_i, \ldots, l_n : \tau_n\} \leq \{l_1 : \tau_1, \ldots, l_i : \tau_i, l_{i-1} : \tau_{i-1}, \ldots, l_n : \tau_n\} \]

We added a new typing rule, subsumption, to use subtyping:

\[ \frac{\Gamma \vdash e : \tau' \quad \tau' \leq \tau}{\Gamma \vdash e : \tau} \]

To squeeze out more completeness:

- Extend subtyping to "parts" of larger types
- Example: we can't yet use subsumption on a record field's type
- Example: we don't yet have supertypes of \( \tau_1 \rightarrow \tau_2 \)

### Depth

Does this program type-check? Does it get stuck?

\[ (\lambda x : \{l_1 : \{l_3 : \text{int}\},\, l_2 : \text{int}\}.\; x.l_1.l_3 + x.l_2) \; \{l_1 = \{l_3 = 3, l_4 = 9\},\, l_2 = 4\} \]

Suggests *depth subtyping*:

\[ \frac{\tau_i \leq \tau_i'}{\{l_1:\tau_1,\ldots,l_i:\tau_i,\ldots,l_n:\tau_n\} \leq \{l_1:\tau_1,\ldots,l_i:\tau_i',\ldots,l_n:\tau_n\}} \]

(With permutation subtyping, we could just have depth on the left-most field.)

### Function Subtyping

Given our rich subtyping on records (and/or other primitives), how do we extend it to other types, notably \(\tau_1 \rightarrow \tau_2\)?
For example, we'd like

\[ \text{int} \rightarrow \{l_1:\text{int}, l_2:\text{int}\} \;\leq\; \text{int} \rightarrow \{l_1:\text{int}\} \]

so we can pass a function of the subtype somewhere expecting a function of the supertype:

\[ \frac{\tau_3 \leq \tau_1 \quad \tau_2 \leq \tau_4}{\tau_1 \rightarrow \tau_2 \;\leq\; \tau_3 \rightarrow \tau_4} \]

For a function to have type \(\tau_3 \rightarrow \tau_4\), it must return something of type \(\tau_4\) (including subtypes) whenever given something of type \(\tau_3\) (including subtypes). A function assuming less than \(\tau_3\) will do, but not one assuming more; a function guaranteeing more than \(\tau_4\) will do, but not one guaranteeing less. In short, functions are contravariant in their argument and covariant in their result.

### Summary of subtyping rules

\[ \tau \leq \tau \qquad \frac{\tau_1 \leq \tau_2 \quad \tau_2 \leq \tau_3}{\tau_1 \leq \tau_3} \qquad \frac{\tau_3 \leq \tau_1 \quad \tau_2 \leq \tau_4}{\tau_1 \rightarrow \tau_2 \leq \tau_3 \rightarrow \tau_4} \]

\[ \{l_1:\tau_1,\ldots,l_n:\tau_n, l:\tau\} \leq \{l_1:\tau_1,\ldots,l_n:\tau_n\} \qquad \frac{\tau_i \leq \tau_i'}{\{l_1:\tau_1,\ldots,l_i:\tau_i,\ldots,l_n:\tau_n\} \leq \{l_1:\tau_1,\ldots,l_i:\tau_i',\ldots,l_n:\tau_n\}} \]

\[ \{l_1:\tau_1,\ldots,l_{i-1}:\tau_{i-1}, l_i:\tau_i,\ldots,l_n:\tau_n\} \leq \{l_1:\tau_1,\ldots,l_i:\tau_i, l_{i-1}:\tau_{i-1},\ldots,l_n:\tau_n\} \]

Function subtyping is unintuitive enough that you, a friend, or a manager will some day be convinced that functions can be covariant in their arguments. This is always wrong (unsound).

Notes:

- As always, the rules elegantly handle arbitrarily large syntax (types)
- For other types, e.g., sums or pairs, we would have more rules, deciding carefully about the co/contravariance of each position

### Maintaining soundness

Our Preservation and Progress Lemmas still "work" in the presence of subsumption:

▶ So in theory, any subtyping mistakes would be caught when trying to prove soundness!

In fact, it seems too easy: induction on typing derivations makes the subsumption case easy:

▶ Progress: One new case, where the typing derivation $\vdash e : \tau$ ends with subsumption. Then $\vdash e : \tau'$ via a shorter derivation, so by induction $e$ is a value or takes a step.
▶ Preservation: One new case, where the typing derivation $\vdash e : \tau$ ends with subsumption. Then $\vdash e : \tau'$ via a shorter derivation, so by induction if $e \rightarrow e'$ then $\vdash e' : \tau'$. So use subsumption to derive $\vdash e' : \tau$.

Hmm...
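The rules summarized above (reflexivity, width, depth, permutation, and contravariant/covariant function subtyping) can be collected into one recursive checker. This is our own Python sketch, with records as dicts, base types as strings, and arrow types as `("->", arg, res)` tuples, none of which is the lecture's notation:

```python
def subtype(a, b):
    # Decide a <= b for base types (strings), records (dicts), and arrows.
    if a == b:                                   # reflexivity
        return True
    if isinstance(a, dict) and isinstance(b, dict):
        # width + permutation + depth: each field b requires must appear
        # in a, at a subtype of what b requires
        return all(l in a and subtype(a[l], b[l]) for l in b)
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and a[0] == b[0] == "->"):
        # contravariant in the argument, covariant in the result
        return subtype(b[1], a[1]) and subtype(a[2], b[2])
    return False

# int -> {l1:int, l2:int}  <=  int -> {l1:int}      (covariant result)
assert subtype(("->", "int", {"l1": "int", "l2": "int"}),
               ("->", "int", {"l1": "int"}))
# {l1:int} -> int  <=  {l1:int, l2:int} -> int      (contravariant argument)
assert subtype(("->", {"l1": "int"}, "int"),
               ("->", {"l1": "int", "l2": "int"}, "int"))
# covariant arguments would be unsound, and the checker rejects them
assert not subtype(("->", {"l1": "int", "l2": "int"}, "int"),
                   ("->", {"l1": "int"}, "int"))
```

Note that transitivity needs no separate case here: it falls out of the structural recursion, which is one way such algorithmic presentations regain syntax-direction.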
### Ah, Canonical Forms

That's because Canonical Forms is where the action is:

▶ If $\cdot \vdash v : \{l_1:\tau_1, \ldots, l_n:\tau_n\}$, then $v$ is a record with fields $l_1, \ldots, l_n$
▶ If $\cdot \vdash v : \tau_1 \rightarrow \tau_2$, then $v$ is a function

We need these for the "interesting" cases of Progress. We now have to use induction on the typing derivation (which may end with many subsumptions) and induction on the subtyping derivation (e.g., "going up the derivation" only adds fields):

▶ Canonical Forms is typically trivial without subtyping; now it requires some work

Note: Without subtyping, Preservation is a little "cleaner" via induction on $e \rightarrow e'$, but with subtyping it's much cleaner via induction on the typing derivation:

▶ That's why we did it that way

### A matter of opinion?

If subsumption makes well-typed terms get stuck, it is wrong. We might allow less subsumption (e.g., for efficiency), but we shall not allow more than is sound. But we have been discussing "subset semantics", in which $e : \tau$ and $\tau \leq \tau'$ means $e$ is a $\tau'$:

▶ There are "fewer" values of type $\tau$ than of type $\tau'$ (but not really)

It is very tempting to go beyond this, but you must be very careful... But first we need to emphasize a really nice property of our current setup: types never affect run-time behavior.

### Erasure

A program type-checks or it does not. If it does, it evaluates just like in the untyped $\lambda$-calculus. More formally, we have:

1. Our language with types (e.g., $\lambda x : \tau.\; e$, $A_{\tau_1 + \tau_2}(e)$, etc.) and a semantics
2. Our language without types (e.g., $\lambda x.\; e$, $A(e)$, etc.) and a different (but very similar) semantics
3. An erasure metafunction from the first language to the second
4. An equivalence theorem: erasure commutes with evaluation

This useful (for reasoning and efficiency) fact will be less obvious (but still true) with parametric polymorphism.

### Coercion Semantics

Wouldn't it be great if . . .
- `int` ≤ `float`
- `int` ≤ `{l₁:int}`
- `τ` ≤ `string` (for any `τ`)
- we could "overload the cast operator"

For these proposed `τ` ≤ `τ'` relationships, we need a run-time action to turn a `τ` into a `τ'`. This is called a *coercion*. We could use `float_of_int` and similar functions explicitly, but programmers whine about it.

### Implementing Coercions

If coercion `C` (e.g., `float_of_int`) "witnesses" `τ` ≤ `τ'` (e.g., `int` ≤ `float`), then we insert `C` wherever `τ` is subsumed to `τ'`. So the translation to the untyped language depends on where subsumption is used; it is a translation from *typing derivations* to programs. But typing derivations aren't unique: uh-oh.

Example 1:
- Suppose `int` ≤ `float` and `τ` ≤ `string` for any `τ`
- Consider `print_string(34) : unit`: the `34` may be coerced straight to a string, or first to a `float` and then to a string, printing different results

Example 2:
- Suppose `int` ≤ `{l₁:int}`
- Consider `34 == 34`, where `==` is equality on ints or pointers: depending on where coercions are inserted, we compare either two ints or two freshly allocated records

### Coherence

Coercions need to be *coherent*, meaning they don't have these problems. More formally, programs are deterministic even though type checking is not: any typing derivation for `e` translates to an equivalent program. Alternately, we can make (complicated) rules about where subsumption occurs and which subtyping rules take precedence:

- Hard to understand, remember, and implement correctly

It's a mess . . .

### Upcasts and Downcasts

- "Subset" subtyping allows "upcasts"
- "Coercive subtyping" allows casts with run-time effect
- What about "downcasts"? That is, should we have something like:

```plaintext
if_hastype(τ, e₁) then x. e₂ else e₃
```

Roughly: if at run-time `e₁` has type `τ` (or a subtype), then bind it to `x` and evaluate `e₂`; else evaluate `e₃`. This avoids needing exceptions, and it is not hard to formalize.

### Downcasts

We can't deny downcasts exist, but here are some bad things about them:

- Types don't erase: you need to represent `τ` and `e₁`'s type at run-time
  (hidden data fields)
- Breaks abstractions: before, passing $\{l_1 = 3, l_2 = 4\}$ to a function taking $\{l_1 : \text{int}\}$ hid the $l_2$ field, so you know it doesn't change or affect the callee

Some better alternatives:

- Use ML-style datatypes: the programmer decides which data should carry run-time tags
- Use parametric polymorphism: the right way to do container types (not downcasting results)
Prolog, Mercury and the termination problem

Anna Sasak*

Maria Curie-Skłodowska University, Pl. M. Curie-Skłodowskiej 1, 20-031 Lublin, Poland

Abstract

This paper briefly introduces the two logic programming languages Prolog and Mercury. On this background we introduce the problem of analysing termination of programs. We then present Mercury's termination analyser, which the authors of the language incorporated into its compiler. We also discuss a proposed analyser for Prolog predicates based on the same method.

### 1. Prolog

Prolog is a logic programming language with its roots in automated theorem proving. Its terminology and concepts are based on first-order predicate logic. A few definitions are required to understand the rules of programming in Prolog.

**Facts** are used to describe relations between objects, and they are always true. A fact consists of a relation name (the predicate) and a list of comma-separated arguments enclosed in parentheses; each fact ends with a full stop. For example, a fact expressing the relation of admiration between two objects could have the form `like(kate, flowers)`. Names of objects and predicates must start with lower-case letters.

The second type of statement in Prolog is the **rule**. Rules comprise two parts, a head and a body, connected with `:-` (the neck symbol, also read as 'if'), and end with a full stop. The head is the name of the defined rule with the list of its arguments. The body is a set of goals separated with commas, which stand for logical 'and'. This conditional form means that to satisfy the rule, all of its goals must succeed. When defining a rule it is allowed to use variables both in its head and in its body. A variable in Prolog is a string of letters, digits, and underscores beginning either with a capital letter or with an underscore. Variables are used to stand for objects that cannot be named at the moment.
For example, if we want to describe a person who is a brother of somebody, in Prolog we would use the following form:

```prolog
brother(X, Y) :- man(X), parents(O, M, X), parents(O, M, Y).
```

*E-mail address: asasak@poczta.fm

A collection of facts and rules creates a **database**. With this database, we can ask Prolog questions. To answer a question about the database, or in other words, to satisfy all the goals that build that query, Prolog has to search the database. Binding of variables always goes from left to right, backtracking if necessary. When Prolog finishes satisfying the goals, the answer is given. Extending the previous rule with a few facts, the following database is created:

```prolog
man(kriss).
man(mathew).
parents(mathew, anne, kriss).
parents(mathew, anne, maggie).
brother(X, Y) :- man(X), parents(O, M, X), parents(O, M, Y).
```

Now, after putting the question

```prolog
?- brother(kriss, melanie).
```

the answer 'No' will be obtained, as there is no possible variable binding that would satisfy every goal of the 'brother' relation.

The basic method used in Prolog programs and data structures is **recursion**. As the definition says: recursion (or recurrence) in programming and mathematics is a function calling itself. Each recursive definition requires at least one terminating, non-recursive case that makes the recursive calls stop. For example, a predicate that computes the factorial in Prolog could have the following form:

```prolog
factorial(1, 1) :- !.
factorial(X, Y) :- X1 is X - 1, factorial(X1, Z), Y is Z * X.
```

The first clause is the base case that terminates the recursive calls. The goal written as `!` is called a **cut**. It is a goal that always succeeds but cannot be backtracked into; it is used to prevent unwanted backtracking. The second clause is the recursive call with its input arguments properly transformed.

### 2. Mercury

Mercury is a new, pure logic-functional programming language.
Like other existing logic programming languages, it is a high-level programming language which allows programmers to concentrate on solving the problem rather than thinking about low-level concerns like memory management. The Mercury project was created at Melbourne University in 1995. Over the following years a great deal of research and development work was done; as a result, since December 2002 version 0.11.0 has been available, which supports most system platforms.

Main features of Mercury:

1. It is purely declarative, which means that predicates and functions do not have non-logical side effects.
2. A strong type system.
3. A strong mode system.
4. A strong determinism system, which catches many program errors at compile time.
5. A module system.
6. Support for higher-order programming, with closures, currying and lambda expressions.
7. In comparison with existing logic programming languages, Mercury is very efficient.

The syntax of Mercury is based on the syntax of Prolog, although the semantics differs a little. A program in Mercury is a set of modules. Each module is a file that contains a sequence of elements: declarations and clauses. An element is a term ended with a full stop, a term is a sequence of tokens, and a token is a sequence of characters. Each module starts with a `:- module Module_name` declaration which specifies its name. Next, there should be an interface section introduced by the `:- interface` declaration; this section specifies the entities that are exported by the module. An `:- implementation` declaration indicates the start of the next section, which must contain definitions for all declarations from the interface section. The module may end with a `:- end_module Module_name` declaration. To make use of entities exported by other modules it is required to explicitly import those modules using a `:- import_module Module_list` or `:- use_module Module_list` declaration.
It is important that one module must export a predicate `main/2`, declared as either:

```
:- pred main(io__state::di, io__state::uo) is det.
```

or

```
:- pred main(io__state::di, io__state::uo) is cc_multi.
```

A module arranging some list operations could look like this:

```
:- module list_operations.
:- interface.
:- type list(El).
:- pred append(El, list(El), list(El)).
:- implementation.
:- type list(El) ---> [] ; [El | list(El)].
append(El, [], [El]).
append(El, L1, [El | L1]).
:- end_module list_operations.
```

It is also worth mentioning that there are new kinds of clauses in Mercury, called functions, or more precisely: function rules and function facts. If the top-level functor of the head of a clause is `=/2`, that clause is either a function rule (having the form `Head = ReturnValue :- Body`) or a function fact (having the form `Head = ReturnValue`). In both cases the head cannot be a variable and its arguments must be valid terms.

The type system of Mercury is based on a many-sorted, polymorphic logic. Next to the built-in primitive types there are predicate types (`pred`), function types (`func`) and the universal type `univ`. Apart from those types, new types can be introduced with a `:- type` declaration. There are several categories of derived types, such as *discriminated unions*, *equivalence types* and *abstract types*. Discriminated unions are similar to records or enumeration types known from other programming languages. Their declarations have the form `:- type name [(T1, ..., TN)] ---> body`, where the body is a semicolon-separated sequence of constructors. For example, a type designed to contain personal data could have the following form:

```
:- type pers_data ---> pers_data(firstname :: string, familyname :: string).
```

Equivalence types are declared with the `==` operator.
They are treated as an abbreviation of the type on the right-hand side of the declaration, e.g. `:- type liczba_calkowita == int.` Types with a hidden implementation are called *abstract types*: the interface section contains only the type's name and arguments, whereas its definition (the list of constructors) is placed in the implementation section.

For each predicate or function it is necessary to explicitly determine which arguments are input and which are output. The two primary modes are called `in` and `out` respectively. A mode declaration of the form `:- mode term_name(mode, ..., mode) [= mode]` should be placed in the module interface section, usually just after the corresponding predicate or function declaration. For each declared mode there should be an appropriate determinism declaration. A term whose call ends without throwing an exception can have one of the following determinism kinds: `det`, `semidet`, `multi`, `nondet`, `failure`, `erroneous`. In Mercury, the full declaration and definition of a predicate appending two lists could look as follows:

```
:- pred append(list(T), list(T), list(T)).
:- mode append(in, in, out) is det.
:- mode append(out, out, in) is multi.
:- mode append(in, in, in) is semidet.

append(L1, L2, L3) :-
    ( L1 = [], L3 = L2
    ; L1 = [G | O], append(O, L2, L33), L3 = [G | L33]
    ).
```

### 3. Termination problem

The *termination problem*, or *halting problem*, is in computability theory a decision problem that can be expressed as follows: given a program's description and its initial input, determine whether the program, when executed on this input, ever completes. In 1936 Alan Turing proved that a general algorithm solving the halting problem for all possible inputs cannot exist; the halting problem is said to be undecidable over Turing machines. There are many methods that are used to analyse termination of programs.
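Turing's argument is the familiar diagonalization: any purported termination checker is defeated by a program that consults the checker about itself. A Python sketch of the construction (`halts` here is a hypothetical stand-in for the impossible decision procedure, not a real function):

```python
def make_paradox(halts):
    # Given a claimed decision procedure halts(p) -> bool, build the
    # program that contradicts it.
    def p():
        if halts(p):
            while True:   # halts said "terminates", so loop forever
                pass
        return "halted"   # halts said "loops forever", so halt at once
    return p

# A checker that always answers "loops forever" is immediately refuted:
p = make_paradox(lambda prog: False)
print(p())  # prints "halted", contradicting the checker's answer
```

Whatever `halts` answers about `p` is wrong, so no total, correct `halts` can exist; termination analysers such as Mercury's are therefore necessarily approximate, and may answer "don't know".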
I would like to present the Gröger-Plümer approach to the problem. The method has two stages:

1. Assigning to each clause of the analysed predicate an integer that estimates the maximum difference between the total sizes of the output and input arguments.
2. Using the solution from the first stage to check that, in each cycle of the predicate's call graph, the input arguments decrease in size.

The subject of this article is only the first stage of the analysis, which involves producing and solving a set of linear inequalities of the form

\[ \sum_{i \in I} p_i + \varphi_p \geq \sum_{j \in J} p_j \]

where \( p_i \) stands for the size of argument \( i \) of predicate \( p \), \( I \) is a subset of the set of its input arguments and \( J \) is a subset of the set of its output arguments. The aim is to solve for the minimum integer value of \( \varphi_p \) satisfying the inequalities inferred for the predicate.

Let us consider the predicate `append(H1, H2, H3)`, which attaches one list to the back of another, with argument modes in, in, out respectively. Starting with the recursive clause, the first stage of the analysis proceeds as follows (lower-case names denote argument sizes):

```prolog
append(H1, H2, H3) :-
    H1 = [X | Xs],        % h1 = x + xs + 1
    append(Xs, H2, Zs),   % xs + h2 + phi_a >= zs
    H3 = [X | Zs].        % h3 = x + zs + 1
```

The output size is thus bounded by \( h3 \leq x + (xs + h2 + \varphi_a) + 1 \). The first goal states that \( h1 = x + xs + 1 \), so \( x + xs \) on the right-hand side can be replaced by \( h1 - 1 \). As a result an inequality of the form \( h1 + h2 + \varphi_a \geq h1 + h2 + \varphi_a + 1 - 1 \) is obtained, which is always true. The annotation for the other, non-recursive clause would be:

```prolog
append(H1, H2, H3) :-
    H1 = [],              % h1 = 0
    H3 = H2.              % h3 = h2
```

which gives an inequality of the form \( h1 + h2 + \varphi_a \geq h2 \), also always true, and finally allows us to estimate that \( \varphi_a \) equals 0.

### 4. Algorithms

A few of Mercury's creators, Chris Speirs, Zoltan Somogyi and Harald Søndergaard, have implemented a termination analyser and incorporated it into the Mercury compiler. Their algorithm is mainly based on the previously described method, though with small modifications. The first stage of the analysis is built around inequalities of the form

\[ \sum \text{output\_supplier\_variables} + \varphi \geq \sum \text{output\_variables}, \]

where the output supplier variables are those input variables whose values contribute to the sizes of the output variables. The set of output suppliers is calculated using a fixed-point analysis. At first it is assumed that no procedure uses any of its input arguments to produce output. Then, in each following step, the output-suppliers set of a procedure \( p_j \) is extended with a set \( M \), a subset of the input arguments of \( p_j \). This process is repeated until a fixed point is reached.
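The stage 1 bound derived for `append` above can be sanity-checked empirically. In this small Python sketch of our own, `len` serves as the list-size norm (an assumption; it differs by a constant from the term-size norm used in the derivation), and the inequality \( h1 + h2 + \varphi \geq h3 \) holds with \( \varphi = 0 \) on every test case:

```python
def append(l1, l2):
    # Prolog-style append/3 with modes (in, in, out)
    return l1 + l2

PHI = 0  # the minimum value of phi_append found by the stage 1 analysis
for l1, l2 in [([], []), ([1], [2, 3]), (list(range(5)), [9])]:
    l3 = append(l1, l2)
    # size(h1) + size(h2) + phi >= size(h3)
    assert len(l1) + len(l2) + PHI >= len(l3)
```

Of course, such a bound alone says nothing about termination; it only feeds the second-stage analysis of the call graph.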
The code of the Stage 1 algorithm is as follows:

```
for each procedure p ∈ S: set new_output_suppliers(p) to {}
do
    old_output_suppliers := new_output_suppliers
    step1(old_output_suppliers, new_output_suppliers)
while new_output_suppliers ≠ old_output_suppliers
output_suppliers := new_output_suppliers
if I is unsatisfiable then
    for each p ∈ S set phi_p to ∞
else
    set (phi_1, ..., phi_n) to (d_1, ..., d_n),
    where (d_1, ..., d_n) is the least solution satisfying I

proc step1(in old_output_suppliers, out new_output_suppliers) is
    set I to true
    for j := 1 to n:
        let B_0 :- B_1, ..., B_m be the clause defining p_j  (m >= 0)
        M := outvars(p_j)
        delta := 0
        for i := m downto 1:
            if M ∩ outvars(B_i) ≠ ∅ then
                M := (M \ outvars(B_i)) ∪ old_output_suppliers(B_i)
                delta := delta + change(B_i)
        if M ⊈ invars(p_j) then
            for each p ∈ S set phi_p to ∞ and exit
        new_output_suppliers(p_j) := M
        if delta contains ∞ then I := false
        else I := I ∧ (phi_j >= delta)
```

A proposed algorithm realising the Stage 1 analysis in Prolog could have the following form:

```prolog
compare(InVar, OutVar) :- ...
replace(VarSet, OutVar, InVar, VarSetNew) :- ...

step([], Var, Var) :- !.
step(Body, Var, VarX) :-
    last(Body, Clause),
    \+ recursiveCall(Clause),
    replace(Var, outvar(Clause), invar(Clause), Var1),
    remove(Body, Clause, Body1),
    step(Body1, Var1, VarX).
step(Body, Var, VarX) :-
    last(Body, Clause),
    recursiveCall(Clause),
    add(Var, Gamma, Var0),
    replace(Var0, outvar(Clause), invar(Clause), Var1),
    remove(Body, Clause, Body1),
    step(Body1, Var1, VarX).

step1_check((H :- Body)) :-
    Var = outvar(H),
    step(Body, Var, VarNew),
    In = invar(H) + Gamma,
    compare(In, VarNew).
```

The presented algorithm's framework has been written partly in informal language to improve clarity.
It assumes that information about predicate modes is available and that predicates are allowed to make recursive calls only to themselves. The predicate `step1_check` triggers the stage 1 analysis for each predicate. The predicate `step`, for each goal of the analysed predicate's body, transforms the set of input variables, including newly active variables. The predicate `compare` builds the proper inequality from the variable sets it is given and returns its solution. The predicate `replace` performs some required list operations.

**Conclusions**

Unfortunately, the stage 1 analysis alone is not enough to get a reliable answer as to whether a predicate terminates. To prove it, let us again consider the predicate calculating the factorial.

```prolog
factorial(1, 1).
factorial(X, Y) :- X > 1, X1 is X - 1, factorial(X1, Y1), Y is Y1 * X.
```

The inequality that is obtained is of the form \( x + \varphi \geq x + \varphi \), and it is always true. Yet it is enough to change only one sign

```prolog
factorial(1, 1).
factorial(X, Y) :- X > 1, X1 is X + 1, factorial(X1, Y1), Y is Y1 * X.
```

to see that:

1. it changes nothing in the obtained inequality and its solution, and yet
2. it causes the predicate's call to never terminate.

To draw further conclusions about the termination of a given predicate it is necessary to perform the stage two analysis.

**References**
# Vocabulary Hosting: A Modest Proposal

Harry Halpin, World Wide Web Consortium and University of Edinburgh, 10 Crichton St., Edinburgh, United Kingdom EH8 9AB

Tom Baker, Dublin Core Metadata Initiative Ltd., 50 Geylang East Avenue 1, Singapore 389777

**Abstract**

Many of the benefits of structured data come about when users can re-use existing vocabularies rather than create new ones, but it is currently difficult for users to find, create, and host new vocabularies. Moreover, as a foundation for applications, the value of any given vocabulary depends on the perceived certainty that the vocabulary – both its machine-readable schemas and human-readable specification documents – will remain reliably accessible over time and that its URIs will not be sold, re-purposed, or simply forgotten. This note proposes two approaches for solving these problems: one for multiple Vocabulary Hosting Services, and a Vocabulary Preservation System to keep them linked together.

A Vocabulary Hosting Service (VHS) is needed to meet immediate needs. Ideally, such a service would fulfill the following characteristics.

- It addresses an immediate need by application developers for a managed persistence context in which to declare vocabularies.
- The service would own a domain under which vocabularies can be created (in principle by anyone) with the certainty that the vocabulary URIs and related documentation will remain accessible.
- Stewardship of such a service would lie with a group of stakeholders involving (potentially) groups like the W3C, DCMI, W3F, and the Library of Congress, which are known long-term entities with an interest in stable vocabularies, and commercial companies such as Google, Yahoo, and Talis whose business model does or may involve vocabularies.

A Vocabulary Preservation System can then link the various vocabulary hosting services together.
This more comprehensive Vocabulary Preservation System (VPS) is less pressingly needed now, but is needed as a solution for the long term. This system would fulfill the following characteristics.

- It addresses a longer-term need to put vocabularies created in a wide range of institutional and commercial contexts into a well-understood preservation context that will guarantee the resolvability of vocabulary URIs to namespace documentation over the long term.
- Memory institutions (such as libraries) maintain mirrored caches of crucial namespace documentation, step in to maintain uninterrupted service in response to well-defined trigger events, and inherit stewardship of the vocabularies in the long term.

In the end, pragmatic concerns will dominate. Users who want to create a vocabulary need to be able to host the vocabulary somewhere easily, possibly without purchasing their own domain name or webserver. Experiences at recent ‘open data’ events like Vocamps¹ show that this type of service is needed, as much of the work done at these events is lost. At present, projects or organizations that wish to publish vocabularies and make them available for use by others are rarely in a position to guarantee the long-term persistence of their vocabularies’ URIs or continued availability of associated documentation. End-users face a bewildering number of options for actually finding and using vocabularies, and few reliable third-party hosts provide any technical or legal assurances about the use or persistence of their vocabularies. Furthermore, companies that benefit from the deployment of structured data by end-users would be best served by keeping the vocabularies of structured data on the Web shared and not controlled by any single company or application, much like HTML is shared amongst various browser vendors.
This makes it easier for users (including other companies) to create structured data, providing more likelihood that a critical mass of structured data can be created that can then serve as the foundation for new, exciting, and potentially profitable applications. This is true for producers of Linked Data in general, and as more and more people release data as Linked Data, the need for vocabulary hosting will only increase. However, first we will look at the use of vocabularies by consumers in the case of structured search.

¹ A sort of ‘barcamp-style’ event for creating vocabularies; see http://vocamp.org.

**The Use Case of Common Vocabularies in Structured Search**

Currently, there is increasing interest in the ability to add structure to the Web, as exhibited by the recent move of search engines to index and handle vocabularies in RDFa (Adida et al. 2008). RDFa fulfills a use-case that is not currently covered by microformats (Suda 2006): in particular, the use-case of a user wanting to mark up some structure in HTML that does not already have a valid microformat. Note that, using technologies like GRDDL, microformats and RDFa can be made compatible (Connolly 2007). As more and more structured data gets on the Web, users will want to be able to deploy a ‘long-tail’ of vocabularies, and many niche markets will want to take advantage of these vocabularies. For example, while microformats like hCard and XFN cover social relationships, in a scientific domain like medicine it is likely useful to have a large and detailed specialist vocabulary for health care, and to mark up health reports using this vocabulary. Yet such a vocabulary cannot easily be created as a microformat, since it appeals only to a small community, and its creators may have neither the time nor the wherewithal to engage in the process around creating a microformat for their domain-specific knowledge.
Equally, for the discovery of information about products like MP3 players, some vocabulary listing MP3 player brands, options, and prices would be needed. For example, the GoodRelations vocabulary currently allows one to describe products (Hepp 2008), and while this is supported by Yahoo, it is not supported by Google, who instead created their own vocabulary for products.³ How can the owner of a store that wants to list MP3 player brands easily find the vocabulary to put that sort of information into his web-page using RDFa? Currently, it is difficult to even discover vocabularies like GoodRelations using Semantic Web search engines like Sindice (Oren et al. 2008). The results of these Semantic Web search engines, while useful for a Semantic Web expert, do not provide the guidance an ordinary user needs. Furthermore, if the store owner tries to optimize for either vocabulary, they risk alienating a portion of their audience or the applications that use a particular vocabulary. While there are clear benefits for a search engine to find and use this structure from a web-page, the costs currently outweigh the benefits for the end-user, as the process for creating a new RDFa vocabulary that can be easily found, re-used by other users, and indexed by search engines is needlessly difficult. The same issue holds not just for product reviews and prices, but for almost all Semantic Web vocabularies, which are already often fractured into incompatible versions despite a minimum of uptake. So, currently, if a user wants to use RDFa to mark up something as simple as some personal information attached to a review, how should they do it? There are a number of independent personal information vocabularies: FOAF,⁴ the W3C RDF mappings of vCard,⁵ and Google’s vocabulary.⁶ Obviously, the same user or developer that wants to add structured data to their web-page should not have to choose whether they want to optimize their RDFa for either Google or Yahoo!.
They should just be able to find the ‘best’ vocabulary for their needs and mark up their web-page with minimal concern, and if they can’t find a vocabulary, they should be able to create one and host the vocabulary someplace neutral with a guarantee that their vocabulary can be re-used without any charge, and with the assurance that the documentation and URIs used by the vocabulary will be persistent. Currently, the two main search engines that deploy structured search are Google and Yahoo. Yahoo’s SearchMonkey, based on the earlier work of Microsearch (Mika 2008), tries to point users to relevant specifications, such as FOAF, but does not provide any sort of infrastructure for users to find and create vocabularies. Google supports and offers examples in RDFa of vocabularies for reviews, people, products, and organizations. The vocabularies used by Google are often straightforward vocabulary mappings from microformats. Google also does not currently seem to support open-ended use of RDFa or the ability for users to create vocabularies, although it supports and hosts all its supported RDF vocabularies at Google’s own URI. There is a powerful logic behind Google’s choice: if Google does not feel that other URIs are likely to be trusted, maintained, preserved, or provide a decent user-experience, then of course Google will host the vocabulary at its own URI, as at least Google can then guarantee as a company the persistence and maintenance of those vocabularies. Ideally, what is needed is an infrastructure, usable and supported by both major companies such as Google and Yahoo, that is hosted at a neutral third party. This neutral vocabulary hosting service would then make life easier for users of these structured vocabularies.

**Requirements for Vocabularies**

However, what would the requirements be for vocabularies for structured data? There are many types of vocabularies.
In general, vocabularies are used across a vast variety of systems, ranging from plain strings used in everything from SQL databases to microformats, to more complex approaches such as the QNames typically used in XML and the URIs used in RDF. In particular, our proposed system will map these all to URIs, so that vocabularies in different domains that re-use the same text string can remain independent of each other. In this vein, vocabularies require:

- **Use of URIs:** Each term in a vocabulary has a URI.
- **Persistent URIs:** Each term URI will be used to refer to the same term in perpetuity and will not be repurposed. Institutional guarantees are key; note that stable URIs (e.g., using redirection services) do not automatically refer to stable documentation.
- **Persistent documentation:** Each term URI should remain resolvable to ‘namespace documents’ – descriptive documentation in HTML and/or machine-readable representations such as RDF schemas.
- **Change policy:** The stability of the meaning of the terms should be determinable – i.e., the meanings of terms should evolve according to known change management policies and with responsibility for changes traceable to either individuals or organizations.
- **Preservation:** Vocabularies should be preserved, like any other business-critical component or component of cultural heritage, ideally not in dependence on a single vocabulary hosting service.
- **Analytics:** Vocabularies should be plugged into actual empirical data about their usage in the wild, allowing vocabulary producers and consumers to see how the vocabulary is actually used. This is invaluable for future versioning of the vocabulary and for discovering valuable patterns in the vocabulary usage.

⁵ See http://www.w3.org/TR/vcard-rdf
- **Healthy ecosystem:** Like any other aspect of a machine or human language, vocabularies are needed in contexts ranging from the general to the very specific, from the informal to the highly institutionalized. A diversity of vocabulary maintainers using a multiplicity of domain names is healthier than a vocabulary monoculture.

A few notes on URIs and persistence are in order. A URI can always be redirected to persistent documentation, as with purl.org. The problems of persistent URIs can therefore be separated from the goals of persistent documentation and change policy. One organization might host persistent URIs while another provides a hosting service for the documentation. For example, an organization (such as DERI) might host URIs under its domain (like semanticweb.org) while the documentation is hosted using resources elsewhere (even cloud resources like Amazon.com); this can be easily done. The persistent URI problem is not trivial, and may require clever optimization of caching if the URI is frequently retrieved. However, as URIs are normally just used for disambiguation in vocabularies without retrieval, this problem is not as acute as it might seem.

**Vocabulary Hosting Service**

A vocabulary hosting service would own a domain, let people create new vocabularies under that domain, and publish human-readable documentation about the vocabularies. Once published, documentation would fall under the persistence guarantee of the service. Vocabulary maintainers would have the right to install updated documents, but all historical versions of the vocabulary would be preserved and remain accessible. The service would promise to keep the latest version of the documentation available at the vocabulary URIs.
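How a host keeps term URIs resolving to that documentation can be made concrete. Below is a minimal, illustrative Python sketch of the standard Linked Data publishing recipe (cf. the W3C ‘Cool URIs for the Semantic Web’ note), in which a term URI answers with HTTP 303 See Other and content negotiation selects the human- or machine-readable document; `VOCAB_BASE` and the routing logic are invented for illustration, not taken from any existing hosting service:

```python
# Illustrative sketch of the "303 See Other" publishing recipe for
# vocabulary term URIs. All names and URIs here are hypothetical.

VOCAB_BASE = "http://vocab.example.org/ns/"

def resolve_term(term_uri, accept_header):
    """Return the (status, location) pair a vocabulary host would answer with.

    A term URI names a concept, not a document, so the host never answers
    200 directly: it answers 303, redirecting to a document *about* the
    concept, chosen by content negotiation on the Accept header."""
    if not term_uri.startswith(VOCAB_BASE):
        return (404, None)
    term = term_uri[len(VOCAB_BASE):]
    if "application/rdf+xml" in accept_header:
        # Machine clients are redirected to the RDF schema.
        return (303, VOCAB_BASE + "schema.rdf#" + term)
    # Everyone else is redirected to the human-readable namespace document.
    return (303, VOCAB_BASE + "docs/" + term + ".html")

print(resolve_term(VOCAB_BASE + "Convertible", "text/html"))
print(resolve_term(VOCAB_BASE + "Convertible", "application/rdf+xml"))
```

The persistence guarantee then amounts to a promise that this routing, and the documents it points to, keep working even after a vocabulary’s maintainers move on.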
Just as sourceforge.net guarantees the preservation of old versions of software and provides access to the latest version (‘latest version persists’) while making no guarantees that the software posted on its servers will work as described, the vocabulary hosting service does not have to make guarantees regarding the soundness of hosted vocabularies, nor would the service itself be in a position to make any promises regarding the further development of any hosted vocabulary. Social systems that provide user-driven ratings could even allow vocabulary users themselves to ascertain soundness and provide explicit feedback. A vocabulary hosting service could do for structured data what programmableweb.com does for APIs and mashups. Such a hosting service might also offer easy-to-use examples for users, tutorials, recipes, best-practice documents on managing vocabularies, a FAQ, and mailing lists; a staging area for testing documentation (human- and machine-readable) before publication; a publication wizard for uploading namespace documents and setting MIME types and 303 redirects; and a Web-based authoring environment for vocabularies. Options provided by the hosting service could range from simple uploading of schemas and documents (with automated configuration and testing before final publication) to rich editing environments and command-line access to a server. The service would require explicit permission from vocabulary registrants, such as a license to distribute content in perpetuity, and it would need to provide disclaimers regarding copyright, trademark, or patent violations. It should ideally guarantee some type of royalty-free licensing for the use of a vocabulary. A hosting service may by default acquire the right to manage (i.e., make changes to) the vocabulary in the long term if or when its maintainers disappear. Vocabulary hosting services may have different policies about vocabulary creation.
Some might let anyone register a vocabulary with the service free of charge with instant deployment. Others may require a longer community-driven process of review before deploying the vocabulary. A vocabulary hosting service may allow URIs to point to documentation or schemas outside the vocabulary host itself. Vocabulary hosting services could become domain-specific. Some vocabulary hosting services may restrict people to hosting abstract vocabularies, while others may allow URIs to be minted for physical objects, people, places, or ideas. For example, one might want a taxonomy of automobile types and options, but also a single URI to describe in detail a particular automobile (the ‘Volkswagen Bug’) that fits within the taxonomy of more abstract automobiles (‘convertibles’). Thus, one could restrict users to RDF properties and classes about automobiles, but some hosts might allow objects, people, places, or ideas, or even entities of an undeclared type. Note that the problem of vocabulary hosting can be separated from the problem of guaranteeing the availability of vocabularies in a distributed manner over a longer term (‘vocabulary preservation’). A Vocabulary Hosting Service might be part of a larger Vocabulary Preservation System which maintains up-to-date caches of all vocabulary-related materials at multiple institutions. One possible objection is that redirection services, such as the use of HTTP 303 or purl.org, are enough. However, while redirection services such as purl.org can guarantee that URIs will remain resolvable, they cannot guarantee that schemas and documentation will remain available when a vocabulary is no longer actively maintained. Furthermore, even purl.org has gone down on occasion. What few vocabulary hosting sites currently exist are not regularly updated and do not allow use by the general public. Dan Brickley’s xmlns.com and Ian Davis’s vocab.org may go the way of the currently non-maintained schema.web.info.
Other organizations that maintain vocabulary hosting sites, such as DERI’s rdfs.org and Revelytix’s knooldi.com, may also suffer the same fate if either organization has problems. There exist a number of codebases that, while each has a large number of problems, could with enough work serve as the foundation for such a system. It should be a requirement that such a system allow one to edit and maintain a vocabulary purely through a Web-based interface, without downloading any software. This immediately rules out most traditional ontology editing environments. Rather unfortunately, while work like the Ontolingua system is the inspiration for this modest proposal for Linked Data (Gruber 1993), most work from the ontology engineering community, like the NeOn Toolkit, is aimed at specialists and does not have a Web-based interface (Gómez-Pérez and del Carmen Suárez-Figueroa 2009). However, Semantic MediaWiki could serve as a basis, in combination with WebProtégé (Völkel et al. 2006). A more light-weight client like Neologism⁶ could also serve as a foundation in combination with a content management system. Since vocabulary creation and maintenance is a social process, the re-purposing of a content management system like Drupal that already has a strong social component (with ratings, tags, social networks, and blogs) would likely be a necessary component of a larger vocabulary hosting platform.

**Vocabulary Preservation System**

Regardless of the intent, the best persistent URIs and Vocabulary Hosting Services may still at some point go down. The only way to counter this is to have some loose cooperation among institutions to cache and preserve vocabularies, much as the domain name system itself is cached. The preservation approach should provide a context for the long-term preservation of vocabularies in a distributed manner and in a wide range of social contexts.
Indeed, as they may be important to future applications or scholarship, it is important to consider these vocabularies as artifacts to be preserved like any other artifact important for cultural memory. To summarize, a Vocabulary Preservation System (VPS) would:

- **Contracts:** Provide a well-understood framework for contracts between Vocabulary Hosting Services and the Vocabulary Preservation System, defining processes and governance for handling rapid interventions (e.g., redirecting URIs if disaster strikes) and for transferring ownership and maintenance responsibility in the long term.
- **Open:** Offer itself as a partner to any Vocabulary Hosting Service that aims for best practices in vocabulary hosting, and so serve as a further ‘seal of approval’ for Vocabulary Hosting Services.
- **Reliable:** Aim at ensuring that at least all large-scale (popular) vocabularies have such mechanisms in place.
- **Meta-review:** Provide regular reports on the status of maintenance activities for covered vocabularies, possibly as a group with other Vocabulary Preservation Systems. As a by-product of this reporting, the VPS or coalition of VPS systems would function as a clearinghouse for information about vocabularies.

Note that this system may not be difficult to implement. The LOCKSS (Lots of Copies Keep Stuff Safe) system from Stanford would provide an ideal open source code-base for distributing copies across the world (Maniatis et al. 2005). LOCKSS achieves this redundancy by providing an automated system for sharing caches of digital content within a secure, closed peer-to-peer network, and has already been used successfully to preserve digitally curated journal articles (Maniatis et al. 2005). Because unrelated server failures are statistically independent, a fairly small number of vocabulary caches should be sufficient to guarantee vocabulary preservation in the long term.
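The arithmetic behind that last claim can be made concrete. Assuming, purely for illustration, that each cache is independently lost in a given year with probability p, the chance that all n caches are lost together is p^n; the 10% annual loss rate below is an assumption chosen for the example, not a figure from LOCKSS or from this paper:

```python
# Illustrative redundancy arithmetic for LOCKSS-style caching.

def joint_loss_probability(p, n):
    """Probability that all n independent caches fail in the same period."""
    return p ** n

# Even with quite unreliable hosts (p = 0.10 per year), seven caches
# already push the joint loss probability below one in a million.
for n in (1, 3, 5, 7):
    print(n, joint_loss_probability(0.10, n))
```

In practice failures are not perfectly independent (shared software bugs, common legal trigger events), which is exactly why the contracts between a VPS and its member institutions matter: they spread caches across unrelated organizations.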
However, at the moment we need at least one long-term, neutral, non-profit vocabulary hosting site with a clearly established social process for adding and editing vocabularies. Once this is in place, more Vocabulary Hosting Services will undoubtedly arise.

**A Modest Proposal**

There is a clear need today for at least one neutral and non-profit Vocabulary Hosting Service. This hosting space should ideally be managed by organizations and companies that have a long-term interest in the area and are actively deploying these technologies. This should be created as soon as possible. The elements of a solution are:

- **Domain name:** A neutral domain name should be used – one that belongs to a non-profit organization or could be transferred to one. Suggestions have included sharednames.org, semanticweb.org, a new domain name, or another domain name of an existing body.
- **Legal body:** Some sort of legal body needs to host the domain name or purchase it. Another option would be an existing organization; the W3C or IANA would be possibilities. A legal policy that provides the proper disclaimers and royalty-free status of the vocabularies would need to be crafted; the W3C Royalty-Free Patent Policy could serve as a starting point, as could the Open Data Commons’ Open Database License (ODbL).⁷
- **Technical infrastructure:** The site would need a strong policy on service and maintenance, with a robust and extensible infrastructure. Some combination of Amazon, Google, and Yahoo! could help provide this infrastructure. Organizations like IANA or the W3C do not at the moment have the technical resources to maintain such a vocabulary hosting service by themselves, although they might if there were some funding by external entities. The baseline infrastructure should allow users who agree to the legal policy to upload their schema and human-readable documentation, with the latter possibly generated via simple transformations. This could later be upgraded with a more comprehensive and interactive user interface.
- **Long-term financial stability:** Since managing persistent URIs and hosting documentation takes resources, there needs to be some baseline financial model. This can initially be supported by work from interested parties, and if enough interested parties sign up, the domain name can be guaranteed for a long period of time. Hosting is more difficult, but suitable arrangements could be made. A simple corporate and organizational sponsor revenue stream may work, but in the worst case some sort of fee for hosting and maintaining vocabularies could be deployed.
- **Clear social process:** There are two main options as regards the social process for vocabulary creation: the completely open model and the working group model. In the open model, similar to a Wikipedia for vocabularies, any person could easily submit, update, and comment upon a vocabulary, with a simple line of responsibility going to a single individual or the VHS itself. In this way, vocabularies could be assigned on a ‘first-come, first-served’ basis. In the working group model, a group of interested parties is chaired for some period of time to come to consensus on a vocabulary, as in any other standardization process, and vocabularies are then assigned based on consensus.

The vision of this modest proposal is at least one neutral vocabulary hosting system that would serve the various vocabularies for the ‘links’ in Linked Data and structured search. Far from an ‘Académie Française’ that dictates how vocabularies should be used, a bottom-up approach that allows both producers and consumers of data to contribute to these vocabularies would be ideal: a Wikipedia for the vocabularies that give data their meaning.

⁶ See http://neologism.deri.ie/
⁷ See http://www.opendatacommons.org/licenses/odbl/

**References**