Reinforcing Diversity Company Policies: Insights from StackOverflow Developers Survey
Karina Kohl Silveira, Soraia Musse, Isabel Manssour, Renata Vieira and Rafael Prikladnicki
School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Porto Alegre, Brazil
Keywords: Software Engineering, Software Development, Diversity, Data Visualization, StackOverflow.
Abstract: Diversity is being intensively discussed across many areas of society, and discussions in Software Engineering are increasing as well. There is unconscious bias and a lack of representativeness when we talk about characteristics such as ethnicity and gender, to mention a few. How can tech companies support diversity and minimize unconscious bias in their teams? Studies say that diversity builds better teams and delivers better results, among other benefits. Cognitive diversity is linked to better outcomes and is influenced by identity diversity (e.g., gender, race, etc.), mainly when tasks are related to problem-solving and prediction. In this work, we are interested in understanding the pain points in software engineering regarding diversity and in providing insights to support attraction, hiring and retention policies for more diverse software engineering environments. StackOverflow is a popular community question & answer forum with high engagement from software developers. Every year they run a survey, present straightforward results, and make the anonymized responses available for download, so it is possible to perform additional analysis beyond the original one. Using data visualization techniques, we analyzed the 2018 data to derive insights and recommendations. Results show that diversity in companies is not yet a conscious decision-making factor for developers assessing a new job opportunity, and that respondents from underrepresented groups tend to believe they are not as good as their peers. We also propose a discussion about unconscious bias, stereotypes, and impostor syndrome and how to provide support on those issues.
1 INTRODUCTION
More than ever, software development is a collaborative task. Software development teams are built on people and, lately, the area is becoming aware of the problem of underrepresented groups, in terms of gender, race, culture, etc. As mentioned in previous work by Vasilescu et al. (Vasilescu et al., 2014), gender representation in Science, Technology, Engineering, and Mathematics (STEM) related subjects has drawn significant attention from researchers and academics, as well as from policy-makers, all noting a significant under-representation of women. Several companies are taking on the challenge of embracing diversity in the workforce, such as Google (Google, 2018a), Facebook (Facebook, 2018), Microsoft (Microsoft, 2018), and SAP (SAP, 2018).
Software Engineering demands well-developed problem-solving skills from developers. Considering Agile Methodologies, for example, the Agile Manifesto (Mike Beedle, 2001) puts people as critical assets for better performance in development and delivery, by prioritizing "individuals and interactions over processes and tools". Roger S. Pressman (Pressman, 2010) says that the agile philosophy emphasizes individual competence (team members) combined with group collaboration as a critical success factor for the team. In other areas, the impact of diversity is also an object of study. Menard et al. (Menard et al., 2018) present how to adequately address cultural diversity in organizations regarding information protection practices. Förster (Förster, 2018) shows the effect of ethnic heterogeneity on electoral turnout in Europe, offering insights into the role ethnic heterogeneity plays in political participation. Atadero et al. (Atadero et al., 2018) discuss how to create an inclusive environment in the first years of engineering courses in the light of efforts to broaden the participation of women and people of color in engineering degree programs and careers. Software engineering environments need to be safe for diversity, and developers, managers and all other roles in these environments need to understand which factors impact work environments, in order to help companies refine their diversity and inclusion policies.
This paper aims to show our insights, based on an analysis of StackOverflow Developers’ Survey data. To achieve that, we are interested in exploring the following research question:
RQ. What should companies working with software engineering focus on to attract, hire and retain talent across the spectrum of diversity?
To answer this question, we derived three other questions:
RQ1. What is being considered by developers when assessing job opportunities?
RQ2. What are their professional objectives for the future?
RQ3. How confident are developers in their programming skills?
Our analysis shows that diversity in the company is not yet a conscious decision-making factor for a developer assessing a new job opportunity, except for non-binary and transgender respondents. Also, respondents who identified themselves as women, non-binary or transgender tend to doubt their programming skills more, and believe they are not as good as their peers more often, than respondents identified as men. A discussion about unconscious bias, stereotypes, and impostor syndrome and how to provide support on them is provided in the results and discussion sections (Sections 5 and 6).
The rest of the paper proceeds as follows. Section 2 presents some background on how some big players in the software engineering industry are dealing with their diversity and inclusion strategies. Section 3 presents related work on data coming from StackOverflow. Section 4 summarizes the methods we used to analyze the data. Section 5 presents our results, Section 6 discusses them and their implications, Section 7 presents the threats to validity, and Section 8 concludes the paper.
2 BACKGROUND
Page (Page, 2007) says that we cannot tell whether diversity is good or bad unless we first know what diversity is. Page characterized diversity as the differences in how people see, categorize, understand, and go about improving the world, recognizing different dimensions of diversity: cognitive and identity. Cognitive diversity is the difference in how we interpret, reason and solve problems - how we think. Identity diversity is determined by affiliation with a social group such as gender, culture, ethnicity, religion, sexual orientation, etc. Identity diversity and cognitive diversity often go hand in hand: people belonging to different identity groups, or with different life experiences, also tend to acquire diverse cognitive tools. Education, life experiences, and identity can all contribute to cognitive diversity, and how much each of these matters depends on the task. For identity diversity to be beneficial in a group, it must link with cognitive diversity; this happens when tasks are related to problem-solving and prediction, so the identities translate into relevant tools, when the members of the group have little or no preference diversity, and when they get along with one another. In these cases, identity-diverse groups do perform better than homogeneous groups (Page, 2017).
2.1 Efforts to have Diverse Workforces
Having diversity in the workforce is a challenge embraced by several companies. Table 1 summarizes the gender distribution and Table 2 the race/ethnicity distribution in some large technology companies.
Google (Google, 2018a) is one of the technology companies engaged in increasing their numbers in the diversity and inclusion area. They believe that, since their mission is to organize the world's information and make it universally accessible and useful, it must work for everyone and, to do that well, a workforce that is representative of the users they serve is important. They present an accelerated approach to diversity and inclusion and share, annually, their Diversity Annual Report on how they plan to deliver their strategy. There is a disclaimer that the current gender reporting they publish is not inclusive of their non-binary population and that they intend to take into account research such as transgender-inclusive measures of sex/gender for population surveys. One of the discussed points regards unconscious bias, and they work with the idea that understanding bias and its intersection with the workplace and the surrounding communities is crucial to promote change. Science shows that everyone is biased, since the human brain is predisposed to negative stereotypes (Spiers et al., 2017); they do not expect people to rid themselves of all bias, but they want them to recognize it. Research shows that when we are more aware of unconscious bias, we make more objective decisions. To date, 84% of Google's people managers have taken Unconscious Bias training, and Unconscious Bias workshops were introduced into all new employee orientations. Also, Google provides numerous guides to practices and tools to improve their people processes (Google, 2018b).
Facebook (Facebook, 2018) has shared, since 2014, their journey to build a diverse company that reflects the global community they serve. For the 2018 report, they highlight what they believe is working and where they can do better. Facebook believes that diversity is critical to their success as a company: since people from all backgrounds rely on Facebook to connect with others, they will better serve their needs with a more diverse workforce. To attract the best and the brightest, they believe that effective recruiting is critical for building a diverse company. To do that, they work with organizations that support people of color and women in computer science and engineering, some of which include Anita Borg/Grace Hopper (Institute, 2018), SHPE (SHPE, 2018) and NSBE (NSBE, 2018), as well as many others that support a broad range of groups. They have seen steady increases in hiring rates for underrepresented people since they started testing this approach in 2015.
Microsoft (Microsoft, 2018) says that diversity and inclusion are much more than gender and race demographics; they are about different cultures, religions, ages, political affiliations, educations, and sexual orientations. The commitment to diversity and inclusion means creating an environment where everyone feels included and valued. To do that, Microsoft is committed to building and expanding the pipeline for diverse technical candidates. They work with girls from an early age to spark their interest in technology careers, work in partnership with associations for women in STEM, and are expanding their military academy program to military bases worldwide, while still seeking meaningful ways to encourage and cultivate the future workforce. Besides, they are also reviewing their traditional recruiting practices to become more expansive and more inclusive, for example by expanding the scope of universities where they recruit, such as Historically Black Colleges and Universities (HBCUs), and by running programs for autism hiring. Managers should take Inclusive Hiring training. They believe that building a diverse culture is a critical element to spark innovation and allow unique perspectives and insights to surface.
SAP (SAP, 2018) presents four diversity areas they work on: Gender, Cross-Generational, Culture & Identity, and Differently Abled. SAP is dedicated to eliminating bias in the workplace and wants to enable individuals to be recognized for what they have to contribute. The idea is to embrace and encourage different perspectives, on the understanding that they are stronger through the unique combination of culture, race, ethnicity, age, gender, sexual orientation, gender identity or expression, physical or mental ability, and work-life situations. Worth highlighting is the cross-generational work, where people at different stages of life bring a variety of perspectives and experiences to the company, as well as the focus on differently abled people and the Autism at Work program, which leverages the abilities and perspectives of people with autism to foster innovation and to help customers become intelligent enterprises. The program aims to reduce barriers to entry so qualified individuals can fully develop their potential, and it employs over 140 people in 12 countries.
Table 1: Gender Distribution in Tech Companies (Technical Roles).
<table>
<thead>
<tr>
<th>Company</th>
<th>Female</th>
<th>Male</th>
</tr>
</thead>
<tbody>
<tr>
<td>Google</td>
<td>21.4%</td>
<td>78.6%</td>
</tr>
<tr>
<td>Facebook</td>
<td>21.6%</td>
<td>78.4%</td>
</tr>
<tr>
<td>Microsoft</td>
<td>19.0%</td>
<td>81.0%</td>
</tr>
<tr>
<td>SAP</td>
<td>33%</td>
<td>67%</td>
</tr>
</tbody>
</table>
SAP published only the number of women in the entire company, without differentiating by type of role.
Table 2: Ethnicity Distribution in Tech Companies.
<table>
<thead>
<tr>
<th>Company</th>
<th>Asian</th>
<th>Black</th>
<th>Latinx</th>
<th>White</th>
<th>Other</th>
</tr>
</thead>
<tbody>
<tr>
<td>Google</td>
<td>41.4%</td>
<td>1.5%</td>
<td>2.8%</td>
<td>50.7%</td>
<td>3.8%</td>
</tr>
<tr>
<td>Facebook</td>
<td>50.3%</td>
<td>1.3%</td>
<td>3.1%</td>
<td>42.7%</td>
<td>2.6%</td>
</tr>
<tr>
<td>Microsoft</td>
<td>38.2%</td>
<td>2.7%</td>
<td>4.3%</td>
<td>52.3%</td>
<td>2.4%</td>
</tr>
</tbody>
</table>
3 CONTEXT SELECTION
StackOverflow\(^1\) is a popular question and answer site for developers based on gamification, where participants earn reputation points and badges that can be seen as a measure of their expertise by peers and potential recruiters and are known to motivate users to contribute more (Vasilescu, 2014). An extensive list of academic papers using StackOverflow data has been published since it was created in 2008, such as Vasilescu et al. (Vasilescu et al., 2013), Bosu et al. (Bosu et al., 2013) and Berger et al. (Berger et al., 2016), to mention a few. However, we did not find any academic paper using their Annual Developers' Survey data (published since 2011).
Vasilescu et al. (Vasilescu et al., 2013) investigated the interplay between StackOverflow activities and the development process, reflected by code changes committed to the social coding repository GitHub. Their study showed that active GitHub committers ask fewer questions and provide more answers than others. They also observed that active Stack Overflow askers distribute their work in a less uniform way than developers who do not ask questions. And, finally, they showed that, despite the interruptions incurred, the Stack Overflow activity rate correlates with the code-changing activity on GitHub.
\(^1\)https://www.stackoverflow.com
Since earning a high reputation score requires technical expertise and sustained effort, Bosu et al. (Bosu et al., 2013) analyzed the Stack Overflow data from four perspectives to understand the dynamics of reputation building on the site. The results provided guidance to new Stack Overflow contributors who want to earn high reputation scores quickly, indicating the following activities as helpful for building reputation: answering questions related to tags with lower expertise density, answering questions promptly, being the first one to answer a question, being active during off-peak hours, and contributing to diverse areas.
Berger et al. (Berger et al., 2016) studied question and answer sites, like StackOverflow, that use reward systems to incentivize users to answer quickly and accurately. They investigated and predicted the response time for questions on Stack Overflow that benefit from an additional incentive through so-called bounties. In their findings, they noted that topic-related factors provide much stronger evidence for these questions than previously identified elements.
The work of Krueger et al. (Krueger et al., 2017) is about researchers performing empirical studies in industry to gain qualitative insights into a real-world problem. However, common criticisms concern the diversity and the selection process of participants. To address these issues, they propose to better integrate question-answering systems into empirical studies. They describe approaches to conduct studies in such systems, exemplify corresponding challenges, and discuss their potential. They based their research on Stack Overflow.
Papoutsoglou et al. (Papoutsoglou et al., 2017) proposed a framework that aims to collect online job advertisements from a web source which concerns Information Technology job offers and to extract from the raw text the required skills and competencies for specific jobs. The selected professional networking web source was StackOverflow, and multivariate statistical data analysis was used to test the correlations between skills and competencies in the job offers dataset.
Yin et al. (Yin et al., 2018) described a novel method for extracting aligned code/natural language pairs from StackOverflow. The method is based on learning from a small number of annotated examples, using highly informative features that capture structural aspects of the code snippet and the correspondence between it and the original natural language query.
4 METHODOLOGY
To answer our research questions, we combined data visualization and data analysis techniques. In this section, we present details about the dataset, the data visualization and the preliminary quantitative analysis. A more detailed discussion is presented in Section 5.
4.1 Data Description
The data provided by StackOverflow (StackOverflow, 2018a) is based on a survey of 101,592 software developers from 183 countries around the world. According to the criteria used by StackOverflow, this number of responses is what is considered "qualified" for analytical purposes based on completion and time spent on the survey; approximately 20,000 additional responses were started but not included in the analysis because the respondents did not answer enough questions. Of the total qualified responses, 67,441 (66.4%) completed the entire survey. The survey was fielded from January 8 to January 28; the median time spent on the survey for qualified responses was 25.8 minutes, and the median time for those who finished the entire survey was 29.4 minutes. Respondents who spent less than 5 minutes were excluded from the final sample. Respondents were recruited primarily through channels owned by StackOverflow; since respondents were recruited in this way, highly engaged users on Stack Overflow were more likely to notice the links for the survey and click to begin it. Respondents who finished the survey were awarded a "Census" badge as a motivation to complete it. The data is anonymized and available for download in CSV format under the Open Database License (ODbL). In the following sections, we present the quantitative results that answer our sub-research questions and support the answer to the primary research question of this work. First, we analyze the aspects considered relevant by developers regarding their interest in job opportunities. Then we look at respondents' future goals. The third aspect we investigate is confidence. We show these issues across gender, race, and ethnicity. Finally, we present lessons learned that might serve as input for companies' hiring policies.
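As an illustration of how this public dataset can be explored outside a visualization tool, the sketch below loads the survey export and tabulates respondents by reported gender with pandas. This is not the authors' analysis script; the file name and the Gender column name are assumptions about the published 2018 dataset and may differ.

```python
# Illustrative sketch: loading the anonymized 2018 survey export (ODbL)
# and counting respondents per reported gender identity.
# File and column names are assumptions and may need adjusting.
import pandas as pd

df = pd.read_csv("survey_results_public.csv")
print(len(df))  # expected to be on the order of the 101,592 qualified responses

# Respondents can select multiple identities, typically semicolon-separated,
# so each reported identity is counted separately.
gender_counts = (
    df["Gender"]
    .dropna()
    .str.split(";")
    .explode()
    .str.strip()
    .value_counts()
)
print(gender_counts)
```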
5 RESULTS
The primary design goal for visualization is to effectively communicate a thorough understanding of the data it represents. This utility of visualization includes usability goals but ultimately revolves around the visualization's ability to help people better understand data (Saket et al., 2016). Nowadays, many tools allow visualizing and gaining faster insights about data. In this work, we chose Tableau Desktop (Tableau, 2018) as the tool to support our visual data analysis.
In the following sections, we present the data visualizations that aim to provide support to answer the research questions. In this work, we do not consider salary as a parameter of the discussion because it is a very well discussed subject in industry and the news. There are studies about gender pay gaps in different areas, including technology (Florentine, 2018) (Tarr, 2018) (Ismail, 2018) (Martinson, 2018) (Orphanides, 2018).
5.1 Aspects when Assessing Job Opportunities
This section aims to answer our RQ1: "What is being considered by developers when assessing job opportunities?". In the survey, StackOverflow asked the respondents to rank ten aspects when assessing a potential job opportunity, from 1 (the most important) to 10 (the least important):
- The industry that I’d be working in;
- The financial performance or funding status of the company or organization;
- The specific department or team I’d be working on;
- The languages, frameworks, and other technologies I’d be working with;
- The compensation and benefits offered;
- The office environment or company culture;
- The opportunity to work from home/remotely;
- Opportunities for professional development;
- The diversity of the company or organization;
- How widely used or impactful the product or service I’d be working on is.
For a better analysis of this topic, we split the answers by Gender and Race/Ethnicity. The results follow:
5.1.1 Gender Analysis
For women, the most important aspect when assessing a job is "The office environment or company culture". The least important one is "The financial performance or funding status of the company or organization". For men, the distribution is quite different: men tend to consider first "The compensation and benefits offered", and their last priority is "The diversity of the company or organization". Figure 1 compares the distributions for women and men.
Figure 1: Women/Men Priorities when Assessing a Job.
The office environment or company culture, the most important aspect for women, appears as fourth for men. The financial performance or funding status of the company or organization appears in the tenth and last place for women and in ninth place for men, showing pretty similar importance. The compensation and benefits that appear first for men are in fourth place for women. And the diversity of the company, the least important aspect for men, appears in ninth place for women. So, even with all the discussions about diversity, individual developers who identified themselves as female or male are not making it a priority when looking for a job.
However, the panorama changes when we evaluate the data for the non-binary and transgender populations. Individuals who identified themselves as non-binary/genderqueer or gender non-conforming put the office environment or company culture and the diversity of the company as their first and second priorities. Transgender respondents put culture first and diversity fifth. The financial performance or funding status of the company seems to be very well aligned among all the respondents, no matter the gender - ninth or tenth place, as shown in Figure 2.
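A minimal sketch of how such a ranking comparison could be reproduced from the raw data follows. It assumes the ten aspects are stored in columns named AssessJob1 through AssessJob10 and that lower values mean higher priority; both the column names and the gender labels are assumptions about the dataset schema.

```python
import pandas as pd

df = pd.read_csv("survey_results_public.csv")

# Assumed column names for the ten job-assessment aspects (1 = most important).
aspects = [f"AssessJob{i}" for i in range(1, 11)]

def mean_ranks(frame, gender_label):
    """Average rank given to each aspect by respondents whose reported
    gender identities include gender_label (exact match after splitting)."""
    mask = (
        frame["Gender"]
        .fillna("")
        .str.split(";")
        .apply(lambda ids: gender_label in [g.strip() for g in ids])
    )
    return frame.loc[mask, aspects].mean().sort_values()

print(mean_ranks(df, "Female"))
print(mean_ranks(df, "Male"))
```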
5.1.2 Race and Ethnicity Analysis
We also analyzed the aspects when assessing a job opportunity by race/ethnicity. The race/ethnicity listed in the StackOverflow survey were: White or of European descent, South Asian, Hispanic or Latino/Latina, East Asian, Middle Eastern, Black or of African descent, Native American/Pacific Islander/Indigenous Australian. Figure 3 compares assessments between Black or Afro Descent and White or European Descent.
Compensation and benefits are the priority for White or European Descent, East Asian, Hispanic/Latino and Native American/Pacific Islander/Indigenous Australian respondents. Opportunities for professional development are the priority for Black or Afro Descent and Middle Eastern respondents. Languages and frameworks were mentioned as the priority by South Asian respondents. However, the diversity of the company is ranked tenth by all groups. Office environment or company culture is placed third by East Asians, fourth by White/European Descent, Native American/Pacific Islander/Indigenous Australian and Middle Eastern respondents, and fifth by Hispanic/Latino and Black/Afro Descent respondents.
5.2 Professional Objectives
In this section, the objective is to help answer our RQ2: "What are their professional objectives for the future?". One of the questions asked in the survey was "What Do Developers Hope To Be Doing in Five Years?".
5.2.1 Gender Analysis
While 26% of men want to be working as a founder or co-founder of their own company, nearly 16% of women, non-binary and transgender respondents want the same. Around 42% of women and non-binary respondents wish to be working in a different or more specialized technical role, in comparison to approximately 33% of men and transgender respondents.
5.2.2 Race and Ethnicity Analysis
When analyzing data from the race/ethnicity point of view, we have 44.71% of Black or Afro descent respondents wanting to be working as a founder or co-founder of their own company and 41.53% of South Asians wanting to be working in a different or more specialized technical role. Figure 5 shows the data by race/ethnicity.
5.3 Not as Good at Programming as Their Peers
This section aims to answer our RQ3: "How confident are developers in their programming skills?"
As we can see in Figure 6, most of the answers, regardless of gender, are in the disagreement range or in the neither agree nor disagree range. However, men tend to disagree more with the statement. On the agree and strongly agree side, 29.73% of women agree or strongly agree that they are not as good as their peers, versus 17.28% of men who share the same belief. 24.5% of non-binary and 19.61% of transgender respondents also believe that they are not as good as their peers.
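Percentages of this kind could be derived with a sketch like the one below. It assumes the statement "I'm not as good at programming as most of my peers" is stored in a column called AgreeDisagree3 with answer labels such as "Agree" and "Strongly agree"; these names are hypothetical placeholders for illustration, not confirmed dataset fields.

```python
import pandas as pd

df = pd.read_csv("survey_results_public.csv")
COL = "AgreeDisagree3"   # assumed column for "not as good as my peers"

def agreement_share(frame, gender_label):
    """Share of respondents with gender_label who agree or strongly agree."""
    mask = (
        frame["Gender"]
        .fillna("")
        .str.split(";")
        .apply(lambda ids: gender_label in [g.strip() for g in ids])
    )
    answers = frame.loc[mask, COL].dropna()
    agreeing = answers.isin(["Agree", "Strongly agree"]).sum()
    return 100.0 * agreeing / max(len(answers), 1)

for label in ["Female", "Male", "Transgender"]:
    print(label, round(agreement_share(df, label), 2), "%")
```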
5.4 Insights from Results
In this section we perform a more detailed discussion of the data previously presented and answer our overarching research question: "What should companies working with software engineering focus on to attract, hire and retain talent across the spectrum of diversity?"
The first analysis presented was about the aspects considered by developers when assessing a new job. Diversity does not seem to be a conscious priority for most developers when assessing a new job opportunity. However, non-binary and transgender respondents put it at the top of the ranking, together with the culture of the company. Culture appears first for women and, across gender and race, is otherwise always mid-ranked. When sharing their results, StackOverflow itself mentioned: "The tech industry is struggling overall with issues around diversity, and individual developers are not making it a priority when looking for a job." (Stackoverflow, 2018).
It is essential to recall the case of the Google employee fired in 2017 after writing a memo blaming biology for tech's gender gap (Varinsky, 2017). Here we have a mix of two crucial aspects: diversity supported by culture. Even if the diversity of the company is the last item to be considered when a developer assesses a new job opportunity, companies should consider this aspect, since it is linked to enterprise culture and to providing a safe work environment for all.
We also evaluated what the respondents intend to be doing in five years. Men want to own their own business, while women wish for more specialized technical positions. This suggests that development programs offering women opportunities to improve their technical skills may pay off, since they demonstrate a higher interest than men in specialized positions.
The last point analyzed is related to respondents assessing whether they believe they are not as good as their peers in their programming skills. Women, non-binary and transgender respondents tend to doubt their programming skills in comparison to their peers more than men do.
6 DISCUSSION
To analyze these results, it is essential to recall previous events. In 2016, Google published a study about the diversity gap in computer science (Inc. and Inc., 2016). In this study, they identified that male students are more interested and more confident in learning computer science, and that female students rate themselves lower in skills related to computer science. Another point identified by the study is that stereotypes may influence implicit beliefs about who can study computer science and might introduce unconscious bias in educators and parents, who may disproportionately and unconsciously encourage students who fit the computer scientist stereotype to pursue computer science. For example, male students are more likely than female students to have been told by a teacher (39% vs. 26%) or a parent (46% vs. 27%) that they would be good at computer science. Teachers and parents may reinforce stereotypes by telling more male students they think they would be good at computer science, thus furthering the underrepresentation of females in computer science.
Recent research published in Nature by O'Dea et al. (O'Dea et al., 2018) says that girls are susceptible to conforming to stereotypes in the traditionally male-dominated fields of STEM, and that backlash effects hinder girls who try to succeed in these fields. A girl's answer to the question "what do you want to be when you grow up?" will be shaped by her own beliefs about gender and the collective beliefs of the society she is raised in. The study also compares school grades of girls and boys. They found that girls tend to earn higher school grades than boys, including in STEM subjects. The predicted grade distribution shows that, when all grades are considered, girls on average earn higher grades and are less variable than boys, although there are more highly performing boys than girls at the upper end of the achievement distribution. Therefore, by the time a girl graduates, she is just as likely as a boy to have earned high enough grades to pursue a career in STEM. When she evaluates her options, however, the STEM path is trodden by more male competitors than non-STEM paths and presents additional internal and external threats due to her and society's gendered beliefs (stereotype threat and backlash effects). Additionally, the paper says that gender differences in expectations of success can arise due to backlash effects against individuals who defy the stereotype of their gender, or due to gender differences in 'abilities tilt' (having comparatively high ability in one discipline compared to another). Women in male-dominated pursuits, including STEM, face a paradox: if they conform to gender stereotypes, they might be perceived as less competent, but if they defy gender stereotypes and perform 'like a man,' then their progress can be halted by 'backlash' from both men and women.
So, the results observed in the StackOverflow Developers' Survey may be a reflection of stereotypes built from previous personal experiences and be directly related to a confidence gap, mainly in women.
An important point to be considered is what is called impostor syndrome. Jackson and Heath (Jackson and Heath, 2014) say that impostor syndrome is defined as a psychological phenomenon in which people are unable to internalize their accomplishments. Impostor syndrome affects most people at some point during their careers, across all races, genders, and ages.
Sukhai (Sukhai and Mohler, 2016) mentions that impostor syndrome is common within the academic environment, particularly among graduate students - mainly in STEM fields, where productivity is a significant measure of a student's success. Impostor syndrome presents itself as a series of feelings or thoughts, one of them being frustration with the inability to meet self-set standards ("I will never be as good as I want to be, so why bother trying?"). Churchill (Churchill, 2018) points out that over four decades ago, when it was given a name by clinical psychologists Pauline Clance and Suzanne Imes in the late 1970s, this feeling was found to be prevalent among high-achieving women.
Follow-on research has shown that impostor syndrome is very real and very prevalent and that its effects are undeniably negative. It is therefore also unsurprising that there is a strong correlation between impostor syndrome and anxiety, stress, depression, and burnout, the debilitating condition of exhaustion that can result in talented individuals giving up on promising careers. StackOverflow shares the results about disability status in their results page, with numbers about anxiety, depression and focus (StackOverflow, 2018b), but, unfortunately, the individualized and anonymized results about it were not shared in the CSV file, so we could not make the gender correlation.
To overcome it, it is essential to have a support network that helps to identify impostor syndrome in the workforce. There are some strategies to overcome the syndrome, for example instructing employees that comparison with others must be done with care (Jackson and Heath, 2014): comparison without context can be misleading, as people will compare themselves with people at another skill level.
That said, what we found in the data from the StackOverflow Developers' Survey suggests the importance of the initiatives to minimize bias and stereotypes that some companies are undertaking in their hiring processes and in the development of their technical teams.
7 THREATS TO VALIDITY
The validity of this work can be subject to some threats. In the following, threats to internal and external validity are discussed.
External validity refers to how much we can generalize our findings. The presented results are based on data from the StackOverflow community. We suspect that given the high number of respondents and the reputation of the community the results could be generalized outside the scope of our study.
Internal validity often refers to experimenter biases. For our results, the main threat is misreading the data visualization analysis and reaching conclusions biased by the knowledge areas of the researchers involved in this work.
8 CONCLUSION
The present work set out to provide insights to support attraction, hiring and retention policies for more diverse and inclusive software engineering environments. Using the anonymized data from the StackOverflow Developers' Survey, we performed analyses and correlations beyond the original ones, with the support of data visualization techniques that led to the insights behind our recommendations. Results show that diversity in the company is not yet a fully conscious decision-making factor for developers assessing a new job opportunity, and that respondents who identified themselves as women, non-binary or transgender tend to doubt their programming skills more, believing they are not as good as their peers. A discussion about unconscious bias, stereotypes, and impostor syndrome was presented, and we reinforce the importance of the initiatives to minimize bias and stereotypes that companies are undertaking in their hiring processes and in the development of their technical teams.
For future work, we see opportunities in selecting more specific aspects within the spectrum of diversity. For example, regarding cognitive diversity, since there has been an increase in computer science students with Asperger Syndrome (Ribu, 2010)(Egan, 2005), it is also important to tackle this issue globally in Software Engineering. There is also a need for teaching institutions and software companies to work together to understand these differences better in order to include them.
ACKNOWLEDGEMENTS
This project is partially funded by FAPERGS, project 17/2551-0001/205-4.
REFERENCES
Ismail, N. (2018). Is there a gender pay gap in the technology sector?
Stackoverflow (2018). How do developers assess potential jobs?
Tarr, T. (2018). By the numbers: What pay inequality looks like for women in tech.
Integrating semantic and syntactic descriptions for chaining geographic services
Rob Lemmens¹, Carlos Granell², Andreas Wytzisk¹, Rolf de By¹, Michael Gould², Peter van Oosterom³
¹ International Institute for Geo-Information Science and Earth Observation (ITC)
Enschede, The Netherlands
² Universitat Jaume I (UJI)
Castellón, Spain
³ Delft University of Technology
Delft, The Netherlands
ABSTRACT
Accelerating the development of complex and heterogeneous geographic services requires improved methods that integrate service discovery, composition, and reuse. We present an integrated use of semantic and syntactic service descriptions for service chaining, by combining an application that supports service discovery and abstract composition, with another that supports concrete composition and execution of services. This facilitates the use of XML-based service description languages for building a geo-service-reuse architecture based on common ontologies and shared service descriptions.
KEYWORDS: semantic annotation, service reuse, geographic services description, discovery and composition methodology
The field of Geographic Information Systems (GIS) has been highly influenced by advances in web service technology. The resulting proliferation of specialized geographic services (e.g., for visualizing vector cartographic data, locating a map view given a toponym, and more recently, specific geo-processing functions) has created an interesting challenge: integrating multiple geographic services, each from a specific information community and spatio-linguistic region. The objective is to produce viable alternatives to the common practice of downloading and geo-processing massive datasets using traditional desktop GIS. Desktop GIS workflow historically has required complex manual data sourcing and reformatting before arriving at even the simplest analysis such as visualization of geodata themes in context (e.g., spatial relationship between factories and schools). Access to chains of remote geographic services promises more flexible, just-in-time analysis of geographic data that is updated in situ, however in practice the chaining of geographic services is non-trivial. Geographic data are special in that multiple versions of the same entities on the earth’s surface can differ radically in terms of data model, scale, generalization of data, and the conceptual models used by the data collectors. Also, most geographic data are collected by different government agencies, and therefore important semantic differences are also found at administrative borders at all levels. Moreover, geographic data objects may have multiple geometric and/or graphic representations, depending on the service type and on the client accessing them.
Alameh [1], in her conceptual description of geographic service chaining, ends by highlighting unresolved research topics such as semantics and dynamic service chaining. Here we show concrete progress in a methodology that combines service discovery, abstract composition (identifying service chain functionality), concrete composition (controlling data flow) and execution. The first two are being studied by semantics researchers and the latter two are common syntactic research issues, however most approaches to geo-service chaining have addressed semantic and syntactic issues superficially and separately.
Here an integrated approach identifies syntactic and semantic relations among possible components involved in geo-service chaining, by combining two independently developed applications. One supports service discovery and abstract composition (‘GeoMatchMaker’), the other concrete composition and execution of services (‘Integrated Component Designer’). Users submit queries based on a geographic semantic framework to the GeoMatchMaker, which identifies appropriate candidate services (some of which may be service compositions). Then semantic and syntactic descriptions are combined by the Integrated Component Designer to permit incremental building of a concrete service composition from the candidate services list. This article discusses the various steps in the integrated approach, for application in typical scenarios involving geographic information services.
Service Description Background (possibly as sidebar)
Service description standards for geographic services are evolving toward the use of general web service standards, such as WSDL for syntactic service descriptions [2] and OWL-S for semantic service descriptions [3]. Annotation approaches have emerged as a way to bridging the gap between the syntactic and semantic worlds.
Syntax-based descriptions
Web Service Description Language (WSDL) is a widely accepted standard for describing web service interfaces. During discovery and composition phases we focus on the abstract part of a WSDL description - operation and input/output messages. Implementation details will be needed during the service execution. At that stage, we make use of OASIS Web Services Business Process Execution Language (WSBPEL), that expresses how a set of web services are to be invoked. Both specifications are expected to become recommendations under their respective committees (W3C and OASIS), yet they treat web services only at the syntactical level, necessary but insufficient for creating meaningful descriptions of web services.
Semantic-based descriptions
To improve semi-automatic discovery methods, services have to be described with formal languages that allow for machine reasoning. A key role is played by machine ontologies, which are machine accessible representations of conceptual models. The Web Ontology Language (OWL) is a recommended specification of W3C that facilitates the creation of (web-based) machine ontologies. OWL draws upon the formal theory of Description Logics, which has roots in first-order predicate logic and provides highly expressive concept-forming constructs [4]. OWL-S [3] is an upper ontology based on OWL that models the characteristics of web services and can be used to create semantically enriched web service descriptions.
OWL-S provides three modelling constructs at the top level, i.e., the service profile (what the service does), the service grounding (how the service can be accessed) and the service model (how to use the service in terms of semantic content, including its workflow). OWL-S provides classes that can be instantiated by a service provider to create specific service descriptions. Because OWL-S is an upper ontology, it obviously does not provide domain ontologies. These must be established by information communities themselves.
Annotation approaches
At present, the integration of syntactic and semantic descriptions is provided by two major approaches: OWL-S grounding and WSDL-S [5]. OWL-S provides abstract constructs for input and output parameters of processes. It does not explicitly describe the concrete I/O messages, but rather specifies, in a so-called grounding, how they must be linked to parameters in a concrete message mechanism. In the OWL-S specification version 1.1, WSDL is used as the grounding mechanism. For each OWL-S process, a mapping is created between each I/O parameter of the OWL-S process model and its corresponding target parameter in the WSDL document. Furthermore, other parameters, such as the operation name and a URI pointing to the actual WSDL document, are specified. The use of an OWL-S processor such as the OWL-S Virtual Machine allows for the control of interaction between web services, based on the combination of OWL-S process and grounding [6].
WSDL-S annotates web services by enriching WSDL descriptions, which otherwise lack semantic expressivity, with semantic tags (specifically the WSDL-S modelReference attribute for WSDL part and operation elements). WSDL-S suggests adding semantics to WSDL by using extensibility in the elements and attributes supported by the WSDL specification, and permitting the relation between existing WSDL constructs and ontology concepts.
RiskMap service scenario
A typical geographic service chaining scenario might involve planning for possible emergency situations, such as in the following example, called ‘RiskMap’ service. This service should generate a map with the real-time locations of potentially hazardous substances such as ammonia or explosives, and then centre the map around a user-specified location (“hazardous substances near my city”).
Consider a scenario in which a service engineer has to build the above service from smaller distributed services for an end user who only interacts with the composite service. Assume that the geographical aspects of the hazard information are provided by a Web Map Service (WMS) as defined by the OpenGeospatial Consortium (OGC; see http://www.opengeospatial.org). A WMS GetMap request is formulated by a URL containing input parameters for specific geographic features (e.g. points representing hazardous sites) and the geographic extension of the map view. This geographic extension may be determined indirectly by translating a toponym (using a gazetteer service) to its corresponding bounding box (via the BBOX service).
The service engineer is tasked to provide a service chain that allows the user to enter a city name and that shields him/her from the detailed WMS parameter construction. The elements of the RiskMap service chain and its output are depicted in respectively figures 1a and b.
In this scenario the resolution of a city name and determination of its location may be handled by one of several gazetteer services, each of which having its own geographic coverage, data resolution and special semantic (in addition to syntactic) interface needs. The same is true for the other services in the chain. Key to the methodology described here, is that multiple service candidates may be considered at each of the four steps in figure 1a. However, determining how one or another of two functionally-similar candidates can interface with other services, in semantic and syntactic terms, is a non-trivial exercise. The approach described here provides concrete assistance in augmenting the semantic content of each service description, and in discovering and combining services.
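To make the RiskMap chain concrete, the sketch below strings the three steps together as plain functions. The OGC WMS GetMap parameter names (SERVICE, VERSION, REQUEST, LAYERS, BBOX, SRS, WIDTH, HEIGHT, FORMAT) follow the standard interface; everything else, including the gazetteer lookup, the fixed buffer size, the layer name and the endpoint URL, is an illustrative assumption rather than part of the systems described in this article.

```python
from urllib.parse import urlencode

def gazetteer_lookup(city_name):
    """Resolve a toponym to a point (lon, lat). Placeholder for an external
    gazetteer service; the hard-coded value is illustrative only."""
    known = {"Enschede": (6.8937, 52.2215)}
    return known[city_name]

def bbox_around(point, buffer_deg=0.25):
    """BBOX service step: bounding box around the point as (minx, miny, maxx, maxy)."""
    lon, lat = point
    return (lon - buffer_deg, lat - buffer_deg, lon + buffer_deg, lat + buffer_deg)

def getmap_request(bbox, layers="hazardous_sites", wms_base="http://example.org/wms"):
    """Build a WMS 1.1.1 GetMap URL for the bounding box.
    The base URL and layer name are hypothetical."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layers,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": 800,
        "HEIGHT": 600,
        "FORMAT": "image/png",
    }
    return wms_base + "?" + urlencode(params)

# RiskMap chain: toponym -> point -> bounding box -> GetMap request
print(getmap_request(bbox_around(gazetteer_lookup("Enschede"))))
```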
**Semantic Framework**
Our integrated approach adopts a semantic framework as a basis for semantic service descriptions to support the discovery and abstract composition of geographic services. Figure 2 shows the proposed semantic framework, which is composed of three formal ontologies grouped into information and operational model:
Figure 2: UML class diagram, depicting an overview of the ontologies as part of the proposed semantic framework for geo-information and geo-operations.
- A feature concept ontology formally defines the conceptualisations of real world phenomena and the relationships between them. For example, ‘Building’ is a feature type that is (partially) defined by its thematic and spatial attributes.
- A feature symbol ontology formally defines the abstract elements that make up a feature in an object/field model, based on the ISO19109 standard [7]. This model distinguishes three abstraction levels, i.e., meta-level, implementation level and data level.
- A geo-operation ontology formally defines operation types in terms of their behaviour and is based on OWL-S. Each type is characterised by the behaviour of a well-known atomic GIS operation (inspired by the ISO19119 service taxonomy [8]) and its typical input and output parameters.
Information and operation model
Semantic service metadata can be represented in the ontology by classes or by individuals (class instances). A (partial) class definition of a gazetteer operation (which serves as a candidate operation for our RiskMap chain) in a Description Logic axiom is shown below. The prefixes refer to the hosting ontology. ‘LocSpat’ stands for the operation that reads a location attribute type (e.g., an address) and produces a spatial attribute type (e.g., a geometric object), a standard gazetteer operation.
```
opera:LocSpat ⊑
  opera:AcrossAttributeTypes ⊓
  (∃∀ opera:hasInputPar.(∃∀ opera:hasParType.symbol:GF_LocationAttributeType)) ⊓
  (∃∀ opera:hasOutputPar.(∃∀ opera:hasParType.symbol:GF_SpatialAttributeType))

with:
⊑    ‘is subclass of’
∃∀   conjunction of ‘there exists at least one’ and ‘for all’
⊓    ‘intersected with’
.    separator between role and role-filler
```
The above definition describes the ‘LocSpat’ operation type as subclass of the ‘AcrossAttributeTypes’ operation type and puts input and output restrictions on it. A more specific definition is created for an example gazetteer, the Alexandria Digital Library (ADL) Gazetteer (http://middleware.alexandria.ucsb.edu/client/gaz/adl/index.jsp). The definition specifies that the gazetteer takes as input an ‘address’ that only consists of a city name. The omitted ‘forall’ quantifier means that the operation can also take other input types (but they are all GF_LocationAttributeType). The output is of type ‘point’.
```
opera:ADLGazetteer ⊑
  opera:LocSpat ⊓
  (∃ opera:hasInputPar.(∃ opera:hasParType.(∃ opera:typeBijection.opera:OP.CityNameAddress))) ⊓
  (∃∀ opera:hasOutputPar.(∃ opera:hasParType.(∃ opera:typeBijection.opera:OP.Point)))
```
Another representation can be given with so-called ‘individuals’ that instantiate the concepts used in these class definitions. Both class and individual definitions are encoded in OWL in the ontology and they are stored in a knowledge base for reasoning purposes.
**Integrated Architecture and Implementation**
Figure 3 shows the integrated architecture for service chaining using syntactic and semantic descriptions. We assume that a set of common geo-ontologies (derived from the semantic framework) is shared by all participants. Also, service providers annotate their services using such geo-ontologies. The service discovery finds annotated services that are directly consumed by the composition process to build a concrete composition. As new compositions are published in the web services repository, not only single services are discovered but also compositions, thus increasing service reuse.
**Discovery and abstract composition**
Geo-service discovery in general involves the identification of service advertisements that may match a service request, which we refer to as matchmaking. Consider the service chain with \(n\) services:
\[
\text{chain}\ (S_1, ..., S_n)
\]
We seek cross-matches between the output parameters of a service and the input parameters of a subsequent service, and evaluate the behavioural aspects of the combination. For searching for a service \(S_{i+1}\) that follows a given service \(S_i\), an ontological request \(R\) (representing the service \(S_i\)) is tested against an ontological advertisement \(A\) (representing a candidate service \(S_{i+1}\)). In ontologies, a concept (e.g., ‘Building’) is interpreted as a set of individuals (e.g., ‘Louvre’, ‘Taj Mahal’, etc.). When ontologies are materialised as knowledge bases, concepts and their relationships are separated from the individuals. They
are contained in the so-called TBox and ABox, respectively\(^1\). The TBox (T stands for ‘Terminology’) holds declarations of concepts, and the ABox contains assertions (hence the term ‘ABox’) specific to individuals (instances of the concepts) [4].
Depending on whether we use concept-based or individuals-based definitions of the operations, there are four possibilities to perform the matchmaking. Concepts are denoted with upper case, individuals with lower case.
Match type I involves concept descriptions only and is done by TBox reasoning. Match types II, III and IV are performed with individuals by ABox reasoning. Differences between TBox and ABox reasoning in the context of this paper have been discussed in [9]. In our current GeoMatchMaker prototype, we have opted for type II matches, because the entry of advertisements and the interpretation of the results are more straightforward than for the other match types. The matching has been performed with the RacerPro reasoner (www.racer-systems.com) (see Figure 3) by inferring all candidate individuals \(a\) in the knowledge base that instantiate \(R\).
RacerPro is a knowledge representation system that can be used for reasoning with ontologies. It can directly read OWL documents and represent them as TBoxes and ABoxes in DL knowledge bases. Through a Java API, called JRacer, RacerPro provides numerous functions for managing the knowledge base and reasoning with its TBoxes and ABoxes. A small subset has been used in the kernel of GeoMatchMaker to provide reasoning capabilities.
For brevity we elaborate only on the search of a service that follows the first service (for which we have selected the ADL Gazetteer service). Figure 4 shows the results in terms of a set of matching services. These are services that create a bounding box around the geometric point, generated by the gazetteer. They are further evaluated by refining the requesting concept until one is left. After selecting the BBoxCreate service, there is one service left to complete the chain. This is a service that must build a GetMapRequest from the bounding box. Information, such as feature selection and coordinate system metadata, which are needed by the GetMapRequest, are also added in this service.
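The toy sketch below illustrates the intent of a type II match for this step: given advertised services with declared input and output parameter types, find those that can follow the gazetteer. It deliberately reduces the DL reasoning to simple set containment, does not use RacerPro or any real reasoner, and all service and type names beyond those mentioned in the text are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ServiceAd:
    """A service advertisement (ABox individual) with its declared parameter types."""
    name: str
    input_types: frozenset
    output_types: frozenset

@dataclass
class Request:
    """A request concept: required input and output parameter types."""
    required_inputs: frozenset
    required_outputs: frozenset

def type_ii_match(request, advertisements):
    """Toy matchmaking: return advertisements whose declared parameter types
    cover the request. A DL reasoner would additionally exploit subsumption
    between types; here only set containment is tested."""
    return [
        ad for ad in advertisements
        if request.required_inputs <= ad.input_types
        and request.required_outputs <= ad.output_types
    ]

# Looking for a service that can follow the gazetteer: it must accept a point
# and produce a bounding box (type names are illustrative).
request = Request(frozenset({"GM_Point"}), frozenset({"GM_Envelope"}))
ads = [
    ServiceAd("BBoxCreate", frozenset({"GM_Point"}), frozenset({"GM_Envelope"})),
    ServiceAd("ADLGazetteer", frozenset({"CityNameAddress"}), frozenset({"GM_Point"})),
]
print([ad.name for ad in type_ii_match(request, ads)])   # ['BBoxCreate']
```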

\(^1\) These terms have no relationship whatsoever with the term BBox (bounding box parameter of a map server)
Figure 4 (b): The RiskMap service chain structure as a result of discovery and abstract composition.
The GeoMatchMaker prototype integrates the Protégé ontology editor (http://protege.stanford.edu/) and provides an interactive environment to compose the service chain. The chain can be exported for execution purposes in different forms, such as an OWL-S document, which supports nine control flow patterns. Figure 4b shows the structure of the service chain modelled as an OWL-S graph of individuals. The boxes represent instances of OWL-S process concepts. Amongst them are the discovered geo-operations (ADLGazetteer, BBoxCreate, MakeGetMapRequest) and supporting control constructs (Sequence, Perform, etc). The sequence pattern can be recognised by following the ‘first-rest’ control flow and is portrayed as a UML activity in Figure 1a.
Concrete composition and execution
The Open Geospatial Consortium, and the ISO technical committee for Geographic Information and Geomatics (ISO TC211, see http://www.isotc211.org/) have defined three design patterns for geographic service composition according to the degree of transparency of the web service chain complexity to the client [8]: transparent or user-defined chaining, opaque or aggregate service chaining, and translucent or workflow-managed chaining. As the name suggests, translucent chaining is midway between transparent and opaque chaining, offering balanced benefits as compared with the other two patterns [1].
Our concrete composition approach relies on translucent chaining to reduce, for the user, the complexity of designing geographic service chains by means of the notion of integrated component [10], which is the fundamental building block for service composition. The idea consists of creating an integrated component from a set of candidate geographic web services with the same functionality. For instance, an integrated component for web mapping may comprise several concrete web mapping services, improving the chain flexibility because several web mapping services are available for carrying out the integrated component’s functionality. Next, users create more complex and heterogeneous integrated components by reusing simpler integrated components already available in catalogues. Each new integrated component encapsulates the functionality of the contained integrated components. Two interfaces control the access to an integrated component: the public interface openly expresses an integrated component’s functionality (described in WSDL-S), whereas the private interface encapsulates how an integrated component performs its functionality. For example, the code snippet below shows some features of WSDL-S to semantically annotate operations and parameters for the Gazetteer integrated component (public interface). The annotation (by the WSDL-S modelReference attribute) for the operation getCoordinates refers to the concept LocSpat in the geo-operation ontology, which formally defines an operation that returns a spatial attribute type, based on a location. WSDL part tags are annotated in the same manner.
```xml
<wsdl:message name="getMsgResponse">
<wsdl:part name="coordinates" element="xsd1:ResponseType"
wssem:modelReference="Ontology0#Point"/>
</wsdl:message>
<wsdl:message name="getMsgRequest">
<wsdl:part name="name" element="xsd1:RequestType"
wssem:modelReference="Ontology0#Point"/>
</wsdl:message>
```
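The operation-level annotation mentioned above is not part of the snippet; the following sketch illustrates what it could look like (the port type name and ontology prefix are illustrative, not taken from the actual service description; the message names come from the snippet above):

```xml
<wsdl:portType name="GazetteerPortType">
  <!-- Sketch only: the getCoordinates operation is annotated with the
       geo-operation concept LocSpat, as described in the text. -->
  <wsdl:operation name="getCoordinates"
                  wssem:modelReference="GeoOperationOntology#LocSpat">
    <wsdl:input  message="tns:getMsgRequest"/>
    <wsdl:output message="tns:getMsgResponse"/>
  </wsdl:operation>
</wsdl:portType>
```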
The notion of integrated component, in terms of encapsulation and of providing integrated services of geospatial information, has similarities with the translucent chaining pattern described previously. Once an integrated component meets certain user requirements, its description is transformed into an executable WSBPEL process document, which contains concrete and executable geographic web services.
The centre and right-hand side of Figure 3 show the concrete composition and execution. Service discovery produces an OWL-S document that contains an abstract chain, i.e., a list of appropriate web services (or compositions) for composition (Figure 4b). The link between service discovery and concrete composition consists of creating integrated components from such a list. For that, we offer three different possibilities (Figure 3). The first one automatically creates the corresponding integrated component from a WSDL-S description (Automatic IC Creation box, Figure 3). Given a WSDL description, the second possibility allows users to manually generate a new integrated component by annotating it with the concepts taken from shared geo-ontologies. In both cases, a new integrated component is created from existing web services. Yet, as one goal is to improve service reuse, the service discovery can also discover existing compositions (seen as integrated components) to be used in new compositions. In this third case, the creation process is not necessary because the integrated component already exists. The composition process (IC Composition box, Figure 3) then constructs, using composition patterns, complex integrated components by incrementally reusing existing ones taken from the repository [10].
Figure 5 shows a screenshot of the Integrated Component Designer applied to the RiskMap scenario. This software tool is a set of Eclipse plug-ins (www.eclipse.org) developed in Java. The figure shows the graphical editor for defining the private interface of the RiskMap integrated component (represented by the getRiskMap function). This component combines (reuses) two other integrated components already available, LocationAttrToBox and UMN WebMapService, by using the composition pattern sequence (red box in Figure 5). Each of them in itself is a composition. The former contains the first two services in the abstract chain, ADL Gazetteer and BBoxCreate, forming an intermediate composition that takes a city name as input and produces a bounding box. The latter integrates the last two services, MakeGetMapRequest and UMN MapServer, encapsulating a full GetMap request to retrieve the final map image.
The user might execute a given composition through the transformation process (IC transformation box, Figure 3). This process serialises the integrated component description representing our RiskMap composition into a WSBPEL process document. The right hand side of Figure 3 shows the service execution, which takes the WSBPEL process and produces the risk map. We have tested the resulting process in the Oracle BPEL Process Manager (www.oracle.com/technology/products/ias/bpel/index.html).
Related Work
Current OGC Web Service Common Specification (OWS) efforts [11] within the Open Geospatial Consortium are aligning basic geographic services such as Web Map Service (WMS) and Web Feature Service (WFS) with the mainstream publish-find-bind paradigm represented with SOAP, WSDL and UDDI. In this context, the recent Web Processing Service (WPS) specification provides access to spatial operations, ranging from simple calculations to complex models by means of web service interfaces exposing the parameters for data input, operation initialisation and data output [12], however without service chaining support. Within OGC, service discovery is handled by a service registry that provides service metadata with details on service types, as defined in ISO 19119 (Services) [8]. Currently there is no OGC specification that deals with semantics in support of service (and data) discovery. An attempt has been made in the OGC Geo Semantic Web Interoperability Experiment (GSW IE) [13], however with very limited results. Einspanier et al. [14] identify the need for the integrated use of syntax and semantics in service chaining. Other related research (although mainly on the semantic aspects of service chaining) is reported by Klien et al., [15] which addresses geographic ontology design, client interfacing and reasoning in the application field of disaster management.
Conclusion
One of the strengths of the presented integrated approach is the use of common ontologies for the different steps in geographic service chaining. Web-based ontologies provide a formal yet flexible mechanism to describe web services. Our GeoMatchMaker prototype does not support automatic discovery, but rather semi-automatic (human controlled) discovery. Another limitation lies in the exchange of workflow information between the prototypes. Currently, there is no single common format that holds workflow elements, ontology concepts and WSDL parameters. However, this can be implemented by a relatively simple style sheet transformation allowing, in this case, the reuse of existing compositions that are already annotated semantically in a semi-automatic way. From our implementation experiences, the WSDL-S approach has been implemented with less effort than the OWL-S grounding. Although OWL-S supports the whole range of discovery-composition-chaining, there are fewer enactment engines for it, compared to other standards such as WSDL and WSBPEL. From a practical point of view, a hybrid solution is therefore still preferred.
The strength of the approach described here lies in the way syntactic and semantic service descriptions are combined for chaining geographic services, allowing us to take advantage of the composition of the semantics used in the service discovery process and, in turn, permitting service reuse by discovering existing annotated compositions. Indeed, reuse becomes an important point in the concrete composition process because it is essential to rapidly create complex geo-processing services. Also, reusable services and compositions offer developers service description reuse and also knowledge reuse as borrowed from previous solutions and experiences applied to similar problems, of special interest in geographic applications where multiple users work in and study the same geographical region.
Acknowledgements
This work has been partially supported by the EU AWARE project SST4-2004-012257.
References
Carlos Granell is currently a PhD candidate at the Universitat Jaume I, Castellón, Spain. He holds a MSc in Computer Engineering from the same university. His research interests focus on the interoperability on GIS, and web service reuse and composition integrated in Spatial Data Infrastructure (SDI). Contact: carlos.granell@lsi.uji.es
Rob Lemmens is assistant professor at ITC. His research activities focus on interoperability issues in spatial data infrastructures and application development, based on ISO and Open Geospatial Consortium (OGC) specifications. Lemmens holds an MSc in Geodesy from Delft University of Technology and is currently pursuing his PhD on semantic interoperability of distributed geo-services. Contact: lemmens@itc.nl.
Michael Gould is a senior lecturer in Information Systems at the Universitat Jaume I, Castellón, Spain, where he teaches Geographic Information Systems and Systems Integration. His research interests include Spatial Data Infrastructures (SDI) and web services interoperability. Contact: gould@lsi.uji.es
Andreas Wytzisk is assistant professor at ITC. His research activities focus on interoperability issues in spatial data infrastructures, distributed simulations and sensor webs. Wytzisk holds a PhD in Geoinformatics from the University of Münster, Germany. Contact: wytzisk@itc.nl.
Rolf de By is associate professor at ITC, in the department of Geoinformation Processing. His research interests are in the fields of designing large and advanced information systems that handle geospatial data, spatial database technology and methods, and novel applications. De By holds an MSc degree in Applied Mathematics and a PhD degree in Computer Science, both from the University of Twente. Contact: deby@itc.nl
Peter van Oosterom obtained an MSc in Technical Computer Science in 1985 from Delft University of Technology, The Netherlands. In 1990 he received a PhD from Leiden University for his thesis "Reactive Data Structures for GIS". Since 2000 he has been professor at the Delft University of Technology (OTB) and head of the section ‘GIS Technology’. He is European editor for the International Journal on Computers, Environment and Urban Systems (CEUS). Contact: P.J.M.vanOosterom@tudelft.nl
A query optimizer based on a full reducer algorithm for SuperSQL
Arnaud WOLF†, Kento GOTO‡, and Motomichi TOYAMA††
†,‡,†† Department of Information and Computer Science, Keio University
Hiyoshi 3–14–1, Kouhoku-ku, Yokohama-shi, Kanagawa, 223–8522 Japan
E-mail: †{arnaud,goto}@db.ics.keio.ac.jp, ‡toyama@ics.keio.ac.jp
Abstract SuperSQL is an extension of SQL that automatically formats data retrieved from the database into various kinds of application data as the output of a query. Output formats include, but are not limited to, HTML, HTML5, XML and PDF. SuperSQL continuously evolves to support current emerging technologies, particularly related to web development. Current developments lead us to identify improvement points and remodel the design of the SuperSQL architecture. Specifically, the current implementation includes useless cartesian products that significantly slow down the execution of queries, because all the tables are retrieved together, while we could actually retrieve them independently from each other. This paper proposes an optimizer based on query decomposition and a full reducer algorithm.
Key words Data retrieval, query optimization, SuperSQL, query language
### 1. Introduction
SuperSQL is an extension of SQL that makes it possible to generate various kinds of application data directly as the result of a query. Its syntax is similar to SQL with additional formatting capabilities. Possible application data output formats include HTML, PDF, XML, XLS, Ajax, etc. The current main usage of this language is the generation of websites or web applications, with the advantage of not having to use any other language that would require more programming skills. A sample SuperSQL query with its resulting webpage is shown in Figure 1.
Figure 1 Sample SuperSQL query and its result
While many features enabling a richer and more flexible use of SuperSQL have been developed recently, SuperSQL also faces important performance issues. More specifically, the component of the compiler that is responsible for data retrieval naively generates the SQL query by gathering all the involved tables within it. This causes the generation of very large intermediate tables, slowing down execution.
Our current work consists of developing an optimizer that splits the original SQL query into several SQL queries by using query decomposition, in order to reduce the size of the intermediate tables. Our query decomposition model itself is based on the concept of full reducers, an old optimization technique often used in distributed databases.
The outline of this paper is as follows. In Section 2, we present a simple experiment highlighting the performance issues of SuperSQL. In Section 3, we describe SuperSQL in more detail, in terms of architecture and data representation. In Section 4, we introduce the current optimizer implementation, and Section 5 concludes.
### 2. Performance issues in SuperSQL
A very simple experiment shows the performance issues of the current implementation. The experiment is based on the following query:
GENERATE HTML
[p.name]!, [s.name]!
FROM professor p, student s
WHERE p.dept = 'ICS' AND s.dept = 'ICS'
This query writes both the list of professors and the list of students of the ICS department into an HTML file. While varying the size of both tables professor and student, we executed the above query and collected the execution time. For a given size, we also executed the query that writes only the list of professors and the one that writes only the list of students, and compared the sum of the execution times of those two queries to that of the original query. The results are shown in Figure 2.

The results clearly show that the original query induces a behaviour that is far from the expected trend, namely the sum of the trends of the individual queries. This led us to conclude that we should change the behaviour of the SuperSQL compiler so that it splits the retrieval of the two tables.
### 3. SuperSQL inner architecture and process
As specified in the introduction, the syntax of SuperSQL is very similar to SQL. In addition, however, the user also has to specify how the data should be organised in the generated output file. The structure of a SuperSQL query is the same as that of an SQL query, except that the SELECT clause is replaced by another structure. A SuperSQL query always starts with GENERATE "format", where "format" is the desired format of the output file, and is followed by the Target Form Expression (TFE) clause, whose role is to organise the retrieved data into a tree structure. We call the remaining part of the query the SQL clause. The structure derived from the TFE clause is called the schema.
Figure 3 shows the architecture of the SuperSQL compiler.

The TFE semantics is based on operators. The two main operators are the connectors and the repeaters. Connectors are used to connect the data within a specific dimension. In particular, in the case of HTML files, there are three dimensions: the horizontal dimension represented by a comma (","), the vertical dimension by an exclamation point ("!") and the link dimension by a percent symbol ("%").
The repeaters, symbolised by a pair of brackets ("[]"), connect the components written inside them in the associated dimension, and unroll all the tuples of the corresponding relation.
From these semantics, a tree structure called the TFE tree, representing the hierarchy of the data to be retrieved, is derived. Figure 4 shows an example of a query and its tree structure.
GENERATE HTML
[a.name, [b.name]!, [c.name]!]!
FROM A a, B b, C c

If two components are not embedded within the same repeater, they are called independent, that is, they do not form a tuple together. In the case of our example, b and c are independent, but a and b, or a and c, are not. Therefore, a branch of the TFE tree is a maximal set of components that are all dependent on each other.
In the current implementation, the SQL query maker derives the SQL query only from the components found in the SQL clause. It does not use the schema. The SQL clause, containing the FROM clause, WHERE clause and so on, is simply copied into the final SQL query, and the SELECT clause gathers all the desired attributes. For instance, the SuperSQL query in Figure 4 leads to the following SQL query:
```
SELECT a.name, b.name, c.name
FROM A a, B b, C c
```
The result of executing this query is the cartesian product of the tables A, B and C, which can be a very large table. However, the desired organisation specified by the schema does not express a cartesian product, because tables B and C are independent here. To be more concrete, if each table contains 30 tuples, the size of the intermediate table can reach almost 4000 tuples, while the array in the output file would not exceed 30 rows. Therefore, a post-process is performed in order to remove useless tuples and obtain the final structure.
If the intermediate table retrieved from the cartesian product is very large, the retrieval and post-processing operations cause significant overhead. Thus, the purpose of the optimizer we are developing is to reduce the size of these intermediate tables by splitting the SQL query into several SQL queries, each involving only tables that are dependent on each other according to the schema.
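As a rough illustration of the intended decomposition (the treatment of the table A shared by the two branches is detailed in Section 4.4.1), the query above could be split along its two dependent groups of components:

```sql
-- Hypothetical decomposition of the query of Figure 4 (illustrative only):
SELECT a.name, b.name FROM A a, B b;  -- components a and b are dependent
SELECT a.name, c.name FROM A a, C c;  -- components a and c are dependent
-- Each intermediate table now grows with the product of two table sizes
-- instead of three, and the post-process has far fewer tuples to discard.
```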
The tree structure used in the optimizer described in this paper is derived from the TFE tree above. For our optimizer, we only need to know the tables themselves, and not necessarily the attributes (at least, not at this stage). Therefore, our tree structure is similar to the one above, except that we replace each component by its table’s name and remove the duplicates in each node. Moreover, in case a table appears more than once in a branch, we only keep its uppermost occurrence in the branch. Figure 5 shows the modified tree of the example in Figure 4.

**Figure 5** The modified TFE tree
### 4. Optimizer based on query decomposition and full reducer
#### 4.1 Principle of query decomposition
In this section, we introduce the general principle of the query decomposition we designed for SuperSQL. The inputs and outputs of the query decomposition process are specified in Figure 6. The input is an SQL query from which we extract a set of tables, an algebraic expression from the FROM clause, and a predicate from the WHERE clause.
The purpose of query decomposition is to find a partition of the set of branches that optimizes the data retrieval and, for this partition, to find appropriate algebraic expressions $F_1, \ldots, F_m$ and predicates $P_1, \ldots, P_m$ (one per set of the partition) satisfying the constraint of equation (1). Equation (1) expresses that the result obtained for each branch from each set of the partition should be equivalent to the projection on this branch of the result obtained from the original query.
$$
\forall i \in [1, m],\ \forall B \in B_i:\quad \pi_B\bigl(\sigma_{P_i}(F_i)\bigr) \equiv \pi_B\bigl(\sigma_{P}(F)\bigr) \qquad (1)
$$

where $B_1, \ldots, B_m$ are the sets of the partition, and $F$ and $P$ denote the algebraic expression and the predicate of the original query.
#### 4.2 Principle of full reduction
Let $Pa$ be a given partition of $B$. In many cases, it is possible to retrieve all the sets of $Pa$ with a single query per set. However, there are also many cases where this is not possible, because the expression $F$ or the predicate $P$ of the original query involves a dependence between some branches. As an example, let us consider the following query:
```
GENERATE HTML
[a.name]!, [b.name]!, [c.name]!
FROM A a, B b, C c
WHERE a.id = b.id AND b.reg = c.reg AND a.dept = "ICS"
```
In the case of this query, we have three branches, but the WHERE clause involves predicates between them. Therefore, if we retrieve each of the three branches separately, we still need to check afterwards whether each tuple of each relation satisfies the condition with at least one tuple of the other relations.
[2] presents an algorithm for full reduction intended for distributed database systems. In a distributed database system, a query may involve relations in different locations, and the data retrieved from each of those locations has to be gathered in a common place before being processed. Therefore, in order to avoid overhead due to intersite communication, it can be desirable to reduce the size of the data from each location before joining them. In the case of the above example, if A, B and C are stored in different locations, it could be useful to first select, for instance, only the tuples of A that satisfy the predicate with at least one tuple of B, before actually performing the join.
[2] introduces a technique to reduce the size of relations as much as possible by performing what the authors call a full-reducer sequence, applicable to simple queries (involving only inner joins and no special algebraic operators). It consists of the following steps. First, given an SQL query q, we construct a graph $G_q = (R, E)$, where $R$ is a set of relations and $E$ is a set of pairs of relations such that, for each pair $(R_1, R_2)$ in $E$, q includes a binary predicate involving both $R_1$ and $R_2$. We call this graph the query graph of q.
The second step is to perform a recursive sequence of updates, called a full reducer, that reduces the size of each relation by keeping only the tuples that satisfy the predicates with at least one tuple from the other relations it is connected with. As this sequence is described in [2], we do not describe it in detail here.
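For the example above, one step of such a sequence amounts to a semijoin. Expressed over materialised branch tables, it could look as follows (the table names are hypothetical and this is a sketch of the idea, not the generated code):

```sql
-- Keep only the tuples of the materialised A-branch that join with at least
-- one tuple of the B-branch (one semijoin step of a full-reducer sequence).
DELETE FROM branch_A a
WHERE NOT EXISTS (SELECT 1 FROM branch_B b WHERE b.id = a.id);
```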
An important point is that, according to [2], a full reducer can be found if and only if the query graph does not contain a cycle, that is, if it is a tree; if the query graph contains a cycle, it is impossible to find a full reducer. Considering these limitations, within the scope of the optimizer, a SuperSQL query can be defined by the following inputs:
- a set of relations $R$.
- a set of branches $B$, with each element of $B$ being a subset of $R$.
- a predicate $P$.
Predicates are modeled as polynomials of elementary predicates. The + operator corresponds to OR and the $\times$ operator corresponds to AND.
$$P = \sum_{i=0}^{p} \prod_{j=0}^{m_i} p_{ij}$$
The $p_{ij}$’s are called elementary predicates; these are the predicates directly found in the WHERE clause, such as "a.id = b.id". An elementary predicate is either a unary predicate, if it involves only one relation, or a binary predicate, if it involves two relations.
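For instance, the WHERE clause of the example query of Section 4.2 corresponds to a single monomial with three elementary predicates, two binary and one unary:

$$P = (a.id = b.id) \times (b.reg = c.reg) \times (a.dept = \text{"ICS"})$$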
#### 4.4 The different steps of the optimization
In this section, we describe the different steps of the optimization performed by the optimizer we are developing.
##### 4.4.1 Pretreatment of branches
As mentioned in Figure 6, in order to be usable as an input of the query decomposition process, the set of branches should be disjoint. In general, however, this is far from being the case.

Two restrictions of the current optimizer should also be recalled. First, it only handles queries that do not include complex operators, that is, queries that involve only inner joins in the FROM clause (no outer joins), no aggregates, and no HAVING clause; the generalisation to any FROM clause, including outer joins, will be the object of near-future work. Second, as we will see below, the predicate needs to be factorised in order to build the queries; if the factorisation is not possible, the optimization is also impossible.

As an example, consider the query of Section 4.2. The three branches derived from it are $B_1 = (A, a)$, $B_2 = (B, b)$ and $B_3 = (C, c)$. Figure 7 shows an example of a query tree in the context of SuperSQL, and Figure 8 shows the full-reducer sequence that enables the data of each branch to be retrieved.

Returning to the pretreatment of branches, the tree represented in Figure 5 includes two branches: \( B_1 = \{(A, a), (B, b)\} \) and \( B_2 = \{(A,a), (C,c)\} \), where the relation \((A, a)\) is common to both. In order to make the branches disjoint, we replace each common relation of the two branches by relations with the same name but different aliases. For this example, the two branches become \( B_1 = \{(A, a1), (B, b)\} \) and \( B_2 = \{(A,a2), (C,c)\} \).
Let us assume that the predicate \( P \) of this query is defined as:
\[
P = (a.v = b.v) \times (a.w = c.w)
\]
Then the alias in each elementary predicate is replaced by the alias of the corresponding branch, and a binary predicate specifying that the primary keys of \( a1 \) and \( a2 \) must be equal is added.
\[
P' = \left( \prod_{pkey \in K_A} (a1.pkey = a2.pkey) \right) \times (a1.v = b.v) \times (a2.w = c.w)
\]
where \( K_A \) is the set of primary keys of \( A \). The set of relations that have been duplicated in this way is stored for later use during data construction.
##### 4.4.2 Pretreatment of predicates
As the vertices of the query graph are branches and not relations, we need to build it with predicates defined per branch, and not per relation. Thus, we need to extract those branch predicates from the predicate \( P' \). However, in order to do that, we need to transform \( P' \) into a specific shape.
\[
P' = \prod_{B \in B} P_B \times \prod_{(B', B'') \in B^2} P_{B', B''}
\]
So the second step of the optimizer is to factorise the predicate \( P' \) so that it fits equation (5). The \( P_B \)'s are used in the queries materializing each branch, and the \( P_{B', B''} \)'s are used to build the query tree and perform the full reduction.
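For the running example above, one factorisation of the duplicated predicate \( P' \) into the shape of equation (5) would be (our reading of the construction):

$$P_{B_1} = (a1.v = b.v), \qquad P_{B_2} = (a2.w = c.w), \qquad P_{B_1, B_2} = \prod_{pkey \in K_A} (a1.pkey = a2.pkey)$$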
However, there may be some predicates that cannot be factorised as above. For instance, still considering the example of Figure 5, the predicate of the equation (6) cannot be factorised as above. In such a case, the optimization is stopped, and the original retrieval within a single query is performed.
\[
P' = (b.v = c.v) + (b.w = c.w)
\]
In order to check whether a predicate is factorisable, we extract all the elementary predicates of \( P' \), put each of them in the appropriate monomial (the one involving its branch(es)), create a polynomial that is the product of all those monomials, and check equality by identification. In the case of the predicate of equation (6), for instance, inequality (7) shows that \( P' \) is not factorisable.
\[
(b.v = c.v) \times (b.w = c.w) \neq (b.v = c.v) + (b.w = c.w)
\]
of emptiness has to be performed in order to ensure that the emptiness of the result set of one connected component of the query graph causes the emptiness of all the other connected components of the graph.
#### 4.6 Data construction
The last step of the process is data construction. In this stage, the data is finally organised in a tree structure corresponding to the schema. Figure 11 shows an example of the output structure of the data constructor.
The output structure includes a sub-structure called a tuple tree. A tuple tree is a list of elements that are either data or another list of tuple trees. The root of the final structure is a single tuple tree, containing only the list of tuple trees for each root of the schema.
The data are input in the structure branch after branch. For a given branch, data are input tuple after tuple. For the insertion of a given tuple, the structure is traversed recursively considering the schema. When considering a given node of the tree, the existence of a tuple tree containing the values of the given tuple is checked. Then if necessary, a new tuple tree containing those values is created. The same process is applied recursively to the subtrees of this tuple tree.
As this step requires to manipulate the entire set of data retrieved from the database, the smaller the intermediate tables are, the faster the execution is.
The optimizer relies on a heuristic: we assume that the more the original query is divided, the faster the execution is. As a matter of fact, the decomposition necessarily leads to smaller intermediate tables. The weak point of this heuristic is that the data retrieval, with materialization and multiple updates during full reduction, may cause significant overhead. However, our assumption relies on the hypothesis that users of SuperSQL do not often write queries with a great number of branches or with complex predicates. Therefore, we assume that the query graph will not cause long full-reduction chains, and that the overhead due to data retrieval will be compensated by the performance gain in data construction. All these assertions will be verified in near-future work.
### 5. Conclusion and future works
In order to address the performance issues due to the undesirable cartesian products generated during data retrieval, we designed an optimizer based on query decomposition and full reduction. Figure 12 shows the flow graph of all the steps of the optimization.
We adapted the full-reduction principle widely used in distributed databases to the context of SuperSQL. The developed optimizer has limitations concerning the complexity of queries, and currently accepts only queries without outer joins, HAVING clauses, and aggregates. The optimizer has not been properly evaluated yet; the evaluation of this limited version is the next stage of our work. The evaluation will cover the performance gain provided by the optimizer, the measurement of the overhead introduced by the optimization itself, and the validity of our heuristic, that is, whether users mostly write queries that tend to be executed more efficiently with our optimizer.
Near-future work will also involve the generalisation of the query decomposition model to algebraic expressions including outer joins, which will rely strongly on the notion of independence. This step requires deeper analysis of the underlying relational algebra.
References
Simple Testing Can Prevent Most Critical Failures
An Analysis of Production Failures in Distributed Data-Intensive Systems
DING YUAN, YU LUO, XIN ZHUANG, GUILHERME RENNA RODRIGUES, XU ZHAO, YONGLE ZHANG, PRANAY U. JAIN, AND MICHAEL STUMM
Large, production-quality distributed systems still fail periodically, sometimes catastrophically where most or all users experience an outage or data loss. Conventional wisdom has it that these failures can only manifest themselves on large production clusters and are extremely difficult to prevent a priori, because these systems are designed to be fault tolerant and are well-tested. By investigating 198 user-reported failures that occurred on production-quality distributed systems, we found that almost all (92%) of the catastrophic system failures are the result of incorrect handling of non-fatal errors, and, surprisingly, many of them are caused by trivial mistakes such as error handlers that are empty or that contain expressions like “FIXME” or “TODO” in the comments. We therefore developed a simple static checker, Aspirator, capable of locating trivial bugs in error handlers; it found 143 new bugs and bad practices that have been fixed or confirmed by the developers.
Our study also includes a number of additional observations that may be helpful in improving testing and debugging strategies. We found that from a testing point of view, almost all failures require only three or fewer nodes to reproduce, which is good news considering that these services typically run on a very large number of nodes. In addition, we found that a majority of the failures can simply be reproduced by unit tests even though conventional wisdom has it that failures that occur on a distributed system in production are extremely hard to reproduce offline. Nevertheless, we found the failure manifestations are generally complex, typically requiring multiple input events occurring in a specific order.
The 198 randomly sampled, real world, user-reported failures we studied are from the issue tracking databases of five popular distributed data-analytic and storage systems: Cassandra, HBase, HDFS, Hadoop MapReduce, and Redis. We focused on distributed, data-intensive systems because they are the building blocks of many Internet software services, and we selected the five systems because they are widely used and are considered production quality.
| Software | Language | Failures (Total) |
|-----------|----------|------------------|
| Cassandra | Java | 3,923 |
| HBase | Java | 5,804 |
| HDFS | Java | 2,828 |
| MapReduce | Java | 3,469 |
| Redis | C | 1,192 |
| Total | | 17,216 |
Table 1: Number of reported and sampled failures for the systems we studied, and the catastrophic ones from the sample set
Table 1 shows the distribution of the failure sets. For each sampled failure ticket, we carefully studied the failure report, the discussion between users and developers, related error logs, the source code, and patches to understand the root cause and its propagation leading to the failure.
We further studied the characteristics of a specific subset of failures—the catastrophic failures, which we define as those failures that affect all or a majority of users instead of only a subset of users. Catastrophic failures are of particular interest because they are the most costly ones for the service providers, and they are not supposed to occur, considering these distributed systems are designed to withstand and automatically recover from component failures.
**General Findings**
What follows is a list of all of our general findings. Overall, our findings indicate that the failures are relatively complex, but they identify a number of opportunities for improved testing and diagnosis. Note that we only discuss the first five of the general findings in this article. Our OSDI paper [6] contains detailed discussions on the other general findings, and findings for catastrophic failures are discussed below (Findings 11-13).
1. A majority (77%) of the failures require more than one input event to manifest.
2. A significant number (38%) of failures require input events that typically occur only on long running systems.
3. The specific order of events is important in 88% of the failures that require multiple input events.
4. Twenty-six percent of the failures are non-deterministic—they are not guaranteed to manifest given the right input event sequences.
5. Almost all (98%) of the failures are guaranteed to manifest on no more than three nodes.
6. Among the non-deterministic failures, 53% have timing constraints only on the input events.
7. Seventy-six percent of the failures print explicit failure-related error messages.
8. For a majority (84%) of the failures, all of their triggering events are logged.
9. Logs are noisy: the median of the number of log messages printed by each failure is 824.
10. A majority (77%) of the production failures can simply be reproduced by a unit test.
**Finding 1:** A majority (77%) of the failures require more than one input event to manifest, but most of the failures (90%) require no more than three.
Figure 1 provides an example where two input events, a load balance event and a node crash, are required to take down the cluster. Note that we consider the events to be “input events” from a testing and diagnostic point of view—some of the events (e.g., “load balance” and “node crash”) are not strictly user inputs but can easily be emulated in testing.
**Finding 2:** A significant number (38%) of failures require input events that typically occur only on long running systems.
The load balance event in Figure 1 is such an example. This finding suggests that many of these failures can be hard to expose during normal testing unless such events are intentionally exercised by testing tools.
**Finding 3:** The specific order of events is important in 88% of the failures that require multiple input events.
Consider again the example shown in Figure 1. The failure only manifests when the load balance event occurs before the crash of slave B. A different event order will not lead to failure.
In many cases, even with the right combination and sequence of input events the failure is not guaranteed to manifest:
**Finding 4:** Twenty-six percent of the failures are non-deterministic; they are not guaranteed to manifest given the right input event sequences.
In these cases, additional timing relationships are required for the failures to manifest. For example, the failure in Figure 1 can only manifest when slave B crashes after the znode is deleted. If it crashes before the HMaster deletes the znode, the failure would not be triggered.
Findings 1–4 show the complexity of failures in large distributed systems. To expose the failures in testing, we need to not only explore the combination of multiple input events from an exceedingly large event space with many only occurring on long running systems, we also need to explore different permutations. Some further require additional timing relationships.
The production failures we studied typically manifested themselves on configurations with a large number of nodes. This raises the question of how many nodes are required for an effective testing and debugging system.
**Finding 5:** Almost all (98%) of the failures are guaranteed to manifest on no more than three nodes.
The number is similar for catastrophic failures, where 98% of them manifest on no more than three nodes. Finding 5 implies that it is not necessary to have a large cluster to test for and reproduce failures.
Note that Finding 5 does not contradict the conventional wisdom that distributed system failures are more likely to manifest on large clusters. In the end, testing is a probabilistic exercise. A large cluster usually involves more diverse workloads and fault modes, thus increasing the chances for failures to manifest. However, what our finding suggests is that it is not necessary to have a large cluster of machines to expose bugs, as long as the specific sequence of input events occurs.
**Catastrophic Failures**
Table 1 shows that 48 failures in our failure set have catastrophic consequences. We classify a failure to be catastrophic when it prevents all or a majority of the users from their normal access to the system. In practice, these failures result in a cluster-wide outage, a hung cluster, or a loss to all or to a majority of the user data.
The fact that there are so many catastrophic failures is perhaps surprising given that the systems considered all have high availability (HA) mechanisms designed to prevent component failures from taking down the entire service. For example, all of the four systems with a master-slave design—namely, HBase, HDFS, MapReduce, and Redis—are designed to, on a master node failure, automatically elect a new master node and fail over to it. Cassandra is a peer-to-peer system and thus by design avoids single points of failure. Then why do catastrophic failures still occur?
**Finding 11:** Almost all catastrophic failures (92%) are the result of incorrect handling of non-fatal errors explicitly signaled in software (see Figure 2).
These catastrophic failures are the result of more than one fault triggering, where the initial fault, whether due to hardware, misconfiguration, or bug, first manifests itself explicitly as a non-fatal error—for example, by throwing an exception or having a system call return an error. This error need not be catastrophic; however, in the vast majority of cases, the handling of the explicit error was faulty, resulting in an error manifesting itself as a catastrophic failure.
Overall, we found that the developers are good at anticipating possible errors. In all but one case, the errors were properly checked for in the software. However, we found the developers were often negligent in handling these errors. This is further corroborated in Findings 12 and 13, below.
Figure 1: A failure in HBase that requires two input events to trigger. A load balance event first causes a region R to be transferred from an overloaded slave A to a more idle slave B. After B opens R, HMaster deletes the ZooKeeper znode that is used to indicate R is being opened. If slave B crashes at this moment, another slave C is assigned to serve the region. After C opens R, HMaster tries to delete the same ZooKeeper znode again, but deleteOpenedZNode() throws an exception because the znode is already deleted. This exception takes down the entire cluster.
Figure 2: Breakdown of all catastrophic failures by their error handling.
To be fair, we should point out that our findings are skewed in the sense that our study did not expose the many errors that are correctly caught and handled (as evidenced by the long uptime of these systems).
Nevertheless, the correctness of error handling code is particularly important given their impact. Previous studies [4, 5] show that the initial faults in distributed system failures are highly diversified (e.g., bugs, misconfigurations, node crashes, hardware faults), and in practice it is simply impossible to eliminate all of them [1]. It is therefore unavoidable that some of these faults will manifest themselves into errors, and error handling then becomes the last line of defense [3].
**Trivial Mistakes in Error Handlers**
Finding 12: Thirty-five percent of the catastrophic failures are caused by trivial mistakes in error handling logic—ones that simply violate best programming practices, and that can be detected without system-specific knowledge.
Figure 2 breaks down the trivial mistakes into three categories: (1) the error handler ignores explicit errors; (2) the error handler over-catches an exception and aborts the system; and (3) the error handler contains “TODO” or “FIXME” comments.
Twenty-five percent of the catastrophic failures were caused by ignoring explicit errors. (An error handler that only logs the error is also considered to be ignoring the error.) For systems written in Java, the exceptions were all explicitly thrown, whereas in Redis they were system call error returns.
Another 8% of the catastrophic failures were caused by developers prematurely aborting the entire cluster on a non-fatal exception. While in principle one would need system-specific knowledge to determine when to bring down the entire cluster, the aborts we observed were all within exception over-catches, where a higher level exception is used to catch multiple different lower-level exceptions. Figure 3 shows such an example.
Figure 3: An entire HDFS cluster brought down by an over-catch
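In essence, an over-catch of this kind looks like the following hypothetical sketch (the method and logger names are invented for illustration; this is not the actual HDFS code):

```java
try {
  applyEditLogOperation(op);        // many different, mostly recoverable, errors can arise here
} catch (Throwable t) {             // over-catch: one high-level type swallows them all
  LOG.fatal("Unrecoverable error", t);
  System.exit(-1);                  // the node aborts even on a non-fatal exception
}
```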
**System-Specific Bugs**
**Finding 13:** In 23% of the catastrophic failures, the mistakes in error handling were system specific, but were still easily detectable. More formally, the incorrect error handling in these cases would be exposed by 100% statement coverage testing on the error handling logic.
In other words, once the problematic basic block in the error handling code is triggered, the failure is guaranteed to be exposed. This suggests that these basic blocks were faulty and simply never tested. The failure shown in Figure 1 belongs to this category. Once a test case can deterministically trigger the catch block, the failure will manifest with 100% certainty.
Hence, a good strategy to prevent these failures is a "bottom-up" approach: start from the existing error handling logic and try to reverse-engineer test cases that trigger it. While high statement coverage on error handling code might seem difficult to achieve, aiming for higher statement coverage in testing might still be a better strategy than applying random fault injections.
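As a schematic illustration of this bottom-up strategy (all class and method names below are hypothetical), such a test forces the suspect catch block to execute directly and then asserts that the handler behaves sanely:

```java
import static org.mockito.Mockito.*;
import static org.junit.Assert.*;
import org.junit.Test;

public class RegionCloseHandlerTest {
  // Hypothetical unit test: inject a collaborator that throws, so the error
  // handler under test is guaranteed to run, then check its behaviour.
  @Test
  public void closeToleratesAlreadyDeletedZnode() throws Exception {
    Coordinator coordinator = mock(Coordinator.class);
    doThrow(new IllegalStateException("znode already deleted"))
        .when(coordinator).deleteOpenedZnode("region-1");

    RegionCloser closer = new RegionCloser(coordinator);
    closer.close("region-1");                 // must not escalate to a cluster-wide abort

    assertTrue(closer.isClosed("region-1"));
  }
}
```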
The remaining 34% of catastrophic failures involve complex bugs in the error handling logic. While our study cannot provide constructive suggestions on how to identify such bugs, we found they only account for one third of the catastrophic failures.
**Aspirator: A Simple Checker**
In the subsection "Trivial Mistakes in Error Handlers," we observed that some of the most catastrophic failures are caused by trivial mistakes that fall into three simple categories: (1) error handlers that are empty or only contain log printing statements; (2) error handlers that over-catch exceptions and abort the system; and (3) error handlers that contain "TODO" or "FIXME" comments.
We built a rule-based static checker, Aspirator, capable of locating these bug patterns. We provided two implementations of Aspirator: one as a stand-alone tool that analyzes Java bytecode, and another version that can be integrated with the Java build system to catch these bugs at compile-time. The implementation details of Aspirator can be found in our OSDI paper [6].
**Checking Real-World Systems**
We first evaluated Aspirator on the set of catastrophic failures used in our study. If Aspirator had been used and the identified bugs fixed, 33% of the Cassandra, HBase, HDFS, and MapReduce’s catastrophic failures we studied would have been prevented. We then used Aspirator to check the latest stable versions of these four systems plus five other systems: Cloudstack, Hive, Tomcat, Spark, and ZooKeeper.
We categorized each warning generated by Aspirator into one of three categories: bug, bad practice, and false positive. Bugs are the cases where the error handling logic will clearly lead to unexpected failures. False positives are those that clearly would not lead to a failure. Bad practices are cases where the error handling logic is suspicious, but we could not definitively infer the consequences without domain knowledge. For example, if deleting a temporary file throws an exception and is subsequently ignored, it may be inconsequential. However, it is nevertheless considered a bad practice because it may indicate a more serious problem in the file system.
Overall, Aspirator detected 121 new bugs and 379 bad practices along with 115 false positives. Aspirator found new bugs in every system we checked.
Many bugs detected by Aspirator could indeed lead to catastrophic failures. For example, all four bugs caught by the abort-in-over-catch checker could bring down the cluster on an unexpected exception similar to the one in Figure 3. They have all been fixed by the developers of the respective systems.
Some bugs can also cause the cluster to hang. Aspirator detected five bugs in HBase and Hive that have a pattern similar to the one depicted in Figure 5(a). In this example, when tableLock cannot be released, HBase only outputs an error message and continues executing, which can deadlock all servers accessing the table. The developers fixed this bug by immediately cleaning up the states and aborting the problematic server.
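The pattern of Figure 5(a) is, schematically, the following (a sketch with illustrative identifiers, not the actual HBase code):

```java
try {
  tableLock.release();
} catch (IOException e) {
  // The failed release is only logged; execution continues with the lock
  // still held, which can deadlock all servers accessing the table.
  LOG.error("Can't release the table lock", e);
}
```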
Figure 5(b) shows a bug that could lead to data loss. An IOException could be thrown when HDFS is recovering the updates from the edit log. Ignoring this exception could cause a silent data loss.
**Experience**
Interaction with developers: We reported 171 bugs and bad practices to the developers of the respective systems: 143 have already been confirmed or fixed by the developers, 17 were rejected, and developers never responded to the other 11 reports.
We received mixed feedback from developers. On the one hand, positive comments include: “I really want to fix issues in this line, because I really want us to use exceptions properly and never ignore them”; “No one would have looked at this hidden feature; ignoring exceptions is bad precisely for this reason”; and “Catching Throwable [i.e., exception over-catch] is bad; we should fix these.” On the other hand, we received negative comments like: “I fail to see the reason to handle every exception.”
There are a few reasons why developers may be oblivious to the handling of errors. First, some errors are ignored because they are not regarded as critical at the time, and the importance of the error handling is realized only when the system suffers a serious failure. We hope to raise developers’ awareness by showing that many of the most catastrophic failures today are caused precisely by such obliviousness to the correctness of error handling logic.
Second, developers may believe the errors would never (or only very rarely) occur. Consider the following code snippet detected by Aspirator from HBase:
```java
try {
    t = new TimeRange(timestamp, timestamp + 1);
} catch (IOException e) {
    // Will never happen
}
```
In this case, the developers thought the constructor could never throw an exception, so they ignored it (as per the comment in the code). We observed many empty error handlers containing similar comments in the systems we checked. We argue that errors that “can never happen” should be handled defensively to prevent them from propagating. This is because developers’ judgment could be wrong, later code evolutions may enable the error, and allowing such unexpected errors to propagate can be deadly.
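For instance, a defensive variant of the empty handler above could turn the "impossible" exception into an immediate, visible failure; whether to abort, rethrow, or propagate is a design choice, and the following is only one hedged possibility, not the fix the developers actually applied:

```java
try {
    t = new TimeRange(timestamp, timestamp + 1);
} catch (IOException e) {
    // "Can never happen" -- but if it ever does, fail fast and visibly
    // instead of continuing with an uninitialized time range.
    throw new IllegalStateException(
        "Unexpected failure constructing TimeRange for " + timestamp, e);
}
```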
In the HBase example above, developers’ judgment was indeed wrong. The constructor is implemented as follows:
```java
public TimeRange(long min, long max) throws IOException {
    if (max < min)
        throw new IOException("max < min");
}
```
Here an IOException is thrown when an integer overflow makes max smaller than min: when timestamp is Long.MAX_VALUE, timestamp + 1 wraps around to a negative value, so the constructor throws. Swallowing this exception could lead to data loss. The developers later fixed this by handling the IOException properly.
Third, proper handling of the errors can be difficult. It is often much harder to reason about the correctness of a system’s abnormal execution path than its normal execution path. The problem is further exacerbated by the reality that many of the exceptions are thrown by poorly documented third-party components. We surmise that in many cases, even the developers may not fully understand the possible causes or the potential consequences of an exception. This is evidenced by the following code snippet from Cloudstack:
```java
} catch (NoTransitionException ne) {
    /* Why this can happen? Ask God not me. */
}
```
We observed similar comments from empty exception handlers in other systems as well.
Finally, feature development is often prioritized over exception handler coding when release deadlines loom. We embarrassingly experienced this ourselves when we ran Aspirator on Aspirator’s code: We found five empty exception handlers, all of them for the purpose of catching exceptions thrown by the underlying libraries and put there only so that the code would compile.
**Good practice in Cassandra:** Among the nine systems we checked, Cassandra has the lowest ratio of bugs to exception-handler blocks, indicating that its developers are careful to follow good programming practices in exception handling. In particular, the vast majority of exceptions are propagated up to the callers and handled by top-level methods in the call graph. Interestingly, among the five systems we studied, Cassandra also has the lowest rate of catastrophic failures in its randomly sampled failure set (see Table 1).
**Reactions from HBase developers:** Our OSDI paper prompted HBase developers to start the initiative to fix all the existing bad practices. They intend to use Aspirator as their compile-time checker [2].
**Conclusions**
We presented an in-depth analysis of 198 user-reported failures in five widely used, data-intensive distributed systems. We found that the error-manifestation sequences leading to the failures are relatively complex. However, we also found that almost all of the most catastrophic failures are caused by incorrect error handling, and more than half of them are trivial mistakes or can be exposed by statement coverage testing.
Existing testing techniques will have difficulty uncovering many of these error-handling bugs. They all use a “top-down” approach: start the system with generic inputs and actively inject errors at different stages. However, the size of the input and state space makes the problem of exposing these bugs intractable.
Instead, we suggest a three-pronged approach to expose these bugs: (1) use a tool similar to Aspirator that is capable of identifying many of the trivial bugs; (2) enforce code reviews on error-handling code, since the error-handling logic is often simply wrong; and (3) purposefully construct test cases that reach each error-handling code block.
Our detailed analysis of the failures and the source code of Aspirator are publicly available at: http://www.eecg.toronto.edu/failureAnalysis/.
**Acknowledgments**
We greatly appreciate the anonymous OSDI reviewers, Jason Flinn, Leonid Ryzhyk, Ashvin Goel, David Lie, and Rik Farrow for their insightful feedback. We thank Dongcai Shen for help with reproducing five bugs. This research is supported by an NSERC Discovery grant, NetApp Faculty Fellowship, and Connaught New Researcher Award.
**References**
• Web Ontology Language
– W3C Recommendation for the Semantic Web, 2004
– OWL 2 (currently Proposed W3C Recommendation) forthcoming this October
• We already cover OWL 2 here
• Semantic Web KR language based on description logics (DLs)
– OWL DL is essentially DL SROIQ(D)
– KR for web resources, using URIs as identifiers
– Using web-enabled syntaxes, e.g. based on XML or RDF
• We mostly use concise DL syntax, some RDF syntax examples
– Many technical and extra-logical aspects, e.g. datatypes
• We focus on the logical core language
OWL Rationale
An ontology language for the Web ...
- Open World Assumption
- Reasonable trade-off between expressivity and scalability
- Integrates with RDF and RDF Schema
- Fully declarative semantics
Features (for OWL 2 DL):
- Fragment of first-order predicate logic (FOL)
- Decidable
- Known complexity classes (N2ExpTime for OWL 2 DL)
- Reasonably efficient for real KBs
OWL Building Blocks
- individuals (written as URIs): `ex:markus`
- aka: constants (FOL), resources (RDF)
- classes (also written as URIs): `ex:Female`
- aka: concepts, unary predicates (FOL)
- properties (also written as URIs): `ex:married`
- aka: roles (DL), binary predicates (FOL)
OWL Semantics
• model theory (aka extensional semantics)
• an OWL DL Interpretation \( \mathcal{I} \) consists of:
  – a non-empty domain \( \Delta \)
  – \( I_I(\text{uri}) \in \Delta \) for individual URIs
  – \( I_C(\text{uri}) \subseteq \Delta \) for class URIs
  – \( I_R(\text{uri}) \subseteq \Delta \times \Delta \) for property URIs
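As a small worked example (our own illustration, using the example names that appear on the following slides), one possible interpretation is:
\[
\Delta = \{n, c\}, \quad I_I(\text{ex:nicolas}) = n, \quad I_I(\text{ex:carla}) = c,
\]
\[
I_C(\text{ex:Male}) = \{n\}, \quad I_R(\text{ex:marriedWith}) = \{\langle c, n \rangle\}.
\]
Under this interpretation, the facts ex:nicolas rdf:type ex:Male and ex:carla ex:marriedWith ex:nicolas used as examples below are both true.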
On the OWL Syntax
- OWL statements are written down as (sets of) RDF triples
- OWL facts (aka: assertions) are written down like in RDF
- some RDF language elements are reused
- new language elements from the OWL namespace
- more complex statements are constructed by using bnodes (we "hide" them for convenience)
Class Membership
- induri rdf:type classuri.
- true in \( \mathcal{I} \), if \( I_I(\text{induri}) \in I_C(\text{classuri}) \)
- Example:
  ex:nicolas rdf:type ex:Male
Property Membership
- \( \text{induri}_1 \text{ propuri } \text{induri}_2 \).
- true in \( \mathcal{I} \), if \( \langle I_I(\text{induri}_1), I_I(\text{induri}_2) \rangle \in I_R(\text{propuri}) \)
- Example:
ex:carla ex:marriedWith ex:nicolas
Class Inclusion
- classuri1 rdfs:subClassOf classuri2.
- true in \( \mathcal{I} \), if \( I_C(\text{classuri1}) \subseteq I_C(\text{classuri2}) \)
- Example:
ex:President rdfs:subClassOf ex:Politician
Property Inclusion
- \( \text{propuri1} \text{ rdfs:subPropertyOf } \text{propuri2} \).
- True in \( \mathcal{I} \), if \( I_R(\text{propuri1}) \subseteq I_R(\text{propuri2}) \).
- Example:
- \( \text{ex:sonOf} \text{ rdfs:subPropertyOf } \text{ex:childOf} \)
Predefined Classes & Properties
- **owl:Thing** – the class containing everything
- $I_C(\text{owl:Thing}) = \Delta$
- **owl:Nothing** – the empty class
- $I_C(\text{owl:Nothing}) = \emptyset$
- **owl:topObjectProperty** – the property connecting everything
- $I_R(\text{owl:topObjectProperty}) = \Delta \times \Delta$
- **owl:bottomObjectProperty** – the empty property
- $I_R(\text{owl:bottomObjectProperty}) = \emptyset$
Complex Classes: Intersection
- \([\text{owl:intersectionOf}(\text{class}_1, \ldots, \text{class}_n)]\)
- \(I_C([\text{owl:intersectionOf}(\text{class}_1, \ldots, \text{class}_n)]) = I_C(\text{class}_1) \cap \ldots \cap I_C(\text{class}_n)\)
- Example:
- \([\text{owl:intersectionOf}(\text{ex:Actor}, \text{ex:Politician})]\)
Complex Classes: Union
- $[\text{owl:unionOf}(\text{class}_1, \ldots, \text{class}_n)]$
- $I_C([\text{owl:unionOf}(\text{class}_1, \ldots, \text{class}_n)])$
$= I_C(\text{class}_1) \cup \ldots \cup I_C(\text{class}_n)$
- Example:
$[\text{owl:unionOf}(\text{ex:Actor}, \text{ex:Politician})]$
Complex Classes: Complement
- $[\text{owl:complementOf} \ class]$
- $I_C([\text{owl:complementOf} \ class]) = \Delta - I_C(class)$
- Example:
$[\text{owl:complementOf} \ ex:Politician]$
Complex Classes: Existential Property Restriction
- \( [ \text{rdf:type} \, \text{owl:Restriction} ; \, \text{owl:onProperty} \, prop ; \, \text{owl:someValuesFrom} \, class ] \)
- \( I_C(...) = \{ x \mid \langle x, y \rangle \in I_R(prop) \text{ for some } y \in I_C(class) \} \)
- Example: \( [ \text{rdf:type} \, \text{owl:Restriction} ; \, \text{owl:onProperty} \, \text{ex:parentOf} ; \, \text{owl:someValuesFrom} \, \text{ex:Male} ] \)
Complex Classes: Universal Property Restriction
- \( [ \text{rdf:type} \, \text{owl:Restriction} ; \, \text{owl:onProperty} \, prop ; \, \text{owl:allValuesFrom} \, \text{class} ] \)
- \( I_C(...) = \{ x | \langle x, y \rangle \in I_R(prop) \text{ implies } y \in I_C(class) \} \)
- Example: \( [ \text{rdf:type} \, \text{owl:Restriction} ; \, \text{owl:onProperty} \, \text{ex:parentOf} ; \, \text{owl:allValuesFrom} \, \text{ex:Male} ] \)
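For readers who prefer to see these constructors programmatically: the OWL API, the Java library mentioned in the tools section below, exposes them directly. The following sketch assumes a standard OWL API setup (version 4 or later) on the classpath and uses an example namespace http://example.org/ of our own choosing; it is an illustration, not part of the lecture material.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class RestrictionDemo {
    public static void main(String[] args) throws OWLOntologyCreationException {
        String ex = "http://example.org/";                       // assumed example namespace
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = manager.getOWLDataFactory();

        OWLObjectProperty parentOf = df.getOWLObjectProperty(IRI.create(ex + "parentOf"));
        OWLClass male = df.getOWLClass(IRI.create(ex + "Male"));

        // ex:parentOf someValuesFrom ex:Male  (existential restriction)
        OWLClassExpression someMale = df.getOWLObjectSomeValuesFrom(parentOf, male);
        // ex:parentOf allValuesFrom ex:Male   (universal restriction)
        OWLClassExpression onlyMale = df.getOWLObjectAllValuesFrom(parentOf, male);

        // Use one of the expressions in an axiom:
        // "everything with some male child is a Parent"
        OWLClass parent = df.getOWLClass(IRI.create(ex + "Parent"));
        OWLOntology ont = manager.createOntology(IRI.create(ex + "demo"));
        manager.addAxiom(ont, df.getOWLSubClassOfAxiom(someMale, parent));

        System.out.println(someMale);
        System.out.println(onlyMale);
    }
}
```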
Syntactic Sugar: Disjointness, Domain & Range Statements
- class1 owl:disjointWith class2.
  - same as: [owl:intersectionOf (class1, class2)] rdfs:subClassOf owl:Nothing.
- propuri rdfs:domain class.
  - same as: [rdf:type owl:Restriction; owl:onProperty propuri; owl:someValuesFrom owl:Thing] rdfs:subClassOf class.
- propuri rdfs:range class.
  - same as: owl:Thing rdfs:subClassOf [rdf:type owl:Restriction; owl:onProperty propuri; owl:allValuesFrom class].
• Advanced Features of OWL
– more class constructors
– extended property modeling
– handling of data values
More Complex Classes: Qualified At-Least Restriction
- [ rdf:type owl:Restriction; owl:minQualifiedCardinality "n"^^xsd:nonNegativeInteger; owl:onProperty prop; owl:onClass class ]
- \( I_C(...) = \{ x \mid \#\{ \langle x, y \rangle \in I_R(prop) \mid y \in I_C(class) \} \geq n \} \)
- Example:
  [ rdf:type owl:Restriction; owl:minQualifiedCardinality "2"^^xsd:nonNegativeInteger; owl:onClass ex:Male; owl:onProperty ex:parentOf ]
More Qualified Cardinalities
• in analogy to at-least restrictions:
– at-most:
\texttt{owl:maxQualifiedCardinality}
– exact cardinality:
\texttt{owl:qualifiedCardinality}
More Complex Classes: Enumeration of Individuals
- \([\text{owl:oneOf} \ (\text{induri1}, \ldots, \text{indurin})]\)
- \(I_C([\text{owl:oneOf} \ (\text{induri1}, \ldots, \text{indurin})]) = \{I_I(\text{induri1}), \ldots, I_I(\text{indurin})\}\)
- Example:
\([\text{owl:oneOf} \ (\text{ex:georgec}, \text{ex:arnolds})]\)
More Complex Classes: Self Restriction
- [ rdf:type owl:Restriction; owl:onProperty prop; owl:hasSelf "true"^^xsd:boolean ]
- \( I_C(...) = \{ x \mid \langle x, x \rangle \in I_R(prop) \} \)
- Example:
  [ rdf:type owl:Restriction; owl:onProperty ex:hasKilled; owl:hasSelf "true"^^xsd:boolean ]
Inverse Properties
• \([\text{owl:inverseOf } prop]\)
• \(I_R([\text{owl:inverseOf } prop]) = \{\langle y, x \rangle | \langle x, y \rangle \in I_R(prop)\}\)
• Example: \([\text{owl:inverseOf ex:childOf}]\)
Property Chain Axioms
- \( prop \ \text{owl:propertyChainAxiom} \ (\text{prop}_1, \ldots, \text{prop}_n) \).
- true in \( \mathcal{I} \), if \( I_R(\text{prop1}) \circ \ldots \circ I_R(\text{propn}) \subseteq I_R(\text{prop}) \)
- Example:
\( \text{ex:siblingOf} \quad \text{owl:propertyChainAxiom} \quad (\text{ex:childOf}, \text{ex:parentOf}) \).
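Spelling out the composition semantics for this example (our own worked instance): whenever \( \langle x, y \rangle \in I_R(\text{ex:childOf}) \) and \( \langle y, z \rangle \in I_R(\text{ex:parentOf}) \), then \( \langle x, z \rangle \in I_R(\text{ex:siblingOf}) \). Note that if childOf and parentOf are modelled as inverses of each other, this axiom also makes every individual with a parent a sibling of itself, a well-known modelling caveat.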
Decidability problems
- property chain axioms can easily lead to undecidability
- in order to retain decidability, two global constraints are imposed on OWL DL ontologies:
- the set of property chain axioms and subproperty statements must be regular
- properties used in cardinality and self restrictions must be simple properties
in the following, we abbreviate
\[ R \ \text{owl:propertyChainAxiom} \ (S_1 \ldots S_n) \text{ by } S_1 \circ \ldots \circ S_n \sqsubseteq R \]
\[ S \ \text{rdfs:subPropertyOf} \ R \text{ by } S \sqsubseteq R \]
regularity restriction:
– there must be a strict linear order \( \prec \) on the properties
– every property chain or subproperty axiom has to have one of the following forms where \( S_i \prec R \) for all \( i = 1, 2, \ldots, n \):
\[
\begin{aligned}
& R \circ R \sqsubseteq R \qquad [\text{owl:inverseOf } R] \sqsubseteq R \\
& S_1 \circ S_2 \circ \ldots \circ S_n \sqsubseteq R \\
& R \circ S_1 \circ S_2 \circ \ldots \circ S_n \sqsubseteq R \\
& S_1 \circ S_2 \circ \ldots \circ S_n \circ R \sqsubseteq R
\end{aligned}
\]
– Example 1: \( R \circ S \sqsubseteq R \quad S \circ S \sqsubseteq S \quad R \circ S \circ R \sqsubseteq T \)
– regular with order \( S \prec R \prec T \)
– Example 2: \( R \circ T \circ S \sqsubseteq T \)
– not regular because form not admissible
– Example 3: \( R \circ S \sqsubseteq S \quad S \circ R \sqsubseteq R \)
– not regular because no adequate order exists
Property Chain Axioms: Simple Properties
• combining property chain axioms and cardinality or self restrictions may lead to undecidability
• restriction: use only *simple* properties in cardinality expressions (i.e. those which cannot be – directly or indirectly – inferred from property chains)
• technically:
– for any property chain axiom $S_1 \circ S_2 \circ ... \circ S_n \sqsubseteq R$ with $n>1$, $R$ is non-simple
– for any subproperty axiom $S \sqsubseteq R$ with $S$ non-simple, $R$ is non-simple
– all other properties are simple
• Example:
$Q \circ P \sqsubseteq R \quad R \circ P \sqsubseteq R \quad R \sqsubseteq S \quad P \sqsubseteq R \quad Q \sqsubseteq S$
non-simple: $R, S$; simple: $P, Q$
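Walking through this example with the rules above (our own step-by-step reading):
- \( Q \circ P \sqsubseteq R \) and \( R \circ P \sqsubseteq R \) are chains with \( n = 2 > 1 \), so \( R \) is non-simple.
- \( R \sqsubseteq S \) is a subproperty axiom whose left-hand side is non-simple, so \( S \) is non-simple as well.
- \( P \) and \( Q \) never appear as the superproperty of a chain axiom or above a non-simple property, so they remain simple.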
Property Characteristics
- OWL also allows for specifying that properties are:
- disjoint from another
- functional
- inverse functional
- transitive
- symmetric
- asymmetric
- reflexive
- irreflexive
(syntactic sugar w.r.t. already introduced modeling features)
Datatypes in OWL
- like in RDF, properties can also be used to associate individuals with data values:
ex:john ex:hasAge "42"^^xsd:integer.
- those datatype properties must not be used as individual-interrelating object properties at the same time.
- datatypes supported by OWL: (the table of supported XML Schema datatypes is not reproduced here)
Simple Data Integration in OWL
• Practical problem: given ontologies from different sources, which identifiers refer to the same individuals?
• Typical approaches in OWL:
– Explicitly specify equality (owl:sameAs)
– Use inverse functional properties (“same values → same individual”)
• Problems:
– equality requires explicit mappings (rare on the Web)
– OWL DL disallows inverse functional datatype properties (complicated interplay with datatype definitions!)
– Only one property used globally for identification, no property combinations (Example: “All Informatik 2009 participants with the same name and birthday are the same.”)
OWL 2 Keys
OWL 2 provides a way to model
"All Informatik 2009 participants with same name and birthday are the same."
→ **Keys** (expressed with `owl:hasKey`)
- **Restriction**: Keys apply only to named individuals – objects of the interpretation domain to which a constant symbol refers.
- This is not an expressive feature of description logics!
Other OWLs
• OWL 1 contained three “species” of OWL:
– **OWL DL**: a DL-based KR language with an RDF syntax
• not all RDF documents are OWL DL ontologies
– **OWL Lite**: a restricted version of OWL DL
– **OWL Full**: an extension of RDF to give semantics to the OWL keywords
• intended to behave “similar” to OWL DL but applicable to all RDF documents
• entailment problem undecidable (if the semantics is non-contradictory)
• OWL 2: OWL 2 DL and OWL 2 Full to extend OWL 1 species
Quo Vadis, OWL Lite?
**OWL Lite as failure:**
- Defined as fragment of OWL 1 DL, intended to be simpler
- However: almost as complex as OWL DL (ExpTime)
- Complex syntax hides real expressive power
- Current usage in ontologies is coincidental rather than intentional
Original goal: simpler implementation and usage
→ approach in OWL 2: three simpler **language profiles:**
- **OWL 2 QL**
- **OWL 2 EL**
- **OWL 2 RL**
OWL 2 Profiles
Original goal: simpler implementation and usage
→ approach in OWL 2: three simpler language profiles:
- **OWL 2 QL**
- **OWL 2 EL**
- **OWL 2 RL**
Design principle for profiles:
Identify maximal OWL 2 sublanguages that are still implementable in PTime.
Main source of intractability: non-determinism (requires guessing/backtracking)
• disjunction, or negation + conjunction
• Max. cardinality restrictions
• Combining existentials and universals in superclasses
• Non-unary finite class expressions (nominals) or datatype expressions (not discussed here)
→ features that are not allowed in any OWL 2 profile
Many further features can lead to non-determinism – care needed!
OWL 2 EL
OWL profile based on description logic EL++
- Intuition: focus on terminological expressivity used for light-weight ontologies
- Existential restrictions are allowed but universal restrictions are not; only \texttt{rdfs:range} (a special kind of universal) is allowed, with restrictions
- Property domains, class/property hierarchies, class intersections, disjoint classes/properties, property chains, \texttt{Self}, nominals (singleton classes), and keys fully supported
- No inverse or symmetric properties, no disjunctions or negations
OWL 2 EL: Features
- Standard reasoning in OWL 2 EL: PTime-complete
- Used by practically relevant ontologies: Prime example is SNOMED CT (clinical terms ontology with classes and properties in the order of $10^5$)
- Fast implementations available: full classification of SNOMED-CT in <1 min; real-time responsivity when preprocessed (modules)
OWL 2 RL
OWL profile that resembles an OWL-based rule language:
- Intuition: subclass axioms in OWL RL can be understood as rule-like implications with head (superclass) and body (subclass)
- Different restrictions on subclasses and superclasses:
- subclasses can only be class names, nominals, conjunctions, disjunctions, or existentials applied only to subclass-type expressions
- superclasses can be class names, universals or nominals; also max. cardinalities of 0 or 1 are allowed, all with superclass-type filler expressions only
- Property domains and ranges only for subclass-type expressions; property hierarchies, disjointness, inverses, (a)symmetry, transitivity, chains, (inverse)functionality, irreflexivity fully supported
- Disjoint classes and classes in keys need subclass-type expressions, equivalence only for expressions that are sub- and superclass-type, no restrictions on equality
OWL 2 RL: Features
- Standard reasoning in OWL 2 RL: PTime-complete
- Rule-based reading simplifies modelling and implementation:
even naïve implementations can be useful
- Fast and scalable implementations exist
Also: possibly useful for combining OWL with rules
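As an illustration of the rule-like reading (our own example, not from the lecture): the OWL 2 RL axiom
\[
\text{Person} \sqcap \exists \text{hasParent}.\text{Politician} \sqsubseteq \text{PoliticianChild}
\]
can be read as the rule
\[
\text{Person}(x) \wedge \text{hasParent}(x, y) \wedge \text{Politician}(y) \rightarrow \text{PoliticianChild}(x),
\]
which is exactly the shape a rule engine can evaluate bottom-up over the known individuals.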
**OWL 2 QL**
**OWL profile that can be used to query data-rich applications:**
- **Intuition:** use OWL concepts as light-weight queries, allow query answering using rewriting in SQL on top of relational DBs
- **Different restrictions on subclasses and superclasses:**
- subclasses can only be class names or existentials with unrestricted (⊤) filler
- superclasses can be class names, existentials or conjunctions with superclass filler (recursive), or negations with subclass filler
- **Property hierarchies, disjointness, inverses, (a)symmetry supported, restrictions on range and domain**
- **Disjoint or equivalence of classes only for subclass-type expressions**
- **No disjunctions, universals, Self, keys, nominals, equality, property chains, transitive properties, cardinalities, or functional properties**
OWL 2 QL: Features
- Standard reasoning in OWL 2 QL: PTime, instance retrieval even LogSpace (actually $\text{AC}^0$) w.r.t. size of data
- Convenient light-weight interface to legacy data
- Fast implementations on top of legacy database systems (relational or RDF): highly scalable to very large datasets
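As a hedged illustration of the rewriting idea behind OWL 2 QL (our own example, not from the lecture): given the axiom
\[
\text{Professor} \sqsubseteq \exists \text{teaches},
\]
the instance query "who teaches something?", written \( q(x) \leftarrow \text{teaches}(x, y) \), is rewritten into the union of queries
\[
q(x) \leftarrow \text{teaches}(x, y) \qquad q(x) \leftarrow \text{Professor}(x),
\]
which can then be evaluated directly over the underlying database (e.g. as a SQL UNION over the corresponding tables), with no reasoning needed at query time.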
Do We Really Need So Many OWLs?
Three new OWL profiles with somewhat complex descriptions … why not just one?
- The union of any two of the profiles is no longer light-weight!
QL+RL, QL+EL, RL+EL all ExpTime-hard
- Restricting to fewer profiles = giving up useful feature combinations
- Rationale: profiles are “maximal” (well, not quite) well-behaved fragments of OWL 2
→ Pick suitable feature set for applications
- In particular, nobody is forced to implement all of a profile
OWL in Practice: Tools
- Editors ([http://semanticweb.org/wiki/Editors](http://semanticweb.org/wiki/Editors))
- Most common editor: Protégé 4
- Other tools: TopBraid Composer ($), NeOn toolkit
- Special purpose apps, esp. for light-weight ontologies (e.g. FOAF editors)
- Reasoners ([http://semanticweb.org/wiki/Reasoners](http://semanticweb.org/wiki/Reasoners))
- OWL DL: Pellet, HermiT, FaCT++, RacerPro ($)
- OWL EL: CEL, SHER, snorocket ($), ELLY (extension of IRIS)
- OWL RL: OWLIM, Jena, Oracle OWL Reasoner (part of Oracle 11g) ($)
- OWL QL: Owlgres, QuOnto, Quill
- Many tools use the OWL API library (Java)
- Note: many other Semantic Web tools are found online
Non-standard Reasoning in OWL
There is more to do than editing and inferencing:
• **Explanation**: reasoning task of providing axiom sets to explain a conclusion (important for editing and debugging)
• **Conjunctive querying**: check entailment of complex query patterns
• **Modularisation**: extract sub-ontologies that suffice for (dis)proving a certain conclusion
• **Repair**: determine ways to repair inconsistencies (related to explanation)
• **Least Common Subsumer**: assuming that class unions are not available, find the smallest class expression that subsumes two given classes
• **Abduction**: given an observed conclusion, derive possible input facts that would lead to this conclusion
• …
→ All of these are implemented as tasks on top of standard reasoning in common tools today
Summary and Outlook
• OWL: an expressive ontology language with practical impact
• Structurally representable in RDF
• Reasoning typically based on extensional ("direct") semantics:
– closely related to description logics and first-order logic (with equality)
– different from RDF semantics, but compatible for many purposes
• Various flavours for different applications:
– OWL Full provides RDF-based semantics (undecidable)
– OWL DL decidable but complex (N2ExpTime)
– OWL profiles for light-weight reasoning (in PTime)
Version 2 of the Web Ontology Language almost complete:
Official specification expected by Oct 2009
Further Reading
- P. Hitzler, S. Rudolph, M. Krötzsch: *Foundations of Semantic Web Technologies*. CRC Press, 2009. (Chapter 4 and 5 closely related to this lecture)
Selected research articles:
- F. Baader, S. Brandt, C. Lutz: *Pushing the EL envelope*. In Proc. of the 19th Joint Int. Conf. on Artificial Intelligence (IJCAI 2005), 2005. (paper introducing description logic EL++ underlying OWL EL)
Workspace
Intro
Workspaces were added to Tiki4 and further improved in Tiki5. In Tiki10 a GUI was added for some basic features.
Workspace is a large project which may or may not impact Tiki as a whole. Previous efforts like AulaWiki built a parallel structure with highly useful features for those who need workspaces. The main issue with it is that the rest of the community lived on without any of the changes. This can be seen in two ways: positively, because optional features should not affect those who do not use them, or negatively, as a lack of collaboration on the project leading to little-used, brittle code.
Many people already use Tiki to collaborate on projects as small teams without workspaces or AulaWiki, be it through implicit trust that others will not play in their sandbox or by creating multiple instances of Tiki. Both of these solutions currently have scalability issues. How can we improve the experience without changing everything?
This roadmap changes the fundamental question asked in development from:
*What can I build to solve the workspace issue?*
to:
*What can we improve in order to get closer to workspaces?* and *By improving X, what will those who have no interest in workspaces gain from it?*
Workspaces should not be about building another pile of code, but about assembling some of the many existing functionalities. If it means improving every single piece along the way, that's what it should be. While there are deadlines in play, the project does not end with the summer, and it would be better to have tangible improvements to existing functionalities, bringing workspaces closer to reality, than a new unstable feature.
The remainder of this roadmap will attempt to explain the incremental improvements required to Tiki in order to achieve workspaces.
GUI
Development of the Graphical User Interface (GUI) for workspaces started during TikiFestBarcelona3. A quick demo of the progress achieved by then can be seen here:
Issues to resolve
Workspace users only want to see what is relevant to their workspace
Information overload is always a problem. Even on small collaboration sites, the recent changes can quickly grow to a level where the list is too long for anyone to bother looking at. Category trees become so large that no one can find what they are looking for. Information has to be filtered. It has nothing to do with access rights, it's a matter of personal choice.
**Perspectives** allow the user to select which point of view he desires to have on Tiki. By changing the perspective, the user selects which workspace he is working on right now and which information he finds relevant.
By themselves, the perspectives don't do much. They are implemented by overriding the global preferences, thus creating multiple sets of global preferences. Actual preferences need to be implemented to filter the visible data and multiple components need to be updated to take into account the perspective's preferences. However, perspectives are entirely transparent. Only the preferences need to be considered.
One such new preference that would be required is **category jail**, similar to the UNIX concept limiting what the user is allowed to view in the filesystem by hiding the higher sections of the filesystem tree from them. Adapted to categories, a category jail would allow to limit the visible portion of the category tree to a certain category.
A crucial component of the UI (that is not handled through preferences) is the modules. Module visibility is typically restricted by groups. Limiting the visibility of modules could either be done by changing the active groups or adding a perspective filter on the module. Both can be done without significant changes. In the first case, the perspective would contain a preference listing the relevant groups in the perspective and the list would serve as a mask on the user's group list when selecting the modules to display. The second one would add a generic parameter to modules and only display when the perspective is active.
**Some contexts require privacy and to hide the work of a workgroup**
Tiki has very fine-grained global & object permissions. In regards to category permissions, Tiki is currently (3.x) limited in terms of flexibility. The lack of fine-grained permissions makes the impact of granting permissions hard to understand. For this reason, a revamp of the category permission system is underway. The complete granularity will be available to categories.
This issue resolved, rethinking how large amounts of categories should be handled becomes possible. At this time, one of the major limitations of categories is that anyone with edit rights on a page (or the equivalent for other objects) can change categories on the page. In a world where categories are used for categorization, this makes perfect sense. However, when changing categories grants or revokes permissions, **category security** is required.
Ultimately, who is allowed to add an object to a category or remove one from it is specific to the category itself, not to the object. The permissions that apply to the categories must be sorted out and deployed in the code base.
Because **perspectives** may point to views in which the user has no rights, visibility permissions on the perspectives may also be desired.
Another issue relevant to security is **permission auditing**: the ability to view which permissions apply to an object and from where they were granted. The revamp of the permission system will make this task easier, but interfaces to audit the permissions and compare them may be required.
**Administrators want to delegate the management of workspaces**
In large organizations, creating workspaces will be a day to day task. Creating a new workspace has to be quick and easy. Through **data channels**, workspace templates can be created. Effectively, this would allow to configure a new workspace based on a local configuration. The workspace template could create a set of categories, a perspective, new groups and set-up all the category permissions required to get the workspace up and running. Data channels are not the only option to create new workspaces, however, creating a dedicated interface for this task is likely to be time consuming and should be postponed to later releases.
In fact, because data channels rely on groups to determine "execution" rights, it may be possible to keep the global administrator entirely out of the loop.
Afterwards, all that would be required is to add the users to the specific groups. Considering the data channel has a parameter to specify the workspace leader, the leader could then be responsible for adding the members. Adding and removing members from groups currently requires administrator privileges (tiki_p_adminusers) at the instance level. In an **emergent group** context (aka Organic groups), this is unfortunate. To resolve this, finer grained permissions on groups would be required. Effectively, this would require treating groups as object themselves and to grant permissions on them.
**Workspaces need to have a life of their own**
Especially for large workgroups and emergent contexts, it's important to reduce the effort required by the group leader(s) and to let them delegate tasks within the workspace. It should be possible to delegate simple tasks (like approving new members or suspending troublemakers) to moderators. While it would be possible to simply grant member administration rights to these people, in many cases this would seem too wide. The introduction of **group transitions** would allow to specify paths between two groups and to grant rights on the transition to a group of moderators.
It should be implemented with perms on group
These transitions would simplify the management of workspaces and define community-wide policies on how to handle workspace management. By reducing the complexity of group management, you allow less technical people in the organization to lead a workspace.
Similarly, the concept of transitions may be adapted to **category transitions**, which would enable workflow-like patterns for document management. These may be required when coordinating specifications between multiple independent entities where an approval process is required. Just like normal categories, categories part of a transition set would have permissions assigned to them, limiting the ability to edit for example, and permissions on the transitions themselves.
A sample use case would be the approval of engineering documents.
Workspace leaders may need to customize the configurations locally
**Perspectives** allow to override any preference. Because they are created through profiles, the burden of selecting which one is appropriate is left to the administrator. A perspective management UI would narrow down the set of available preferences. The same way the administrator does not really want to handle the day to day management of the workspaces, he may want to delegate some of the configurations of the perspective to the workspace leaders. Such configuration may be enabling or disabling the forums, changing the theme or any other relevant configuration.
To be done efficiently, **Preferences** would be required. Essentially, the type of field and validation rules for each preference have to be defined at a higher level to be able to generate dynamic interfaces from a list of preferences without having to duplicate and maintain large amounts of code.
**Features**
The previous sections introduced the features and how they fit in the global picture. This section details each of the features, their impact on the rest of the project, the dependencies among them and their current state. Ideally, each of these is seen as a separate development effort and provides a worthwhile improvement to Tiki as a whole.
**Perspectives**
Perspectives were introduced in trunk on July 19th. They are in three parts:
1. Application of the preference overrides in tiki-setup.php
2. Creation of new perspectives from profiles
3. Perspective switching module
These three components provide a usable base to work from. The only change required for workspaces is the introduction of a `tiki_p_view_perspective` permission currently pending the merge of `perms-take2` in trunk. The introduction of the permission would only affect the perspective switch module and the companion `tiki-switch_perspective.php` file. No changes are required in `tiki-setup.php` or in the profile. Adding permissions on the perspective would be handled through the standard object permission handling in profiles.
Outside of workspaces, perspectives would be useful for creating micro-sites with a different visual appearance (with Site Identity preferences, not just Theme Control Center) and for introducing an Administrator perspective that would allow the site's administrator to switch between a normal view of the site and a perspective containing more administrator controls.
In the future, a better interface to manage perspectives may be desired. Ideally, the perspective itself would define which preferences can be updated within it and a customized interface could be built through **Preferences**. Combined with the idea of delegating configuration, two additional permissions could be introduced: `tiki_p_edit_perspective` and `tiki_p_edit_perspective_full`.
**Breakdown**
• Perspective base - 4.0 done
• Perspective view permissions - 4.0 done
• Perspective UI - 5.0 started, depends on Preferences
See:
• Perspective
Category jail
The category jail preference can theoretically be used without perspectives, however, the use is limited. It could be used to limit the visible categories to a limited set of content-related categories and hide the permission management ones. However, in the context of workspaces, the category jail is a pivot concept and enables the real purpose of perspectives.
Rather than a jail, it can be deployed as a suggestion. By default, it could display only the jailed categories, but an option could allow to show all categories instead.
The jail could also force new objects to belong to the jailed category when they are created. Whether this respects the user's permissions has to be evaluated.
Technically, the jail is implemented as a preference which will contain a category ID. Any object listing or category listing should restrict the displayed items to the category in the preference and child nodes. To speed up the lookup, the complete list of sub-categories could be stored in preferences.
The exact work required to achieve the category jail has yet to be analyzed. The creation of the preference is trivial, however, multiple components will need to be updated.
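Tiki itself is written in PHP, so the following is only a language-neutral sketch (written here in Java to match the code style used earlier in this document) of the filtering idea: resolve the jail category to the set of itself plus all descendants once, cache that set, and intersect it with whatever a listing would otherwise show. All names are illustrative, and real objects can of course carry several categories.

```java
import java.util.*;

// Illustrative sketch of a "category jail" filter (not Tiki code; Tiki is PHP).
class CategoryJail {
    // child category ids per category id, e.g. loaded from the category table
    private final Map<Integer, List<Integer>> children;
    private final Set<Integer> visible = new HashSet<>();

    CategoryJail(Map<Integer, List<Integer>> children, int jailRootId) {
        this.children = children;
        collect(jailRootId);            // precompute jail root + all descendants
    }

    private void collect(int categoryId) {
        if (visible.add(categoryId)) {
            for (int child : children.getOrDefault(categoryId, List.of())) {
                collect(child);
            }
        }
    }

    /** Keep only the objects whose category falls inside the jail. */
    <T> List<T> filter(List<T> objects, Map<T, Integer> categoryOf) {
        List<T> result = new ArrayList<>();
        for (T obj : objects) {
            Integer cat = categoryOf.get(obj);
            if (cat != null && visible.contains(cat)) {
                result.add(obj);
            }
        }
        return result;
    }
}
```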
Breakdown
• Category jail - 4.0
◦ Categorize.php/tpl to enable category jail on all objects done
◦ Update modules listing objects, like recent changes ready to work on (see Deployment of Category Jail)
Updating all modules is a significant task. It might be a good idea to flag modules that are jail aware in the documentation of the module. Enabling category jail on the module may also be an option.
Reference:
Updating all modules
Perspective filter
Optional: Alternative to Active groups
The perspective filter is a local modification in the modules execution path. There are already multiple module arguments that can be used to filter the visible modules, including pages, sections, themes and many others. Adding an additional argument and the handling is a localized change that requires little work.
Instead of filtering on perspective, it may also be possible to filter on categories and use the category jail filter. However, directly on the perspective may make it easier to grasp.
**Breakdown**
- Perspective filter - 4.0 **done**
**Active groups (Cancelled)**
**Category permission system**
The *perms-take2* branch modifies how category permissions are resolved. It provides the full permission granularity on categories and provides more efficient ways of resolving permissions.
These changes to the category permissions impact the permission management user interfaces. The current category permission interface is no longer suitable. Instead, the object permission interface is closer to being more suitable, but needs to be adapted to display the category permissions correctly when setting permissions on categories.
At this time, to remain efficient when fetching permissions, no inheritance applies to category permissions. Only the permissions applied directly on the category are considered. To accommodate this, the interface must allow bulk modification of the permissions to child categories. An alternative would be to update the database schema to allow efficient lookup of parent categories. This can be done with a parent category table that needs to be updated and kept consistent with the hierarchy or by entirely modifying the category structure to nested sets.
**Breakdown**
- Permission lookup - 4.0 **done**
- Update of the permission interface - 4.0 **in progress** jonnyb, luciash
- Category permission inheritance - 6.0 **pending category structure redesign**. Inheritance can wait for 6.0, but copying perms from parent to children, or at least from one category to multiple others, cannot - this is a fundamental requirement of category functionality and has been done in 5.0
**Related wishes**
- WYSIWYCA for all permissions : feature_check in Table: users_permissions
- Mass assignment of permissions, especially for wiki pages
- Better/Easier reporting of item/object permissions which override category and group permissions
- When creating a page, how to inherit permissions from source page?
- Item/Object perms: copy permissions from another object. (especially for wiki pages and categories)
- Permissions: when assigning permissions to item, an option to start with current general permissions
- Add green & yellow permission keys on tiki-listpages.php
**Category security**
One of the current limitations of the category system in Tiki is that only the administrator can modify the category tree. Workspaces need to allow for emergent category creation to suit the needs of the workgroup. Moreover, the visibility of categories is handled globally, which is well suited for small trees, but becomes impossible on larger trees. Filtering is required.
The solution here is to have category-specific permissions that affect the categories themselves, and not the objects contained in them. Possible permissions are:
- `tiki_p_view_category` to indicate if the category and its subcategories are visible to the user
- `tiki_p_add_object` to indicate if the user is allowed to add objects in the category
- `tiki_p_remove_object` to indicate if the user is allowed to remove objects from the category
- `tiki_p_create_category` to create new categories
Additionally, an object specific permission may be desired
- `tiki_p_modify_object_categories` to lock the categories on an object altogether, regardless of the permissions on the categories.
This change mostly impacts the categorize php/tpl component which allows changing the categories on the object. Effectively, it will need to be updated so that visibility rules apply, but also so that the permissions on each category apply. For example, if a user is not allowed to remove a category, adding or changing other categories would not affect the one category he is not allowed to modify.
Permissions like `tiki_p_create_category` would also need to be scoped to the category that grants it. For example, if a user has the permission on Workspaces > Chemistry > Autumn 09, he would only be allowed to create categories under it. Because there is no inheritance in phase 1, only the category itself has to be considered.
**Breakdown**
- Visibility - 4.0 **done**
- Object management - 4.0 **done**
- Category creation - 4.0 or 5.0 **ready to work on**
**Permission auditing**
As a companion to effective permission management, the ability to quickly view who has which permission on which object increases confidence in the system.
Permission auditing can be seen both as a dashboard for administrators and workspace leaders and as additional information presented on the object's permission page. Within the permission page, displaying permissions from different levels, like the category level or the global level, would give the permission assigner a baseline for the permissions to grant and support educated decisions.
It must answer the following questions:
- Which of global, category or object permissions apply?
- If category permissions, which categories provide permissions?
- Are permissions more open or more restrictive than the parent level?
- Compared to another object, are the permissions more open or more restrictive?
Data channels
Workspaces are a combination of multiple features providing an experience of workgroup and focus, not a feature in itself. While it may be possible to create different interfaces to set-up the groups, categories, permissions, perspectives, transitions and all sorts of objects, using profiles greatly simplifies the task and provides a good starting point to prepare workspaces and test out the improvements to be made.
A certain load of work is required by the administrator to set-up the data channel and profiles initially, but once the template is defined, instantiation can be made multiple times and easily. Typical workspace templates can be distributed through profiles.tiki.org.
Profiles have been mostly tested out and are stable. Data channels on the other hand are in their infancy and have a single use case at this time. Some modifications may be required to get them to work correctly with all datatypes. Some conflicts may occur in the handlers when called multiple times.
A convenient interface to call data channels and input data would be useful.
Organic / Emergent groups
As part of the workspaces, groups will primarily be created through data channels and follow a standard template. However, for larger workspaces, one may want to create subgroups to grant special permissions to a few people or simply to identify them (participants to a workshop, experts, ...).
Without emergent groups, the attention of the administrator is required to create new groups. Granting rights to create new groups opens the possibility of creating groups whose names suggest a much broader scope than intended. Consider someone creating a group called "Experts" for their workspace. Groups are defined globally, so the "Experts" group would be available globally, even though it was created for a subfield and a team of 10 people in a 2000 person organization. Just like category creation would be limited to the category on which the permission was granted, it could be possible to impose a prefix on the group name based on which category the right was granted on.
Generally, it's important for a group leader to be able to manage users in his workspace. By treating groups as objects, permissions can be assigned to groups for other groups. For example, a lead could be allowed to add and remove members to a group he controls.
Permissions on groups
• tiki_p_add_member to allow someone to add group members
• tiki_p_remove_member to allow someone to remove group members
• tiki_p_join_group to allow someone to join or leave a group on his own
• tiki_p_remove_group to allow someone to destroy the group
Permissions on categories
• tiki_p_create_group to create groups under the category, although the group is not categorized itself
Breakdown
• Member management - 4.0 done
• Self join (modify current implementation) - 4.0 done
• Creation and removal - 5.0
Questions
• Should not be possible to have group perms on Anonymous group
• Need a link somewhere to assign perms to group (already works at: tiki-objectpermissions.php?objectId=Registered&objectName=Registered&objectType=group&permType=group)
• Do these group perms apply globally/in a category as well? (or just on specific group?)
See also:
Organic Groups
Group transitions
(As reference, see graphic above)
Adding and removing group members to promote them is a tedious and error prone task. By introducing transitions between groups, which can be triggered based on a permission, group administration can be delegated in a safe way while reducing the complexity of the task.
This feature is useful for self-managing groups within workspaces, but it could also be used to simplify the code in the Tiki registration process and the multiple approval and validation steps. Correcting the blocked validation states would simply be a group change for the administrator and it would allow installations to customize their validation process.
This feature would require the introduction of a new permission to trigger transitions, a table to store the transitions and some user interface modifications to allow triggering of transitions.
Breakdown
• Support for transitions - 4.0 done
• Deployment of transitions in the registration process - 4.0/5.0
Category transitions
Category transitions are an extension of the group transitions, the same concept could be applied to categories, allowing groups of users to trigger transitions on objects, effectively allowing them to change the applicable categories on the object even if they would not usually be able to change those categories due to category security.
Depending on how it is implemented, it could share the implementation with group transitions. This could evolve into an Approval Workflow.
Breakdown
- Category transitions - 5.0 done
Dynamic preferences
Preferences are not a workspace feature by themselves, but they would allow to enable for the workspace leader to customize the perspective's configuration. However, the scope of this feature is much larger and long term. Globally, this feature would allow:
- The simplification of the current administration panel's code.
- The creation of restricted administration panels with a subset of options to create lesser administrators. For example, a wiki farm administrator may want to only allow Tiki administrators to enable stable features
- The creation of a configuration search for those times you know what you are searching for, but don't know where it was arbitrarily placed in the administrator panel.
- The creation of a perspective configuration panel.
- To build the preferences in a profile editor.
The downside is that Preferences are a massive task. The type of field and validation rules for each preference have to be defined. A prior initiative called Magic aimed to do this. However, complexity of the code and departure of the effort's lead caused the initiative to abort. A CSV file contains some of the information required, but is now outdated.
Breakdown
- Define general structure and interface - 5.0 in progress in trunk for 4.0
- Document features / import data - 5.0 mostly done
Category structure redesign
In order to support permission inheritance in category permissions, which would simplify administration, modifications are required in the category structure. Without such modifications, it is not possible to fetch permissions efficiently. One alternative is to create a table containing the relationship from each category to all of its parents, allowing categories to be joined with it so that all parents can be obtained in a single query. The other alternative is to change the structure entirely and use nested sets instead.
The conversion to nested sets is a much larger effort; however, it would allow for much more flexibility in the future. A similar change would also be required in structures.
**Breakdown**
- Conversion of the category structure - 6.0
---
**Short term notes**
Jotting things down to remember to do
- "Default category assigned to uncategorized objects edited by a user with this default group:" in tiki-admingroups.php should be reworded and transferred to perspectives
- http://demo.tiki.org/trunk/tiki-searchresults.php?highlight=cgcom&boolean=on&search=Go shows a result when logged as admin but not as anonymous. Yet, this file gallery has view perms for Anon.
**Related links**
- Workspace Helper
- Workspace Ideas
alias
- Workspaces RoadMap
- Workspace RoadMap
- Workspaces
SHOW ME YOUR PROPERTIES!
The Potential of Property-Based Testing in Agent-Based Simulation
Jonathan Thaler
Peer-Olaf Siebers
School Of Computer Science
University of Nottingham
7301 Wollaton Rd
Nottingham, United Kingdom
{jonathan.thaler,peer-olaf.siebers}@nottingham.ac.uk
ABSTRACT
This paper presents property-based testing, an approach for testing implementations of agent-based simulations (ABS), never considered so far in this field. It is a complementary technique to unit-testing and allows specifications and laws of an implementation to be tested directly in code, which is then checked using automated test-data generation. As case-studies, we present two different models, an agent-based SIR model and the SugarScape model, in which we will show how to apply property-based testing to explanatory and exploratory agent-based models and what its limits are.
Keywords: Agent-Based Simulation, Validation & Verification, Property-Based Testing, Haskell
1 INTRODUCTION
When implementing an Agent-Based Simulation (ABS) it is of fundamental importance that the implementation is correct up to some specification and that this specification matches the real world in some way. This process is called verification and validation (V&V), where validation is the process of ensuring that a model or specification is sufficiently accurate for the purpose at hand, whereas verification is the process of ensuring that the model design has been transformed into a computer model with sufficient accuracy (Robinson 2014). In other words, validation determines whether we are building the right model and verification whether we are building the model right (Balci 1998).
One can argue that ABS should require more rigorous programming standards than other computer simulations (Polhill, Izquierdo, and Gotts 2005). Because researchers in ABS look for emergent behavior in the dynamics of the simulation, they are always tempted to look for some surprising behavior and expect something unexpected from their simulation. Also, due to the mostly exploratory nature of ABS, there exists some amount of uncertainty about the dynamics the simulation will produce before running it. Ormerod and Rosewell (2006) see the current process of building ABS as a discovery process, where the models of an ABS often lack an analytical solution, which makes verification much harder. Thus it is often very difficult to judge whether an unexpected outcome can be attributed to the model or has in fact its roots in a subtle programming error (Galán et al. 2009).
In general this implies that we can only raise the confidence in the correctness of the simulation: it is not possible to prove that a model is valid, instead one should think of confidence in its validity. Therefore, the process of V&V is not the proof that a model is correct but the process of trying to prove that the model is incorrect. The more checks one carries out which show that it is not incorrect, the more confidence we can place in the model's validity. To tackle such a problem in software, software engineers have developed the concept of test-driven development (TDD).
Test-Driven Development (TDD) was rediscovered in the early 00s by Kent Beck (Beck 2002) as a way towards a more agile approach to software engineering, where instead of doing each step (requirements, implementation, testing, ...) separately from the others, all of them are combined in shorter cycles. Put shortly, in TDD tests are written for each feature before actually implementing it; then the feature is fully implemented and the tests for it should pass. This cycle is repeated until the implementation of all requirements has finished. Traditionally TDD relies on so-called unit-tests, which can be understood as pieces of code which, when run in isolation, test some functionality of an implementation. Thus we can say that test-driven development in general, and unit-testing together with code-coverage in particular, guarantee the correctness of an implementation to some informal degree, which has proven sufficient through years of practice in the software industry all over the world.
In this paper our aim is to introduce and discuss property-based testing, a complementary method of testing the implementation of an ABS, which allows model specifications and laws to be expressed directly in code and tested through automated test-data generation. We see it as an addition to TDD, where it works in combination with unit-testing to verify and validate a simulation, increases the confidence in its correctness, and is a useful tool for expressing regression tests. To the best of our knowledge property-based testing has never been looked at in the context of ABS and this paper is the first one to do so.
Property-based testing has its origins (Claessen and Hughes 2000, Claessen and Hughes 2002, Runciman, Naylor, and Lindblad 2008) in the pure functional programming language Haskell (Hudak et al. 2007) where it was first conceived and implemented. It has been successfully used for testing Haskell code for years and also been proven to be useful in the industry (Hughes 2007). To make this paper sufficiently self-contained we avoid discussing it from a Haskell perspective and present it more on a conceptual level.
We claim that property-based testing is a natural fit for ABS and a valuable addition to the already existing testing methods in this field. To substantiate and test our claims, we present two case-studies. First, the agent-based SIR model inspired by Macal (2010), which is of explanatory nature, where we show how to express formal model specifications in property-tests. Second, the SugarScape model of Epstein and Axtell (1996), which is of exploratory nature, where we show how to express hypotheses in property-tests and how to property-test agent functionality.
Further we claim that our research is not only applicable to theoretical models like the ones mentioned above but is also relevant for the Internet of Things (IoT), currently a hot topic in the field of Multi-Agent Systems (MAS) and ABS. ABS is conceptually related to IoT due to both having roots in MAS: in IoT as well as in ABS things interact locally with each other, out of which the whole system behavior emerges. Thus ABS allows modeling and simulating large IoT systems and networks before installing them, acting as a kind of prototype and validation & verification mechanism (Savaglio et al. 2018). As our paper is focused on exactly that topic, we claim that it is highly relevant for IoT as well.
The structure of the paper is as follows: First we present related work in Section 2. Then we give a more in-depth explanation of property-based testing in Section 3. Next we briefly discuss how to conceptually apply property-based testing to ABS in Section 4. The heart of the paper lies in the two case-studies, which we present in Sections 5 and 6. Finally, we conclude and discuss further research in Section 7.
2 RELATED WORK
Research on TDD of ABS is quite new and thus there exist relatively few publications. The work of Collier and Ozik (2013) is the first to discuss how to apply TDD to ABS, using unit-testing to verify the correctness of the implementation up to a certain level. They show how to implement unit-tests within the RePast Framework and make the important point that such software needs to be designed to be sufficiently modular, otherwise testing becomes too cumbersome and involves too many parts. Asta, Özcan, and Siebers (2014) discuss a similar approach to DES in the AnyLogic software toolkit.
Onggo and Karatas (2016) propose Test Driven Simulation Modeling (TDSM) which combines techniques from TDD to simulation modeling. The authors present a case study for maritime search-operations where they employ ABS. They emphasize that simulation modeling is an iterative process, where changes are made to existing parts, making a TDD approach to simulation modeling a good match. They present how to validate their model against analytical solutions from theory using unit-tests by running the whole simulation within a unit-test and then perform a statistical comparison against a formal specification. This approach is important for our SIR and Sugarscape case studies.
Gurcan, Dikenelli, and Bernon (2013) give an in-depth and detailed overview of verification, validation and testing of agent-based models and simulations and propose a generic framework for it. The authors present a generic UML class-model for their framework which they then implement in the two ABS frameworks RePast and MASON. Both of them are implemented in Java and the authors provide a detailed description of how their generic testing framework architecture works and how it utilizes JUnit to run automated tests. To demonstrate their framework they also provide a case study of an agent-based simulation of synaptic connectivity where they give an in-depth explanation of their levels of tests together with code.
Although the work on TDD is scarce in ABS, there exists quite some research on applying TDD and unit-testing to Multi-Agent Systems (MAS). Although MAS is a different discipline than ABS, the latter one has derived many technical concepts from the former one, thus testing concepts applied to MAS might also be applicable to ABS. Nguyen et al. (2011) performed a survey of testing in MAS. It distinguishes between unit-tests of parts that make up an agent, agent tests which test the combined functionality of parts that make up an agent, integration tests which test the interaction of agents within an environment and observe emergent behavior, system tests which test the MAS as a system running at the target environment and acceptance test in which stakeholders verify that the software meets their goal. Although not all ABS simulations need acceptance and system tests, still this classification gives a good direction and can be directly transferred to ABS.
Onggo and Karatas (2016) explicitly mention the problem of test coverage, which would often require writing a large number of tests manually to cover the parameter ranges sufficiently - property-based testing addresses exactly this problem by automating the test-data generation. Note that this is closely related to data-generators (Gurcan, Dikenelli, and Bernon 2013) and load generators and random testing (Burnstein 2010), but property-based testing goes one step further by integrating this into a specification language embedded directly in code, emphasizing a declarative approach and pushing the generators behind the scenes, making them transparent and focusing on the specification rather than on the data-generation.
3 PROPERTY-BASED TESTING
Property-based testing allows functional specifications to be formulated in code, which a property-based testing library then tries to falsify by automatically generating test-data, covering as many cases as possible. When a case is found for which the property fails, the library reduces the test-data to its simplest form for which the test still fails, e.g. shrinking a list to a smaller size. It is easy to see that this kind of testing is especially suited to ABS, because we can formulate specifications, meaning we describe what to test instead of how to test. Also the deductive nature of falsification in property-based testing suits very well the constructive and exploratory nature of ABS. Further, the automatic test-generation can make testing of large scenarios in ABS, which is almost always stochastic by nature, feasible as it does not require the programmer to specify all test-cases by hand, as is required in traditional unit-tests.
Property-based testing was invented by Claessen and Hughes (2000), Claessen and Hughes (2002), who present the QuickCheck library in Haskell, which tries to falsify the specifications by randomly sampling the space. We argue that the stochastic sampling nature of this approach is particularly well suited to ABS, because ABS is itself almost always driven by stochastic events and randomness in the agents' behavior; this correlation should make it straightforward to map ABS to property-testing. A challenge when using QuickCheck is to write custom test-data generators for agents and the environment which cover the space sufficiently well to not miss out on important test-cases. According to Claessen and Hughes (2000) "The major limitation is that there is no measurement of test coverage.". QuickCheck provides help to report the distribution of test-cases but still it could be the case that simple test-cases which would fail are never tested because of the stochastic nature of QuickCheck.
To give a rough idea of how property-based testing works in Haskell, in Listing 1 we give a few examples of properties on lists, which are directly expressed as functions in Haskell. Such a function has to return a Bool, which indicates True in case the test succeeds or False if not, and can take input arguments whose data is automatically generated by QuickCheck. Note that the first line of each function defines its name, its inputs ([Int] is a list of integers) and the output, which is the last type (Bool). Note that the (++) operator concatenates two lists and reverse simply reverses a list.
```haskell
append_associative :: [Int] -> [Int] -> [Int] -> Bool
append_associative xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)
reverse_distributive :: [Int] -> [Int] -> Bool
reverse_distributive xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs
reverse_reverse :: [Int] -> Bool
reverse_reverse xs = reverse (reverse xs) == xs
```
Listing 1: Examples of properties of lists, expressed in Haskell code.
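As a usage sketch (not part of the paper's listing), such properties can be handed to QuickCheck's `quickCheck` driver, which by default generates 100 random test-cases per property:

```haskell
import Test.QuickCheck (quickCheck)

-- Runs each property against 100 randomly generated inputs and reports
-- either "+++ OK, passed 100 tests." or a (shrunken) counterexample.
main :: IO ()
main = do
  quickCheck append_associative
  quickCheck reverse_distributive
  quickCheck reverse_reverse
```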
As a remedy for the potential sampling difficulties of QuickCheck, there exists also a deterministic property-testing library called SmallCheck (Runciman, Naylor, and Lindblad 2008), which instead of randomly sampling the test-space, enumerates test-cases exhaustively up to some depth. It is based on two observations, derived from model-checking, that (1) "If a program fails to meet its specification in some cases, it almost always fails in some simple case" and (2) "If a program does not fail in any simple case, it hardly ever fails in any case" (Runciman, Naylor, and Lindblad 2008). This non-stochastic approach to property-based testing might be a complementary addition in some cases, where the tests are of non-stochastic nature with a search-space which is too large to implement manually by unit-tests but is relatively easy and small enough to enumerate exhaustively. The main difficulty and weakness of using SmallCheck is to reduce the dimensionality of the test-case depth search to prevent combinatorial explosion, which would lead to an exponential number of cases. Thus, one can see QuickCheck and SmallCheck as complementary instead of in opposition to each other. Note that in this paper we only use QuickCheck due to the match of ABS stochastic nature and the random test generation. Also note that we regard property-based testing as complementary to unit-tests and not in opposition - we see it as an addition in the TDD process of developing an ABS.
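For contrast, a minimal sketch (assuming the smallcheck package) of checking the last property from Listing 1 exhaustively with SmallCheck; instead of sampling randomly, all integer lists up to the given depth are enumerated:

```haskell
import Test.SmallCheck (smallCheck)

main :: IO ()
main =
  -- exhaustively enumerate all [Int] values up to depth 4
  smallCheck 4 (\xs -> reverse (reverse xs) == (xs :: [Int]))
```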
4 TESTING ABS IMPLEMENTATIONS
Generally we need to distinguish between two types of testing / verification in ABS.
1. Testing / verification of models for which we have real-world data or an analytical solution which can act as a ground-truth - examples of such models are the SIR model, stock-market simulations, and social simulations of all kinds.
2. Testing / verification of models which are of exploratory nature, inspired by real-world phenomena but for which no ground-truth per se exists - an example of such a model is the Sugarscape model of Epstein and Axtell (1996).
The baseline is that either one has an analytical model as the foundation of an agent-based model or one does not. In the former case, e.g. the SIR model, one can very easily validate the dynamics generated by the simulation to the one generated by the analytical solution through System Dynamics. In the latter case one has basically no idea or description of the emergent behavior of the system prior to its execution e.g. SugarScape. In this case it is important to have some hypothesis about the emergent property / dynamics. The question is how verification / validation works in this setting as there is no formal description of the expected behavior: we don’t have a ground-truth against which we can compare our simulation dynamics.
One distinguishes between black-box and white-box verification where in white-box verification one looks directly at code and reasons about it whereas in black-box verification one generally feeds input to the software / functions / methods and compares it to expected output. Black-box verification is our primary concern in this paper as property-based testing is an instance of black-box verification. In the case of ABS we have the following levels of black-box tests:
1. Isolated and interacting agent behavior parts - test the individual parts which make up the agent behavior under given inputs and test if interaction between agents are correct. For this we can use traditional unit-tests as shown by Collier and Ozik (2013) and also property-based testing as we will show in the use-cases.
2. Simulation dynamics - compare emergent dynamics of the ABS as a whole under given inputs to an analytical solution or real-world dynamics in case there exists some, using statistical tests. We see this type of tests conceptually as property-tests as well because we are testing properties of the model / simulation as we will see in the use-cases. Technically speaking we can both use traditional unit-tests and also property-based tests to implement them - conceptually they are property-tests.
3. Hypotheses - test whether hypotheses about the model are valid or invalid. This is very similar to the previous point but without comparing it to analytical solutions or real-world dynamics but only to some hypothetical values.
5 CASE STUDY 1: SIR
As a first use-case we discuss property-based testing for the explanatory agent-based SIR model. It is a very well studied and understood compartment model from epidemiology (Kermack and McKendrick 1927) which allows simulating the dynamics of an infectious disease like influenza, tuberculosis, chicken pox, rubella and measles spreading through a population. We implemented an agent-based version of this model, inspired by Macal (2010), with the code freely accessible from our repository (Thaler 2019a).
In this model, people in a population of size \( N \) can be in either one of three states Susceptible, Infected or Recovered at a particular time, where it is assumed that initially there is at least one infected person in the population. Thus, there are always a total of \( N \) people, divided into \( S \) susceptibles, \( I \) infected and \( R \)
recovered ones. People interact on average with a given rate of $\beta$ other people per time-unit and become infected with a given probability $\gamma$ when interacting with an infected person. When infected, a person recovers on average after $\delta$ time-units and is then immune to further infections. An interaction between infected persons does not lead to re-infection, thus these interactions are ignored in this model. Due to the model's origin in System Dynamics (SD) (Porter 1962), there exists a top-down formalization in SD with the following equations. The dynamics are driven by the two rates $infectionRate = \frac{\beta \gamma S I}{N}$ and $recoveryRate = \frac{I}{\delta}$.
The change of susceptible agents $S$ per time-unit is $\frac{dS}{dt} = -infectionRate$, the one of infected agents $I$ is $\frac{dI}{dt} = infectionRate - recoveryRate$ and for recovered agents $R$ it is $\frac{dR}{dt} = recoveryRate$.
5.1 Deriving a property
Our goal is to derive a property which connects the agent-based implementation to the SD equations. The foundation are both the infection- and recovery-rate where the infection-rate determines how many Susceptible agents per time-unit become Infected and the recovery-rate determines how many Infected agents per time-unit become Recovered. Let’s look at Algorithm 1, describing the susceptible agent behavior, which is key for the infection-rate:
```
generate on average $\beta$ make-contact events per time-unit;
if make-contact event then
select random agent $randA$ from population;
if agent $randA$ infected then
become infected with probability $\gamma$;
end
end
```
Algorithm 1: Susceptible behavior
Per time-unit, a susceptible agent makes on average contact with $\beta$ other agents, where in the case of a contact with an infected agent, the susceptible agent becomes infected with a given probability $\gamma$. In this description there is another probability hidden, the probability of making contact with an infected agent, which is simply the ratio of the number of infected agents to the total number of agents. We can now derive the formula for the probability of a Susceptible agent becoming infected: $\beta \gamma \frac{I}{N}$, where $I$ is the number of infected agents and $N$ the total population size. When we look at the formula we can see that it is conceptually the same representation of the infection-rate of the SD specification as shown above - except that it only considers a single Susceptible agent instead of the aggregate of $S$ susceptible agents. We now have a property we can check using a property-test.
5.2 Constructing the property-based test
Having a property (law), we want to construct a property-test for it. The formula is invariant under random population mixes and thus should hold for varying agent populations where the mix of Susceptible, Infected and Recovered agents is random - thus we use QuickCheck to generate the population randomly, and the property must still hold.
Obviously we need to pay attention to the fact that we are dealing with a stochastic system: we can only talk about averages, so it does not suffice to run a single agent; instead we repeat this for 1,000 Susceptible agents (all with different random-number seeds). We thus compute the simulated infection-rate simply by counting the agents which got infected and dividing it by the total number of replications $N = 1,000$. To check whether the test has passed we run it 100 times and use a two-sided T-test to check if the sample infection-rate is statistically equal to the hypothetical infection-rate. When executing the tests,
QuickCheck generates 100 test-cases by randomly generating 100 different randAs inputs to the test. All have to pass for the whole property-test to pass. See Algorithm 2 for the pseudo-code of this property-based test.
**Algorithm 2: Property-based test for infection-rate.**
This is the very power which property-based testing offers us: we directly express the specification of the original SD model in a test of our agent-based implementation and let QuickCheck generate random test cases for us. This closely ties our implementation to the original specification and raises the confidence to a very high level that it is actually a valid and correct implementation. Also using this test we can determine the optimal $\Delta t$ for running our simulation: because the SIR model is a time-driven one, we need to select a sufficiently small $\Delta t$ to avoid sampling issues (Thaler, Altenkirch, and Siebers 2018). Using this property-test one can start out with an initial $\Delta t$, halving it until the tests pass. Further, by using property-tests we found out about a special case we had not covered in the implementation of the Susceptible agent behavior. This shows that property-based testing is not only useful for encoding specifications for regression tests but that it is indeed also a valuable tool in finding real bugs e.g. due to missed edge-cases.
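For illustration, the following self-contained QuickCheck sketch shows the shape of such a test. It is not the paper's implementation: the agent step is simplified to exactly $\beta$ contacts per time-unit, the expected value is the exact probability $1 - (1 - \gamma I / N)^{\beta}$ (which $\beta \gamma I / N$ approximates for small rates), and a simple tolerance check stands in for the T-test.

```haskell
import Test.QuickCheck

data SIRState = Susceptible | Infected | Recovered
  deriving (Eq, Show)

-- Illustrative parameters (not the values used in the paper's experiments).
beta :: Int          -- contacts per time-unit
beta = 5

gamma :: Double      -- probability of infection on contact with an infected agent
gamma = 0.05

-- One time-step of a single Susceptible agent against a fixed population:
-- for simplicity exactly beta contacts are made instead of beta on average.
susceptibleStep :: [SIRState] -> Gen Bool   -- True = the agent got infected
susceptibleStep pop = do
  contacts   <- vectorOf beta (elements pop)
  infections <- mapM infectedContact contacts
  pure (or infections)
  where
    infectedContact Infected = (< gamma) <$> choose (0, 1)
    infectedContact _        = pure False

-- Property: over many replications the observed infection fraction stays close
-- to the analytically expected probability 1 - (1 - gamma * I / N)^beta.
prop_infectionRate :: Property
prop_infectionRate =
  forAll (listOf1 (elements [Susceptible, Infected, Recovered])) $ \pop -> do
    let n        = fromIntegral (length pop)
        i        = fromIntegral (length (filter (== Infected) pop))
        expected = 1 - (1 - gamma * i / n) ^ beta
    results <- vectorOf 10000 (susceptibleStep pop)
    let observed = fromIntegral (length (filter id results)) / 10000
    pure (abs (observed - expected) < 0.05)  -- tolerance check instead of a T-test

main :: IO ()
main = quickCheck prop_infectionRate
```

With 10,000 replications per generated population the Monte-Carlo noise is well below the tolerance, so all 100 randomly generated populations should pass.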
**6 CASE STUDY II: SUGARSCAPE**
We now look at how property-based testing can be made of use in the *exploratory* Sugarscape model of Epstein and Axtell (1996). It was one of the first models in ABS, with the aim to *grow* an artificial society by simulation and connect observations in the simulation to phenomena observed in real-world societies. In this model a population of agents moves around in a discrete 2D environment, where sugar grows, and the agents interact with each other and the environment in many different ways. The main features of this model are (amongst others): searching, harvesting and consuming of resources, wealth and age distributions, population dynamics under sexual reproduction, cultural processes and transmission, combat and assimilation, bilateral decentralized trading (bartering) between agents with endogenous demand and supply, and disease processes (transmission and immunology). For our research we undertook a \textit{full and validated} implementation of the Sugarscape model, with the code and a description of the validation process freely accessible from our repository (Thaler 2019b).
Whereas in the explanatory SIR case-study we had an analytical solution, inspired by the SD origins of the model, the fundamental difference in the exploratory Sugarscape model is that no such analytical solutions exist. This raises the question, which properties we can actually test in such a model - we propose the following:
- **Environment behavior** - the Sugarscape environment has its own behavior which boils down to regrowing of resources. The correct working can be tested using property-tests by generating random environments and checking laws governing the regrowth.
- **Agent behavior** - obviously full agent behavior could be tested with property-tests, using randomly generated agents (with random values in their properties). It turned out to be quite difficult to derive properties for full agent behavior, thus in this paper we restricted ourselves to test parts of agent behavior and also left out testing of agent interactions.
- **Emergent behavior** - although we don’t have analytical descriptions of properties of our model in the case of Sugarscape, there still exist informal descriptions and more formal hypotheses about emergent properties. Property-testing can be used to check them and if proved to be valid can be seen as regression tests.
### 6.1 Environment behavior
The environment in the Sugarscape model has some very simple behavior: each site has a sugar level and when harvested by an agent, it regrows back to the full level over time. Depending on the configuration of the model it either grows back immediately within 1 tick or over multiple ticks. We can construct simple property-tests for these behaviors. In the case the sugar grows back immediately, we let QuickCheck generate a random environment and then run the environment behavior for 1 tick and then check the property that all sites have to be back to their maximum sugar level. In the case of regrow over multiple ticks, we also use QuickCheck to generate a random environment but additionally a random \textit{positive} rate (which is a floating point number) which we then use to calculate the number of ticks until full regrowth. After running the random environment for the given number of ticks all sites have to be back to full sugar level - see Algorithm 3 for this case.
Note that QuickCheck initially doesn’t know how to generate a random environment because each site consists of a custom data-structure for which QuickCheck is not able to generate random instances by default. This problem is solved by writing a custom data-generator, for which existing QuickCheck functions can be used e.g. picking the current sugar level of a site from a random range.
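For illustration, a custom generator along these lines could look as follows; the Site and Environment types and their fields are assumptions for this sketch, not the data structures of the authors' repository:

```haskell
import Test.QuickCheck

-- Hypothetical site and environment types for this sketch only.
data Site = Site
  { sugarCapacity :: Double   -- maximum sugar level of the site
  , sugarLevel    :: Double   -- current level, between 0 and the capacity
  } deriving Show

newtype Environment = Environment [[Site]] deriving Show

-- Generator for a single site: capacity and current level picked from ranges.
genSite :: Gen Site
genSite = do
  capacity <- choose (0, 4)          -- illustrative capacity range
  level    <- choose (0, capacity)   -- current sugar level from a random range
  pure (Site capacity level)

-- Custom generator for random rectangular environments.
instance Arbitrary Environment where
  arbitrary = do
    width  <- choose (1, 50)
    height <- choose (1, 50)
    Environment <$> vectorOf height (vectorOf width genSite)
```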
The Sugarscape environment is a torus where the coordinates wrap around in both dimensions. To check whether the implementation of the wrapping calculation is correct we used both unit- and property-tests. With the unit-tests we carefully constructed all possible cases we could think of and came up with 13 test-cases. With the property-based test in Algorithm 4, we simply defined a single test-case where we expressed the property, that after wrapping \textit{any} random coordinates supplied by QuickCheck, the wrapped coordinates have to be within bounds.
```
input : Random environment $env$ generated by QuickCheck
input : Random regrowth rate $randRate$ generated by QuickCheck
maxTicks = maxSugarCapacityOnSites / randRate;
environment = runEnvironmentTicks maxTicks env;
sites = getEnvironmentSites environment;
if all sites maxSugarLevel then
    PASS;
else
    FAIL;
end
```
Algorithm 3: Property-based test for rate-based regrow of sugar on all sites.
```
input : Random 2D discrete coordinate $randCoord$ generated by QuickCheck
(x, y) = wrapCoordinates randCoord;
if (x ≥ 0 and x ≤ environmentDimX) and (y ≥ 0 and y ≤ environmentDimY) then
    PASS;
else
    FAIL;
end
```
Algorithm 4: Property-based test for wrap-coordinates functionality.
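A hedged Haskell rendering of this property, with assumed dimensions and a plausible modular-arithmetic implementation of wrapCoordinates (the repository's actual code may differ):

```haskell
import Test.QuickCheck

-- Assumed environment dimensions for this sketch.
environmentDimX, environmentDimY :: Int
environmentDimX = 50
environmentDimY = 50

-- Torus wrapping via modular arithmetic; Haskell's `mod` already maps
-- negative inputs into the range [0, d-1].
wrapCoordinates :: (Int, Int) -> (Int, Int)
wrapCoordinates (x, y) = (x `mod` environmentDimX, y `mod` environmentDimY)

-- Property: any wrapped coordinate lies within the environment bounds.
prop_wrapInBounds :: (Int, Int) -> Bool
prop_wrapInBounds coord =
  let (x, y) = wrapCoordinates coord
  in  x >= 0 && x < environmentDimX && y >= 0 && y < environmentDimY

main :: IO ()
main = quickCheck prop_wrapInBounds
```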
6.2 Agent behavior
We implemented a number of property-tests for agent functions which cover just a part of an agent's behavior: checks whether an agent has died of age or starved to death, the metabolism, the immunization step, checks whether an agent is a potential borrower or fertile, the lookout, and trading transactions. We provided custom data-generators for the agents, let QuickCheck generate the random data, and ran the agent with the provided data, checking for the properties.
As an example, provided in Algorithm 5, we give the property-test of an agent dying of age, which happens when the agent's age is greater than or equal to its maximum age. It might look trivial but property-based testing helps us here to clearly state the invariants (properties) and relieves us from constructing all possible edge-cases because we rely on QuickCheck's ability to cover them for us.
```
input : Random agent $ag$ with random age generated by QuickCheck
died = hasAgentDiedOfAge ag;
if died == (age ag >= maxAge ag) then
    PASS;
else
    FAIL;
end
```
Algorithm 5: Property-based test for agent dying of age.
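A minimal Haskell sketch of this test with a hypothetical agent record and generator (names and ranges are assumptions, not taken from the repository):

```haskell
import Test.QuickCheck

-- Hypothetical minimal agent record for this property only.
data Agent = Agent { age :: Int, maxAge :: Int } deriving Show

-- A plausible implementation of the function under test.
hasAgentDiedOfAge :: Agent -> Bool
hasAgentDiedOfAge ag = age ag >= maxAge ag

-- Custom generator producing ages both below and above the maximum age,
-- so that both outcomes of the property are exercised.
genAgent :: Gen Agent
genAgent = do
  ma <- choose (60, 100)
  a  <- choose (0, ma + 20)
  pure (Agent a ma)

prop_dieOfAge :: Property
prop_dieOfAge = forAll genAgent $ \ag ->
  hasAgentDiedOfAge ag == (age ag >= maxAge ag)

main :: IO ()
main = quickCheck prop_dieOfAge
```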
6.3 Emergent properties
In the validation and verification process of our Sugarscape implementation we put informal descriptions and hypotheses about emergent properties from the Sugarscape book into formal property-tests. Examples of such hypotheses and informal descriptions of emergent properties are: the carrying capacity becomes stable after 100 steps; when agents trade with each other, after 1,000 steps the standard deviation of trading prices is less than 0.05; when there are cultures, after 2,700 steps either one culture dominates the other or both are equally present.
The property we test for is whether the emergent property under test is stable under varying random-number seeds or not. Put another way, we let QuickCheck generate random number streams and require that the tests all pass. Unfortunately, this revealed that this property doesn't hold for all hypotheses. The problem is that QuickCheck generates by default 100 test-cases for each property-test where all need to pass for the whole property-test to pass - this wasn't the case: most of the 100 test-cases passed but unfortunately not all. Thus in this case a different approach is required: instead of requiring every test to pass we require that most tests pass, which can be achieved using a T-test with a confidence interval of e.g. 95%. This means we won't use QuickCheck anymore and resort to a normal unit-test where we run the simulation 100 times with different random number streams each time and then perform a T-test with a 95% confidence interval. Note that we are now technically speaking of a unit-test but conceptually it is still a property-test.
In Algorithm 6 we show a property-test for checking whether after 1,000 steps the standard deviation of trading prices is less than 0.05. The test passes if out of 100 runs a 95% confidence interval is reached using a T-test.
```
maxTicks = 1000;
replications = 100;
stdAverage = 0.05;
tradingPriceStdsList = empty list;
for i ← 1 to replications do
    rng = new random number generator;
    simContext = initSimulation rng;
    out = runSimulation maxTicks simContext;
    tps = extractTradingPrices out;
    tpsStd = calculate standard deviation of tps;
    insert tpsStd into tradingPriceStdsList;
end
tTestPass = perform 1-sided t-test comparing stdAverage with tradingPriceStdsList on a 0.95 interval;
if tTestPass then
    PASS;
else
    FAIL;
end
```
Algorithm 6: Property-based test for trading prices.
7 CONCLUSIONS
We found property-based testing particularly well suited for ABS, first due to the stochastic nature of ABS and second because we can formulate specifications, meaning we describe what to test instead of how to test. Also the deductive nature of falsification in property-based testing suits very well the constructive and often exploratory nature of ABS.
Although property-based testing has its origins in Haskell, similar libraries have been developed for other languages as well, e.g. Java, Python and C++, and we hope that our research sparks an interest in applying property-based testing to the established object-oriented languages in ABS.
We didn't look into testing full agent and interacting agent behavior using property-tests due to its complexity, which would justify a whole paper on its own. Due to its inherently stateful nature with complex dependencies between valid states and agent actions we need a more sophisticated approach as outlined by De Vries (2019), where the author shows how to build a meta-model and commands which allow specifying properties and valid state-transitions which can be generated automatically. We leave this for further research.
ACKNOWLEDGMENTS
The authors would like to thank J. Hey for valuable feedback and discussions.
REFERENCES
AUTHOR BIOGRAPHIES
**JONATHAN THALER** is a Ph.D. student at the University of Nottingham and part of the Intelligent Modelling and Analysis Group (http://www.cs.nott.ac.uk/~psxjat/). His main research interest is the benefits and drawbacks of using pure functional programming with Haskell for implementing Agent-Based Simulations.
**DR. PEER-OLAF SIEBERS** is an Assistant Professor at the School of Computer Science, University of Nottingham, UK (http://www.cs.nott.ac.uk/~pszps/). His main research interest is the application of computer simulation to study human-centric complex adaptive systems. He is a strong advocate of Object Oriented Agent-Based Social Simulation. This is a novel and highly interdisciplinary research field, involving disciplines like Social Science, Economics, Psychology, Operations Research, Geography, and Computer Science. His current research focuses on Urban Sustainability and he is a co-investigator in several related projects and a member of the university’s "Sustainable and Resilient Cities" Research Priority Area management team.
Document Structure and Multilingual Authoring
Caroline Brun Marc Dymetman Veronika Lux
Xerox Research Centre Europe
6 chemin de Maupertuis
38240 Meylan, France
{brun,dymetman,lux}@xrce.xerox.com
Abstract
The use of XML-based authoring tools is swiftly becoming a standard in the world of technical documentation. An XML document is a mixture of structure (the tags) and surface (text between the tags). The structure reflects the choices made by the author during the top-down stepwise refinement of the document under control of a DTD grammar. These choices are typically choices of meaning which are independent of the language in which the document is rendered, and can be seen as a kind of interlingua for the class of documents which is modeled by the DTD. Based on this remark, we advocate a radicalization of XML authoring, where the semantic content of the document is accounted for exclusively in terms of choice structures, and where appropriate rendering realization mechanisms are responsible for producing the surface, possibly in several languages simultaneously. In this view, XML authoring has strong connections to natural language generation and text authoring. We describe the IG (Interaction Grammar) formalism, an extension of DTD’s which permits powerful linguistic manipulations, and show its application to the production of multilingual versions of a certain class of pharmaceutical documents.
1 Introduction
The world of technical documentation is forcefully moving towards the use of authoring tools based on the XML markup language (W3C, 1998; Pardi, 1999). This language is based on grammatical specifications, called DTD’s, which are roughly similar to context-free grammars1 with an arbitrary number of non-terminals and exactly one predefined terminal called pCDATA. The pCDATA terminal has a special status: it can dominate any character string (subject to certain restrictions on the characters allowed). Authoring is seen as a top-down interactive process of step-wise refinement of the root nonterminal (corresponding to the whole document) where the author iteratively selects a rule for expanding a nonterminal already present in the tree and where in addition s/he can choose an arbitrary sequence of characters (roughly) for expanding the pCDATA node. The resulting document is a mixture of tree-like structure (the context-free derivation tree corresponding to the author’s selections), represented through tags, and of surface, represented as free-text (PCDATA) between the tags.
We see however a tension between the structure and surface aspects of an XML document:
- While structural choices are under system control (they have to be compatible with the DTD), surface choices are not.2
- Surface strings are treated as unanalysable chunks for the styling mechanisms that render the XML document to the reader. They can be displayed in a given font or moved around, but they lack the internal structure that would permit to “re-purpose” them for different rendering situations, such as displaying on mobile telephone screens, wording differently for a specific audience, or producing procedurally adequate phonetic output. This situation stands in contrast with the underlying philosophy of XML which emphasizes the separation between content specification and the multiple situations in which this content can be exploited.
- Structural decisions tend to be associated with choices of meaning which are independent of the language in which the document is rendered. Thus for instance the DTD for an aircraft maintenance manual might distinguish between two kinds of risks: caution (material damage risk) and warning (risk to the operator). By selecting one of these options (a choice that will lead to further lower-level choices), the author takes a decision of a semantic nature, which is quite independent of the language in which the document is to be rendered, and which could be exploited to produce multilingual versions of the
1But see (Wood, 1995; Prescod, 1998) for discussions of the differences.
2With the emergence of schemas (W3C, 1999a), which permits some typing of the surface (float, boolean, string, etc.), some degree of control is becoming more feasible.
document. By contrast, a PCDATA string is language-specific and ill-suited for multilingual applications.
These remarks point to a possible radical view of XML authoring that advocates that surface strings be altogether eliminated from the document content, and that author choices be all under the explicit control of the DTD and reflected in the document structure. Such a view, which is argued for in a related paper (Dymetman et al., 2000), emphasizes the link between XML document authoring and multilingual text authoring/generation (Power and Scott, 1998; Hartley and Paris, 1997; Cech, 1996): the choices made by the author are treated as a kind of interlingua (specific to the class of documents being modelled), and it is the responsibility of appropriate “rendering” mechanisms to produce actual text from these choices in the different languages\(^3\) under consideration.
For such a program, existing XML tools suffer however from serious limitations. First, DTD’s are too poor in expressive power (they are close to context-free grammars) for expressing dependencies between different parts of the document, an aspect which becomes central as soon as the document micro-structure (its fine-grained semantic structure) starts to play a prominent role, as opposed to simply its macro-structure (its organization in large semantic units, typically larger than a paragraph). Second, current rendering mechanisms such as CSS (Cascading Style Sheets) or XSLT (XSL transformation language) (W3C, 1999b) are ill-adapted for handling even simple linguistic phenomena such as morphological variation or subject-verb agreement.
In order to overcome these limitations, we are using a formalism, Interaction Grammars (IG), a specialization of Definite Clause Grammars (Pereira and Warren, 1980) which originates in A. Ranta’s Grammatical Framework (GF) (Ranta; Mäenpää and Ranta, 1999; Dymetman et al., 2000), a grammatical formalism based on Martin-Löf’s Type Theory (Martin-Löf, 1984) and building on previous experience with interactive mathematical proof editors (Magnusson and Nordström, 1994). In this formalism, the carrier of meaning is a choice tree (called “abstract tree” in GF), a strongly typed object in which dependencies between substructures can be easily stated using the notion of dependent types.
The remainder of this paper is organized as follows. In section 2, we give a high level overview of the Multilingual Document Authoring (MDA) system that we have developed at XRCE. In section 3, we present in some detail the formalism of Interaction Grammars. In section 4, we describe an application of MDA to a certain domain of pharmaceutical documents.
2 Our approach to Multilingual Document Authoring
Our Multilingual Document Authoring system has the following main features:
First, the authoring process is monolingual, but the results are multilingual. At each point of the process the author can view in his/her own language the text s/he has authored so far, and areas where the text still needs refinement are highlighted. Menus for selecting a refinement are also presented to the author in his/her own language. Thus, the author is always overtly working in the language s/he knows, but is implicitly building a language-independent representation of the document content. From this representation, the system builds multilingual texts in any of several languages simultaneously. This approach characterizes our system as belonging to an emerging paradigm of “natural language authoring” (Power and Scott, 1998; Hartley and Paris, 1997), which is distinguished from natural language generation by the fact that the semantic input is provided interactively by a person rather than by a program accessing digital knowledge representations.
Second, the system maintains strong control both over the semantics and the realizations of the document. At the semantic level, dependencies between different parts of the representation of the document content can be imposed: for instance the choice of a certain chemical at a certain point in a maintenance manual may lead to an obligatory warning at another point in the manual. At the realization level, which is not directly manipulated by the author, the system can impose terminological choices (e.g. company-specific nomenclature for a given concept) or stylistic choices (such as choosing between using the infinitive or the imperative mode in French to express an instruction to an operator).
Finally, and possibly most distinctively, the semantic representation underlying the authoring process is strongly document-centric and geared towards directly expressing the choices which uniquely characterize a given document in an homogeneous class of documents belonging to the same domain. Our view is document-centric in the sense that it takes as its point of departure the widespread practice of using XML tools for authoring the macro-structure of documents, and extends this practice towards an account of their micro-structure. But the analysis of the micro-structure is only pushed as far as is necessary in order to account for the variability inside the class of documents considered, and not in terms of the ultimate meaning constituents of language. This micro-structure can in general be determined by studying a corpus of documents and by
\(^3\)The word “language” should be understood here in an extended sense that not only covers English, French, etc., but also different styles or modes of communication.
exposing the structure of choices that distinguish a given document from other documents in this class. This structure of choices is represented in a choice tree, which is viewed as the semantic representation for the document.\footnote{This kind of semantic representation stands in contrast to some representations commonly used in NLP, which tend to emphasize the fine-grained predicate-argument structure of sentences independently of the productivity of such analyses for a given class of documents.} One single choice may be associated with text realizations of drastically different granularities: while in a pharmaceutical document the choice of an ingredient may result in the production of a single word, the choice of a "responsibility-waiver" may result in a long stereotypical paragraph of text, the further analysis of which would be totally counter-productive.
3 Interaction Grammars
Let us now give some details about the formalism of Interaction Grammars. We start by explaining the notion of choice tree on the basis of a simple context-free grammar, analogous to a DTD.
Context-free grammars and choice trees
Let's consider the following context-free grammar for describing simple "addresses" in English such as "Paris, France".\footnote{For compatibility with the notations to follow, we use lowercase to denote nonterminals, and quoted strings to denote terminals, rather than the more usual uppercase/lowercase conventions.}
\begin{verbatim}
address --> city, ",", country.
country --> "France".
country --> "Germany".
city    --> "Paris".
city    --> "Hamburg".
city    --> "the capital of", country.
\end{verbatim}
What does it mean, remembering the XML analogy, to author a "document" with such a CFG? It means that the author is iteratively presented with partial derivation trees relative to the grammar (partial in the sense that leaves can be terminals or nonterminals), and at each given authoring step both selects a certain nonterminal to "refine", and also a given rule to extend this non-terminal one step further; this action is repeated until the derivation tree is complete.
If one conventionally uses the identifier \texttt{nonterminal\_i} to name the i-th rule expanding the nonterminal \texttt{nonterminal}, then the collection of choices made by the author during a session can be represented by a \texttt{choice tree} labelled with rule identifiers, also called \texttt{combinators}. An example of such a tree is \texttt{address1(city2,country2)}, which corresponds to choices leading to the output "Hamburg, Germany".\footnote{Such a choice tree can be projected into a derivation tree in a straightforward way, by mapping a combinator \texttt{nonterminal\_i} into the nonterminal name \texttt{nonterminal}, and by introducing terminal material as required by the specific rules.}
In practice, rather than using combinator names which strictly adhere to this numbering scheme, we prefer to use mnemonic names directly relating to the meaning of the choices. In the sequel we will use the names \texttt{adr}, \texttt{fra}, \texttt{ger}, \texttt{par}, \texttt{ham}, \texttt{cap} for the six rules in the example grammar. The choice tree just described is thus written \texttt{adr(ham,ger)}.
Making choice trees explicit
As we have argued previously, choices trees are in our view the cen-
tral repository of document content and we want to
manipulate them explicitly. Definite Clause Grammars represent possibly the simplest extension of
context-free grammars permitting such manipulation.
Our context-free grammar can be extended straightforwardly into the DCG:\footnote{According to the usual logic programming conventions,
lowercase letters denote predicates and functionals, whereas up-
percase letters denote metavariables that can be instantiated
with terms.}
\begin{verbatim}
address(adr(C,Co)) --> city(C), ",",
                       country(Co).
country(fra) --> "France".
country(ger) --> "Germany".
city(par) --> "Paris".
city(ham) --> "Hamburg".
city(cap(Co)) --> "the capital of",
                  country(Co).
\end{verbatim}
What these rules do is simply to construct choice trees recursively. Thus, the first rule says that if the author has described a city through the choice tree \texttt{C} and a country through the choice tree \texttt{Co}, then the choice tree \texttt{adr(C,Co)} represents the description of an address.
If now, in this DCG, we "forget" all the terminals,
which are language-specific, by replacing them with
the empty string, we obtain the following "abstract
grammar":
\begin{verbatim}
address(adr(C,Co)) --> city(C), country(Co).
country(fra) --> [].
country(ger) --> [].
city(par) --> [].
city(ham) --> [].
city(cap(Co)) --> country(Co).
\end{verbatim}
which is in fact equivalent to the definite clause \texttt{program}:\footnote{In the sense that rewriting the nonterminal goal \texttt{address(adr(C,Co))} to the empty string in the DCG is equivalent to proving the goal \texttt{address(adr(C,Co))} in the program.}
\begin{verbatim}
address(adr(C,Co)) :- city(C), country(Co).
country(fra).
country(ger).
city(par).
city(ham).
city(cap(Co)) :- country(Co).
\end{verbatim}
This abstract grammar (or, equivalently, this logic program) is language independent and recursively defines a set of well-formed choice trees of different categories, or types. Thus, the tree adr(ham,ger) is well-formed "in" the type address, and the tree cap(fra) is well-formed in the type city.
Dependent Types In order to stress the type-related aspects of the previous tree specifications, we are actually using in our current implementation the following notation for the previous abstract grammar:
\begin{verbatim}
adr(C,Co)::address --> C::city,
                       Co::country.
fra::country --> [].
ger::country --> [].
par::city --> [].
ham::city --> [].
cap(Co)::city --> Co::country.
\end{verbatim}
The first rule is then read: "if C is a tree of type city, and Co a tree of type country, then adr(C,Co) is a tree of type address", and similarly for the remaining rules.
The grammars we have given so far are deficient in one important respect: there is no dependency between the city and the country in the same address, so that the tree adr(ham,fra) is well-formed in the type address. In order to remedy this problem, dependent types (Martin-Löf, 1984) can be used. From our point of view, a dependent type is simply a type that can be parametrized by objects of other types. We write:
\begin{verbatim}
adr(C,Co)::address --> C::city(Co),
                       Co::country.
fra::country --> [].
ger::country --> [].
par::city(fra) --> [].
ham::city(ger) --> [].
cap(Co)::city(Co) --> Co::country.
\end{verbatim}
in which the type city is now parametrized by objects of type country, and where the notation par::city(fra) is read as "par is a tree of the type: city of fra".\footnote{In terms of the underlying Prolog implementation, '::' is simply an infix operator for a predicate of arity 2 which relates an object and its type; both simple and dependent types are handled straightforwardly.}
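To make the effect of such dependent-type rules concrete, here is a small illustrative sketch in Python (our addition; the actual system is implemented in Prolog, and the \texttt{typeof} helper below is purely hypothetical). It checks whether a choice tree is well-formed under the dependent-typed abstract grammar above, accepting adr(ham,ger) and rejecting adr(ham,fra).
\begin{verbatim}
# Illustrative sketch only (ours, not the authors' Prolog implementation):
# a choice tree is a combinator name or a tuple (name, arg1, ...); a type
# is a tuple such as ("country",) or ("city", <country tree>).

def typeof(tree):
    """Return the dependent type of a well-formed choice tree, else None."""
    name, args = (tree[0], tuple(tree[1:])) if isinstance(tree, tuple) else (tree, ())
    if name == "adr" and len(args) == 2:      # adr(C,Co)::address --> C::city(Co), Co::country
        c, co = args
        if typeof(co) == ("country",) and typeof(c) == ("city", co):
            return ("address",)
    elif name in ("fra", "ger") and not args: # fra::country, ger::country
        return ("country",)
    elif name == "par" and not args:          # par::city(fra)
        return ("city", "fra")
    elif name == "ham" and not args:          # ham::city(ger)
        return ("city", "ger")
    elif name == "cap" and len(args) == 1:    # cap(Co)::city(Co) --> Co::country
        (co,) = args
        if typeof(co) == ("country",):
            return ("city", co)
    return None

print(typeof(("adr", "ham", "ger")))           # ('address',)   "Hamburg, Germany"
print(typeof(("adr", "ham", "fra")))           # None: ham is a city of ger, not of fra
print(typeof(("adr", ("cap", "fra"), "fra")))  # ('address',)   "the capital of France, France"
\end{verbatim}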
Parallel Grammars and Semantics-driven Compositionality for Text Realization We have just explained how abstract grammars can be used for specifying well-formed typed trees representing the content of a document. In order to produce actual multilingual documents from such specifications, a simple approach is to allow for parallel realization grammars (English, French, ...), which all have the same underlying abstract grammar (program), but which introduce terminals specific to the language at hand. Thus the following English and French grammars are parallel to the previous abstract grammar:\footnote{Because the order of goals in the right-hand side of an abstract grammar rule is irrelevant, the goals on the right-hand sides of rules in two parallel realization grammars can appear in a different order, which permits certain reorganizations of the linguistic material (situation not shown in the example).}
\begin{verbatim}
adr(C,Co)::address --> C::city(Co), ", ",
                       Co::country.
fra::country --> "France".
ger::country --> "Germany".
par::city(fra) --> "Paris".
ham::city(ger) --> "Hamburg".
cap(Co)::city(Co) --> "the capital of",
                      Co::country.

adr(C,Co)::address --> C::city(Co), ", ",
                       Co::country.
fra::country --> "la France".
ger::country --> "l'Allemagne".
par::city(fra) --> "Paris".
ham::city(ger) --> "Hambourg".
cap(Co)::city(Co) --> "la capitale de",
                      Co::country.
\end{verbatim}
This view of realization is essentially the one we have adopted in the prototype at the time of writing, with some straightforward additions permitting the handling of agreement constraints and morphological variants. This simple approach has proven quite adequate for the class of documents we have been interested in.
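As a rough illustration of this parallel-realization idea (not the prototype's actual Prolog mechanism), the following Python sketch realizes the same choice tree through two language-specific lexicons; the LEXICON table and the realize function are our own simplification.
\begin{verbatim}
# Illustrative sketch (ours): one choice tree, two parallel realizations.
LEXICON = {
    "english": {"fra": "France", "ger": "Germany", "par": "Paris",
                "ham": "Hamburg", "cap": "the capital of"},
    "french":  {"fra": "la France", "ger": "l'Allemagne", "par": "Paris",
                "ham": "Hambourg", "cap": "la capitale de"},
}

def realize(tree, lang):
    """Compositionally map a choice tree to a string for one language."""
    lex = LEXICON[lang]
    name, args = (tree[0], tree[1:]) if isinstance(tree, tuple) else (tree, ())
    if name == "adr":       # adr(C,Co): "<city>, <country>"
        return f"{realize(args[0], lang)}, {realize(args[1], lang)}"
    if name == "cap":       # cap(Co): "the capital of <country>"
        return f"{lex['cap']} {realize(args[0], lang)}"
    return lex[name]        # terminal combinators

tree = ("adr", ("cap", "fra"), "ger")
print(realize(tree, "english"))   # the capital of France, Germany
print(realize(tree, "french"))    # la capitale de la France, l'Allemagne
\end{verbatim}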
However, such an approach sees the activity of generating text from an abstract structure as basically a compositional process on strings, that is, a process where strings are recursively associated with subtrees and concatenated to produce strings at the next subtree level. But such a direct procedure has well-known limitations when the semantic and syntactic levels do not have a direct correspondence (a simple example: ordering a list of modifiers around a noun). We are currently experimenting with a powerful extension of string compositionality where the objects compositionally associated with abstract subtrees are not strings, but syntactic representations with rich internal structure.
The text itself is obtained from the syntactic representation associated with the total tree by simply enumerating its leaves.
In this extended view, realization grammars have rules of the following form:
\begin{verbatim}
a1(B,C,...)::a(D,...)-Syn -->
    B::b(E,...)-SynB,
    C::c(F,...)-SynC,
    {constraints(B,C,...,D,E,F,...)},
    {compose_english(SynB,SynC,...,Syn)}.
\end{verbatim}
The rule shown is a rule for English: the syntactic representations are language dependent; parallel rules for the other languages are obtained by replacing the \text{compose}_\text{english} constraint (which is unique to this rule) by constraints appropriate to the other languages under consideration.
**Heterogeneous Trees and Interactivity** Natural language authoring is different from natural language generation in one crucial respect. Whenever the abstract tree to be generated is incomplete (for instance the tree \text{cap}(\text{Co})), that is, has some leaves which are still uninstantiated variables, the generation process should not proceed by nondeterministically enumerating texts for all the possible instantiations of the initial incomplete structure. Instead it should display to the author as much of the text as it can in its present “knowledge state”, and enter into an interaction with the author to allow her to further refine the incomplete structure, that is, to further instantiate some of the uninstantiated leaves. To this purpose, it is useful to introduce, along with the usual combinators (\text{adr}, \text{fra}, \text{cap}, etc.), new combinators of arity 0 called \text{typenames}, which are notated \text{type}, and are of type \text{type}. These combinators are allowed to stand as leaves (e.g. in the tree \text{cap}(\text{country})) and the trees thus obtained are said to be heterogeneous. The typenames are treated by the text generation process as if they were standard semantic units, that is, they are associated with text units which are generated “at their proper place” in the generated output. These text units are specially phrased and highlighted to indicate to the author that some choice has to be made to refine the underlying type (e.g. obtaining the text “la capitale de PAYS”). This choice has the effect of further instantiating the incomplete tree with “true” combinators, and the generation process is iterated.
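A minimal sketch of this interaction, again in Python and again ours rather than the system's actual machinery: typename leaves are rendered as highlighted placeholders, and refining a leaf simply instantiates it with a chosen combinator before regenerating.
\begin{verbatim}
# Illustrative sketch (ours) of authoring with a heterogeneous tree.
LEX = {"fra": "France", "ger": "Germany", "par": "Paris", "ham": "Hamburg"}

def render(tree):
    name, args = (tree[0], tree[1:]) if isinstance(tree, tuple) else (tree, ())
    if name == "adr":
        return f"{render(args[0])}, {render(args[1])}"
    if name == "cap":
        return f"the capital of {render(args[0])}"
    if name in ("city", "country"):   # a typename leaf, still to be refined
        return name.upper()           # rendered as a highlighted placeholder
    return LEX[name]

def refine(tree, typename, choice):
    """Instantiate a typename leaf with the combinator chosen by the author."""
    if tree == typename:
        return choice
    if isinstance(tree, tuple):
        return (tree[0],) + tuple(refine(a, typename, choice) for a in tree[1:])
    return tree

doc = ("cap", "country")
print(render(doc))                    # the capital of COUNTRY
doc = refine(doc, "country", "fra")   # the author picks fra from the menu
print(render(doc))                    # the capital of France
\end{verbatim}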
4 An Application to Pharmaceutical Documents
4.1 Corpus selection
Our corpus consists of drug notices extracted from “Le VIDAL® de la Famille” (Éditions du Vidal, 1998), a practical book about health made for the general public. Le VIDAL® includes a collection of notices for around 5,500 drugs available in France. As the publisher, OVP-Éditions du Vidal has taken care of homogeneity across the notices, reformatting and reformulating source information. The main sources are the New Drug Authorizations (Autorisation de Mise sur le Marché), regulatory documents written by pharmaceutical laboratories and approved by the legal authorities.
Relative to multilingual document authoring, this corpus has three features which we considered highly desirable: (1) it deals with a restricted semantic domain (for which various terminological resources are available), (2) it is a homogeneous collection of documents all complying to the same division into sections and sub-sections, (3) there is a strong trend in international bodies such as the EEC towards making drug package notices (which are similar to VIDAL notices) available in multilingual versions strictly aligned on a common model.\footnote{A similar but less extended corpus was previously built by the third author as the basis for a prototype of multilingual document authoring using GF.}
4.2 Corpus analysis
An analysis of a large collection of notices from Le VIDAL® de la famille, describing different drugs from different laboratories, was conducted in order to identify:
- the structure of a notice,
- the semantic dependencies between elements in the structure.
For this task, all the meta-information available is useful, in particular the explanations provided by Le VIDAL® de la famille and the help of a domain expert. The corpus study was a necessary preliminary task before modeling the notices in the IG formalism presented in section 3.
4.2.1 Structure
Notices from Le VIDAL® are all built on the same model, including a title (the name of the drug, plus some general information about it), followed by sections describing the main characteristics of the drug: general description, composition, indications, contraindications, warnings, drug interactions, pregnancy and breast-feeding, dosage and administration, possible side effects. This initial knowledge about the semantic content of the document is captured with a first, simple context-free rule, such as:
\begin{verbatim}
vidalNotice(T,D,C,I,CI,W,DI,PaBF,DaA,PSI)::notice -->
    T::title,
    D::description,
    C::composition,
    I::indications,
    CI::contraindications,
    W::warnings,
    DI::drugsInteraction,
    PaBF::pregnancyAndBreastFeeding,
    DaA::dosageAndAdmin,
    PSI::possibleSideEffects.
\end{verbatim}
Each section is associated with context-free rules that describe its internal structure:
\begin{verbatim}
vidalTitle(N,AP,...)::title -->
    N::nameOfDrug,
    AP::activePrinciples, ...

vidalDescription(N,PF,P,...)::description -->
    ['DESCRIPTION'],
    N::nameOfDrug,
    PF::pharmaceutForm,
    P::package, ...

vidalDosageAndAdmin(D,A)::dosageAndAdmin -->
    ['DOSE AND ADMINISTRATION'],
    D::dosage,
    A::administration.

tablet::pharmaceutForm --> ['tablet'].
eyeDrops::pharmaceutForm --> ['eye drops'].
\end{verbatim}
At this point, we allow parallel realizations for French and English. So, in addition to the English grammar given above, we have the French grammar:
\begin{verbatim}
vidalTitle(N,AP,...)::title -->
    N::nameOfDrug,
    AP::activePrinciples, ...

vidalDescription(N,PF,P,...)::description -->
    ['PRÉSENTATION'],
    N::nameOfDrug,
    PF::pharmaceutForm,
    P::package, ...

vidalDosageAndAdmin(D,A)::dosageAndAdmin -->
    ['MODE D’EMPLOI ET POSOLOGIE'],
    D::dosage,
    A::administration.

tablet::pharmaceutForm --> ['comprimé'].
eyeDrops::pharmaceutForm --> ['collyre'].
\end{verbatim}
This first grammar is fully equivalent to an XML DTD that describes the structure of a notice, though it distinguishes finer-grained units than traditional DTDs tend to do.
4.2.2 Modeling dependencies
But IG goes further than XML DTDs with regard to the semantic control of documents: it enables us to express dependencies which may arise in different parts of a document, including long-distance dependencies, through the use of the dependent types presented in section 3.
Identification of the dependencies to be modeled was done in a second stage of the corpus study. For example, we identified dependencies between:
- the pharmaceutical form of a given drug (concept pharmaceutForm) and its packaging (concept package),
- particular ingredients given in the section composition and warning instructions given in the section warnings,
- categories of patients the drug is intended for in the section description and posology indicated for each category in the section indications.
To illustrate the modeling task, we now give more details about one particular dependency identified. Intuitively, it appears that there is a strong link between the pharmaceutical form of a given drug and the way it should be administered: tablets are swallowed, eye drops are put in the eyes, powder is diluted in water, etc. In our first grammar, the pharmaceutical form concept appears in the description section, while the administration way is described in the dosage and administration section. The use of dependent types made it possible to link these sections together according to the pharmaceutical form. The parts of the (English) grammar involved become:
\begin{verbatim}
vidalNotice(T,D,C,I,CI,W,DI,PaBF,DaA,PSI)::notice -->
    T::title,
    D::description(PF),
    C::composition,
    I::indications,
    CI::contraindications,
    W::warnings,
    DI::drugsInteraction,
    PaBF::pregnancyAndBreastFeeding,
    DaA::dosageAndAdmin(PF),
    PSI::possibleSideEffects.
\end{verbatim}
\begin{verbatim}
vidalDescription(N,PF,P,...)::description(PF) -->
    ['DESCRIPTION'],
    N::nameOfDrug,
    PF::pharmaceutForm,
    P::package, ...

vidalDosageAndAdmin(D,A)::dosageAndAdmin(PF) -->
    ['DOSE AND ADMINISTRATION'],
    D::dosage,
    A::administration(PF).
\end{verbatim}
The administration section should now be described according to the pharmaceutical form it presupposes, several administration ways being compatible with each form:
\begin{verbatim}
tabletsAdmin1::administration(tablet) -->
    ['Swallow the tablets without crunching them.'].
tabletsAdmin2::administration(tablet) -->
    ['Let the tablets melt under the tongue.'].
eyeDropsAdmin::administration(eyeDrops) -->
    ['Pull the lower eyelid down while looking up and squeeze the eye drops,
     so that they fall between the eyelid and the eyeball.'].
\end{verbatim}
The consequence of such modeling is better control of the semantic content of the document in the process of being authored: once the user chooses tablet as the pharmaceutical form in the description section, his choice is restricted to the two concepts tabletsAdmin1 and tabletsAdmin2 in the administration section. If he chooses eye drops as the pharmaceutical form, there is no choice left in the administration section: the text fragment corresponding to the concept eyeDropsAdmin will be generated automatically in the document.
This example illustrates how dependencies are propagated into the macro-structure, but they can be propagated into the micro-structure as well: for example, in the description section, we can express that the packaging of the drugs is also dependent on their form: tablets are packaged in boxes, eye drops in flasks, powder in packets, etc.:
\begin{verbatim}
vidalDescription(N,PF,P,...)::description(PF) -->
    ['DESCRIPTION'],
    N::nameOfDrug,
    PF::pharmaceutForm,
    P::package(PF), ...

box::package(tablet) --> ['Box'].
flask::package(eyeDrops) --> ['Flask'].
\end{verbatim}
This example shows that the granularity degree of the linguistic realization can vary from full text segments (administration ways) to single words (forms like tablet, eye drops, powder, etc.). This is highly related to the reusability of the concept: references to specific forms may appear in many parts of the document, while the administration ways are more or less frozen segments.\footnote{For a discussion of some of the issues regarding the use of templates in natural language generation systems, see (Reiter, 1995).}
Figure 1: A stage in the authoring of a notice, with French text shown.
4.3 An Example
Screen copies of the IG interface during the authoring process of a VIDAL notice are given in Figures 1 and 2. Figure 1 represents the notice authored in French at a given stage. The fields still to be refined by the user appear in dark. When the author wants to refine a given field, a pulldown menu presenting the choices for this field appears on the screen. Here, the author chooses to refine the field \textit{avaler} in the administration (mode d'emploi et posologie) section: the corresponding menu proposes the list of administration ways corresponding to the pharmaceutical form tablet he has chosen before. Figure 2 shows the parallel notice in English, but one step further, i.e. once he has selected the administration way.
5 Conclusion
XML-based authoring tools are more and more widely used in the business community for supporting the production of technical documentation, controlling its quality and improving its reusability. In this paper, we have stressed the connections between these practices and current research in natural language generation and authoring. We have described a formalism which removes some of the limitations of DTDs when used for the production of multilingual texts and presented its application to a certain domain of pharmaceutical documents.
Acknowledgements Thanks to Jean-Pierre Chanod, Marie-Hélène Corréard, Sylvain Pogodalla and Arne Ranta for important contributions, discussions and comments.
References
OVP Éditions du Vidal, editor. 1998. Le VIDAL de la famille. HACHETTE.
W3C. 1999b. XSL Transformations (XSLT), November. W3C recommendation.
Grouping and joining transformations in the data extraction process
Marcin Gorawski*, Paweł Marks
Institute of Computer Science, Silesian University of Technology,
Akademicka 16, 44-100 Gliwice, Poland
Abstract
In this paper we present a method of describing ETL (Extraction, Transformation and Loading) processes using graphs. We focus on implementation aspects such as the division of a whole process into threads, communication and data exchange between threads, and deadlock prevention. Methods of processing large data sets with insufficient memory resources are also presented, using the joining and grouping nodes as examples. The efficiency of our solution is compared with that of OS-level virtual memory in a few tests, whose results are presented and discussed.
1. Introduction
Nowadays data warehouses gather tens of gigabytes of data. The data, before being loaded into the warehouse, is often read from many different sources. These sources can differ in terms of data format, so proper data transformations have to be applied to make the data uniformly formatted. In consecutive steps the data set is filtered, grouped, joined, aggregated and finally loaded into a destination. The destination can be one or more warehouse tables. The whole process of reading, transforming and loading data is called the data extraction (ETL) process.
The transformations used in the ETL process differ in complexity. A few of them are simple (e.g. filtration, projection), whereas others are very long lasting and require a lot of operational memory (e.g. grouping, joining). However, the common feature of the transformations is that each one contains at least one input and an output. This allows us to describe the extraction process using a graph, whose nodes correspond to objects performing some operations on tuples, and whose edges define data flow paths.
Most commercial tools, like Oracle WB, do not consider the internal structure of transformations and the graph architecture of ETL processes. Exceptions are the research works [1,2], where the authors describe the ETL tool ARKTOS (ARKTOS II). It can (graphically) model and execute practical ETL scenarios, providing primitive expressions that bring control over the typical tasks using a declarative language. Work [3] presents advanced research on prototypes containing the AJAX data cleaning tool.
*Corresponding author: e-mail address: Marcin.Gorawski@polsl.pl
To optimize the ETL process, a dedicated extraction application is often designed, adjusted to the requirements of a particular data warehouse system. Based on the authors’ experiences [4,5], a decision was made to build a developmental ETL environment using JavaBeans components. A similar approach was proposed in the meantime in work [6]. A J2EE architecture with the ETL and ETLLet container was presented there, providing efficient ways of executing, controlling and monitoring ETL process tasks for the continuous data propagation case.
Further speeding up of the ETL process forced us to give up the JavaBeans platform. The ETL-DR environment [7] is a successor to ETL/JB and DR/JB [8]. It is a set of Java object classes, used by a designer to build extraction applications. These are analogous to the JavaBeans components in the DR/JB environment. However, object properties are saved in an external configuration file, which is read by an environment manager object. This relieves us from recompiling the application each time the extraction parameters change. In comparison to ETL/JB and DR/JB we significantly improved the processing efficiency and complexity of the most important transformations: grouping and joining. The possibility of storing data on a disk was added for cases when the data set requires much more memory than is available.
In the following sections we present in detail a method of describing ETL processes using graphs and we show how this description influences the implementation. The problems resulting from the graph usage are also discussed and the methods of data processing using insufficient memory resources are presented.
2. Extraction graph
Operations performed during the extraction process can be divided into three groups:
- reading source data,
- data transformations,
- writing data to a destination.

Nodes belonging to the above-mentioned operation groups are, respectively: extractors (E), transformations (T) and inserters (I). From the graph point of view, extractors have only outputs, transformations have both inputs and outputs, whereas inserters contain inputs only. By connecting inputs to outputs we create a connection net that defines data flow paths (Fig. 1). The data flow inside a node is possible in one direction only, from the inputs to the outputs; the opposite direction is forbidden. It is also assumed that the connection net does not contain closed loops, which means there is no possibility of reaching the same graph node twice while traversing along any path of the graph. Such a net of nodes and connections is a directed acyclic graph (DAG).
3. ETL-DR data extraction environment
ETL-DR is our research environment designed in Java. It uses the extraction graph idea presented above to describe extraction processes. During processing, each graph node is associated with a thread that is an instance of a transformation, an extractor, or an inserter.
Available components are:
1. Extractors
- FileExtractor (FE) – reads tuples from a source file,
- DBExtractor (DE) – reads tuples from a database,
2. Transformations
- AggregationTransformation (AgT) – aggregates a specified attribute,
- FilterTransformation (FiT) – filters the stream of tuples,
- FunctionTransformation (FuT) – user-definable tuple transformation,
- GeneratorTransformation (GeT) – generates ID for each tuple,
- GroupTransformation (GrT) – grouping,
- JoinTransformation (JoT) – joining,
- MergeTransformation (MeT) – merges two streams of tuples,
- ProjectionTransformation (PrT) – projection,
- UnionTransformation (UnT) – union,
3. Inserters
- FileInserter (FI) – writes tuples to a destination file,
- DBInserter (DI) – writes tuples to a database table via JDBC interface,
- OracleDBInserter (ODI) – writes tuples to a database using Oracle specific SQL*Loader,
4. Specials
- VMQueue (VMQ) – FIFO queue which stores data on a disk.
Most of the components process data on-the-fly, which means each tuple just received is transformed or analyzed independently and there is no need to gather a whole data set. The exceptions are: joining node JoT, grouping node GrT and VMQ queue.
3.1. Implementation of graph nodes interconnections
In order to facilitate analysis of the interconnections between the graph nodes, we have to describe the structure of inputs and outputs of the ETL-DR extraction graph nodes. Each node has a unique ID. Each node input contains the ID of a source node assigned by the graph designer, and an automatically assigned number of an output channel of the source node. A node output is a multichannel FIFO buffer with the number of channels equal to the number of inputs connected to the node (Fig. 2). When a node produces output tuples, it puts them into its output, where they are grouped into tuple packets. The upper limit of the packet size is defined by the designer. Packets are gathered in queues, separately for each output channel. The queue size is also limited to avoid unnecessary memory consumption.
Fig. 2. Nodes interconnection on the implementation level. Data produced by the node 123 are stored in a multichannel output buffer. Source of the node 124 is defined as a node with ID = 123 and logical channel number = 1
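A minimal sketch of such a multichannel bounded output buffer, in Python rather than the actual ETL-DR Java classes (the class and method names below are our own): each channel is an independent FIFO whose bounded size is what eventually halts a producer whose consumer stops reading.
```
# Minimal illustrative sketch (ours, not the ETL-DR implementation).
import queue

class MultiChannelOutput:
    def __init__(self, n_channels, max_packets_per_channel):
        self.channels = [queue.Queue(maxsize=max_packets_per_channel)
                         for _ in range(n_channels)]

    def put_packet(self, packet):
        # The producing node writes each packet to every channel,
        # i.e. one copy per connected consumer.
        for ch in self.channels:
            ch.put(packet)        # blocks when this consumer is lagging

    def get_packet(self, channel):
        # Each consumer reads only from the channel assigned to it.
        return self.channels[channel].get()

out = MultiChannelOutput(n_channels=2, max_packets_per_channel=8)
out.put_packet([("emp1", "2004-01-02", 5)])   # a packet is a list of tuples
print(out.get_packet(0))
```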
3.2. Data exchange between nodes and a risk of deadlock
Let us analyze a case of processing performed by a part of the graph presented in Fig. 3a. The function node FuT(11) produces tuples with attributes (eID, date, transactionsPerDay), and the grouping node GrT(12) computes an average number of transactions for each employee. This is similar to the SQL query below:
```
SELECT eID, AVG(transactionsPerDay) AS avgTPD
FROM GrT_FuT
GROUP BY eID
```
The joining node JoT(13) performs an action defined by the following SQL query:
```
SELECT s1.eID, s1.date, s1.transactionsPerDay, s2.avgTPD
FROM JoT_FuT s1, JoT_GrT s2
WHERE s1.eID = s2.eID
```
Such simple operations as grouping and joining are dangerous because they can be a cause of deadlock. This is a result of the method of transferring data between node threads.
The joining node works as follows: it receives tuples from the slave input and puts them into a temporary buffer, then it receives tuples from the primary input. Each tuple from the primary input is checked to see whether it can be joined with tuples in the temporary buffer according to the specified join condition. In the presented example, the slave input is the one connected to the grouping node, as it is very likely that after grouping the size of the data set will decrease and a smaller number of tuples will have to be kept in memory. Tuples generated by the function node are simultaneously gathered in both output channels of the node, for the nodes JoT(13) and GrT(12). The grouping node aggregates data all the time, but the joining node waits for the grouped data first, and so it does not yet read anything from the function node. After exceeding the limit of the output queue size, the function node is halted until the queue size decreases below the specified level. This way a deadlock occurs:
- the node FuT(11) waits until the node JoT(13) starts reading data from it,
- the node GrT(12) waits for the data from the node FuT(11),
- the node JoT(13) waits for the data from the node GrT(12).

To eliminate the reason for the deadlock, we have to make sure that the data from the function node FuT(11) are fetched continuously without exceeding the queue size limit. To do this we created a special VMQueue component. This is a FIFO queue with the ability to store data on a disk. It reads tuples from its input, no matter whether they can be handled further or not. If tuples are fetched from the VMQ node continuously, it does nothing more than transfer data from the input to the output. Otherwise, it writes tuples to the disk in order to avoid overfilling the output queue of its source node. Later, when the VMQueue destination continues processing, the tuples are read from the disk and sent to the queue output. Inserting a VMQueue node between FuT(11) and JoT(13) avoids the deadlock (Fig. 3b).
3.3. Formal definition of the deadlock prone graph nodes subset
A deadlock may occur if two or more data flow paths that split in one node of the graph meet again in another node. In other words, a given node $X$ is connected with some of its direct or indirect source nodes by two or more paths. This lets us conclude that node $X$ must have more than one input.
Let us denote the set of source nodes of the node $X$ as $\text{SourceNodes}(X)$, and the set of source nodes of the i-th input of $X$ as $\text{InputSourceNodes}(X,i)$. We can define:
- $\text{InputSourceNodes}(X,i) = \text{SourceNodes}(X.\text{in}[i].\text{sourceID}) \cup \{X.\text{in}[i].\text{sourceID}\}$
- $\text{SourceNodes}(X) = \phi$ if $X$ is an extractor,
- $\text{SourceNodes}(X) = \bigcup_{i=1}^{n} \text{InputSourceNodes}(X,i)$ if $X$ is a transformation or an inserter
- $\text{CommonNodes}(X,i,j) = \text{InputSourceNodes}(X,i) \cap \text{InputSourceNodes}(X,j)$
- $\text{LastNode}(N) = \{X \in N: \text{SourceNodes}(X) = N \setminus \{X\}\}$
If for each node $X$ of an extraction graph, which is not an extractor, the following condition is satisfied:
$$\forall_{i \in [1,n]} \forall_{j \in [1,n]} \ i \neq j \Rightarrow \text{CommonNodes}(X,i,j) = \phi$$
then deadlock cannot occur. Otherwise deadlock is possible and we should insert a VMQueue component into the graph to avoid the application hanging. Insertion of a VMQueue node makes sense only behind the nodes from the $\text{LastNode}(\text{CommonNodes}(X,i,j))$ set, that is, the last nodes of the common part of the two data flow paths. In the example presented in the previous section it was the FuT(11) node (Fig. 3b).
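The condition above is easy to check mechanically. The following Python sketch (ours, with a hypothetical graph encoding as a dictionary of input source lists) flags the nodes for which two inputs share a common direct or indirect source, reproducing the Fig. 3a example.
```
# Illustrative sketch (ours) of the deadlock-risk test.
# graph = {node_id: [source_id, ...]}; extractors have an empty list.

def source_nodes(graph, x, _cache=None):
    """All direct and indirect sources of node x."""
    if _cache is None:
        _cache = {}
    if x not in _cache:
        acc = set()
        for s in graph[x]:
            acc.add(s)
            acc |= source_nodes(graph, s, _cache)
        _cache[x] = acc
    return _cache[x]

def deadlock_prone_nodes(graph):
    """Nodes two of whose inputs share a common direct or indirect source."""
    risky = {}
    for x, inputs in graph.items():
        per_input = [source_nodes(graph, s) | {s} for s in inputs]
        for i in range(len(per_input)):
            for j in range(i + 1, len(per_input)):
                common = per_input[i] & per_input[j]
                if common:
                    risky.setdefault(x, set()).update(common)
    return risky

# The example from Fig. 3a: FuT(11) feeds both GrT(12) and JoT(13),
# and GrT(12) also feeds JoT(13).
graph = {10: [], 11: [10], 12: [11], 13: [11, 12]}
print(deadlock_prone_nodes(graph))   # {13: {10, 11}}: insert a VMQueue after node 11
```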
3.4. Temporary data buffering on disk
During an extraction process a large number of tuples is processed. When they need to be buffered, there is the problem of selecting the right place for the buffer. Keeping them in memory is impossible because the size of the data set is usually much bigger than that of the available RAM. The only solution is storing the data on a disk. Two approaches are possible: virtual memory supported by the operating system, or storage implemented on the application level in the algorithms used in transformation nodes. In our ETL-DR environment the nodes using application-level virtual memory are: VMQueue, GroupTransformation and JoinTransformation.
**VMQueue Component.** As presented in Sect. 3.2, the VMQueue component is a FIFO queue able to store the buffered data on a disk. Its task is to ensure the data is read from its source as it comes, even if the node receiving data from VMQueue does not work. In such a case tuples are stored in a disk file rather than put into the output buffer. Later, when possible, tuples are read from the file and passed on. Because of the sequential access to the disk file, this solution is more efficient than the OS-level virtual memory.
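The following Python fragment sketches the idea of such a spill-to-disk FIFO (our simplification, not the actual VMQueue class): packets stay in a small in-memory queue while the consumer keeps up, and are otherwise appended sequentially to a temporary file and read back, still in FIFO order.
```
# Rough illustrative sketch (ours) of a spill-to-disk FIFO.
import collections, pickle, tempfile

class SpillFifo:
    def __init__(self, memory_limit=1000):
        self.mem = collections.deque()
        self.memory_limit = memory_limit
        self.spill = tempfile.TemporaryFile()
        self.spilled = 0          # packets currently on disk
        self.read_pos = 0

    def put(self, packet):
        if self.spilled or len(self.mem) >= self.memory_limit:
            pickle.dump(packet, self.spill)     # sequential append
            self.spilled += 1
        else:
            self.mem.append(packet)

    def get(self):
        if self.mem:
            return self.mem.popleft()
        if self.spilled:
            self.spill.seek(self.read_pos)
            packet = pickle.load(self.spill)
            self.read_pos = self.spill.tell()
            self.spill.seek(0, 2)               # back to the end for appends
            self.spilled -= 1
            return packet
        raise IndexError("queue is empty")

q = SpillFifo(memory_limit=2)
for i in range(5):
    q.put(("tuple", i))
print([q.get() for _ in range(5)])   # FIFO order preserved across the spill
```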
**GroupTransformation Component.** A grouping component can work in one of three modes:
1. input tuples are sorted according to the grouping attribute values,
2. tuples are not sorted, grouping in memory,
3. tuples are not sorted, external grouping.
procedure Group()
begin
  List fileList;
  while Input.hasTuples() do
    Tuple T = Input.getTuple();
    if not HM.contains(Attributes(T)) then
      HM.put(Attributes(T), Aggregates(T));
    end if
    Aggregates AG = HM.get(Attributes(T));
    AG.doAggregate(T);
    if HM.size() > SIZELIMIT then
      fileList.add(WriteToFile(HM));
      HM.clear();
    end if
  end while
  AggrSource as = getSource(fileList, HM);
  Aggregates AG = null;
  while as.hasNext() do
    if AG == null then
      AG = as.next();
    else
      Aggregates newAG = as.next();
      if (newAG.attr == AG.attr) then
        AG.aggregate(newAG);
      else
        ProduceOutputTuple(AG);
        AG = newAG;
      end if
    end if
  end while
  ProduceOutputTuple(AG);
end
Fig. 4. External grouping algorithm
In case 1) aggregates are computed as the tuples come, and the memory usage level is very low. In case 2) each new combination of the grouping attributes is saved in a hash table together with the associated aggregates. If such a combination appears again during processing, it is located and the aggregates are updated. The number of entries in the hash table at the end of the processing equals the number of tuples produced. Both cases 1) and 2) use only RAM.
Case 3) combines features of the processing used in cases 1) and 2). First, the data set is gathered in the hash table and aggregates are computed (Fig. 4). When the number of entries in the table exceeds the specified limit, the content of the table is written to an external file in sorted order according to the grouping attribute values. Next, the hash table is cleared and the processing is continued. Such a cycle repeats until the input tuple stream ends. Then the data integration process is run. Tuples are read from the previously created files and the final aggregate values are computed. This is very similar to the processing of case 1), with the exception of getting data from the external files instead of the node input.
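For illustration, the same spill-and-merge idea can be sketched compactly in Python (ours; the real component writes the sorted runs to disk files, while this sketch keeps them as in-memory lists to stay short):
```
# Illustrative sketch (ours) of external grouping: hash-aggregate, spill to
# sorted runs when the table grows too large, then merge the runs.
import heapq
from collections import defaultdict

def external_group(tuples, size_limit=2):
    runs, table = [], defaultdict(lambda: [0.0, 0])    # key -> [sum, count]
    def spill():
        runs.append(sorted((k, s, c) for k, (s, c) in table.items()))
        table.clear()
    for key, value in tuples:
        agg = table[key]
        agg[0] += value
        agg[1] += 1
        if len(table) > size_limit:
            spill()
    spill()
    # Integration phase: merge the sorted runs and combine equal keys.
    current = None
    for k, s, c in heapq.merge(*runs):
        if current and current[0] == k:
            current[1] += s
            current[2] += c
        else:
            if current:
                yield tuple(current)
            current = [k, s, c]
    if current:
        yield tuple(current)

data = [("e1", 5), ("e2", 3), ("e1", 2), ("e3", 4), ("e2", 1), ("e1", 7)]
print(list(external_group(data)))
# [('e1', 14.0, 3), ('e2', 4.0, 2), ('e3', 4.0, 1)]
```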
**JoinTransformation Component.** A joining node works based on the algorithm presented in Fig. 5. The first step is collecting tuples from the slave input. They can be loaded into a temporary associative array or written to a temporary disk file. Before writing to the file, tuples are sorted according to the joining attributes using an external version of the standard Merge-Sort algorithm: tuples are gathered in memory; if the limit of tuples in memory is exceeded, they are sorted and written to a file. Next portions of the data set are treated in the same way. Finally, tuples from all the generated sorted files are integrated into one big sorted file. Sorting lets us locate any tuple in the external file in $\log(n)$ time using the binary search algorithm.
procedure Join()
begin
  while Input(2).hasTuples() do
    Tuple T = Input(2).getTuple();
    HM.put(Attributes(T), T);
  end while
  while Input(1).hasTuples() do
    Tuple T = Input(1).getTuple();
    Tuple[] TT = HM.get(Attributes(T));
    for each JT in TT do
      Tuple O = Join(T, JT);
      ProduceOutputTuple(O);
    end for
  end while
end
Fig. 5. General joining algorithm
An additional indexing structure located in memory also decreases the searching time by reducing the number of accesses to the file. The index holds the locations of the accessed tuples, which enables narrowing down the search range when accessing consecutive tuples.
The second phase is the same, no matter if the temporary buffer is located in memory or on a disk. Only the implementation of the HM (HashMap) object changes in the algorithm presented in Fig. 5. Each tuple from the primary input is checked if it can be joined with tuples in the temporary buffer according to the specified join condition.
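A much-simplified Python sketch of this second phase with an on-disk slave buffer (ours; the real component sorts externally and keeps only a sparse index of visited locations, whereas this sketch indexes every record):
```
# Illustrative sketch (ours) of joining against a sorted on-disk slave buffer.
import bisect, pickle, tempfile

def build_slave_file(slave_tuples, key):
    buf = tempfile.TemporaryFile()
    index = []                                   # list of (key, file offset)
    for t in sorted(slave_tuples, key=key):
        index.append((key(t), buf.tell()))
        pickle.dump(t, buf)
    return buf, index

def join(primary_tuples, slave_file, index, key):
    keys = [k for k, _ in index]
    for p in primary_tuples:
        k = key(p)
        i = bisect.bisect_left(keys, k)          # log(n) lookup
        while i < len(index) and keys[i] == k:   # all slave tuples with this key
            slave_file.seek(index[i][1])
            yield p + pickle.load(slave_file)
            i += 1

slave = [("e2", 7.5), ("e1", 3.0), ("e3", 1.0)]               # (eID, avgTPD)
primary = [("e1", "2004-01-02", 4), ("e3", "2004-01-02", 2)]  # (eID, date, tpd)
f, idx = build_slave_file(slave, key=lambda t: t[0])
print(list(join(primary, f, idx, key=lambda t: t[0])))
# [('e1', '2004-01-02', 4, 'e1', 3.0), ('e3', '2004-01-02', 2, 'e3', 1.0)]
```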
4. External processing tests
For the tests we used data files that forced the Java Virtual Machine to use much more memory than was physically available. Tests were performed on a computer with an AMD Athlon 2000 processor running Windows XP Professional. During the tests we varied the size of the available RAM.
4.1. Grouping test
Grouping was tested using the extraction graph containing an extractor $FE$, a grouping node $GrT$ and an inserter $I$ (Fig. 6). The extractor reads a tuple stream with attributes $(eID, date, value)$, in which, for each employee $eID$ and for each day of his work, the transaction values were saved. The number of employee transactions per day varied from 1 to 20. The processing can be described by the SQL query:
```
SELECT eID, date, sum(value) as sumVal, count(*) as trCount
FROM GrT_FE
GROUP BY eID, date
```
The processing time was measured depending on the number of input tuples (10, 15, 20 and 25 million) and the type of processing. The result chart contains the total processing time (TT) and the moment of loading the first tuple into the destination, the so-called Critical Time (CT). During all the tests using external grouping (Ext), the JVM was assigned only 100MB of RAM. During grouping in memory, we examined two cases: the JVM memory was set with some margin (Normal) and with the minimal possible amount of RAM (Hard) that guaranteed successful completion of the task. The obtained results are shown in Fig. 7.
The test computer contained 384MB of physical RAM; for the JVM using virtual memory and for 10m and 15m tuples, it was assigned 450MB and 550MB respectively during the Normal test, and 300MB and 425MB during the Hard test.
As can be seen, the most efficient processing method is definitely the one using application-level data storing. Its processing time changes from 129 sec. to 322 sec. depending on the number of input tuples. The use of OS-level virtual memory causes the whole process to take much more time. Only for 10 million input tuples and strongly limited JVM memory, which resulted in a very low usage of the virtual memory, did we obtain results slightly better than for built-in data storing. However, for 15 million tuples the processing takes an extremely long time (the line going rapidly outside the chart). The main reasons for such low efficiency of the virtual memory are the random memory accesses caused by updating aggregates in temporary buffers and by the Java garbage collector. Application-level storing accesses data files sequentially, and as a result this method is much more efficient.
We did not finish the OS-level virtual memory tests for 20m and 25m tuples because they needed an extremely long time (several hours). Our goal was only to show that application-level buffering can be much better than OS-level buffering.
4.2. Joining test
The joining test is based on the extraction graph shown in Fig. 8. The extractors read the same number of tuples: FE1 reads tuples with attributes (eID, date, depID), describing where each employee was working each day, whereas FE2 reads the set of tuples produced in the previous test (eID, date, sumVal, trCount). The joining attributes are (eID, date) and processing times were measured for the following numbers of input tuples from each extractor: 10, 15 and 20 million.
During the test the computer was equipped with 256MB RAM; the JVM was assigned 100MB RAM when joining with data storing on disk was used, and 400MB and 600MB respectively for 10 and 15 million tuples when using virtual memory. In this test we can still observe the benefits of using application-level data storing, but the difference in comparison to OS virtual memory is not as big as in the grouping test, because this time the external file is accessed randomly, not sequentially. The obtained results are presented in Fig. 9.

**Fig. 8. Joining test extraction graph**
**Fig. 9. Processing times measured during joining test. TT is a total processing time, whereas CT denotes a moment when the first output tuple is produced (Critical Time)**
4.3. Real extraction test
We also performed a real extraction test. The ETL process generates a star schema data warehouse containing a fact table and two dimensions. In this test, both grouping and joining nodes appear in the extraction graph and they run concurrently: when the grouping node \(\text{GrT}(2)\) produces output tuples, the joining node \(\text{JoT}(30)\) puts them into its internal buffer (memory or a disk file). This test lets us examine the behavior of the buffering techniques when more than one node requires a lot of memory resources.
The size of the input data set was 300MB. JVM required 475MB RAM to complete the task using virtual memory, and only 100MB when using application-level data storing. The computer had 256MB RAM. The ETL process using data storing took only 26 minutes, whereas when using the virtual memory, it needed 3 hours to complete only 10% of the whole task (the whole processing could take even 30 hours). Continuation of the test did not make sense, because we could already conclude that in this case the efficiency of the virtual memory was extremely low.

**Fig. 10.** The main part of the extraction graph generating star schema data warehouse. Path \(\text{FE}(1)-\text{FI}(32)\) generates fact table, whereas path \(\text{FE}(1)-\text{FI}(5)\) is responsible for producing one of the dimension tables. Extractor \(\text{FE}(1)\) reads 300MB data file
In our opinion the obtained results stem from the random accesses to the VM swap file. When many nodes keep a lot of data in virtual memory and access it randomly (because each node runs as an independent thread), the swap file has to be read and written very often at various locations. This does not take place during application-level buffering, where the external files are accessed sequentially whenever possible (depending on the algorithm that is used).
5. Conclusions
This paper presents a concept of describing extraction processes using graphs, and the meaning of the graph nodes and edges in the extraction process. We focused on a few implementation aspects such as the interconnections between nodes and the possibility of deadlock occurrence when particular graph structures are used. A method of avoiding deadlocks was also presented and described by mathematical formulas. Next we introduced algorithms for external data queuing, grouping and joining.
Although not tested separately in this paper, the presented data queuing is an efficient method of avoiding the deadlocks that may occur in our ETL-DR extraction environment due to the data transferring method we used. The grouping transformation can process data sets of any size; the only limitation is the available temporary disk space. It makes use of additional tuple stream properties, such as a sorted order according to the values of the grouping attributes. The joining transformation can also process an unlimited number of tuples. It can store its slave-input tuples in disk files in sorted order and then access any tuple in a file in \(\log(n)\) time.
Our research shows that the virtual memory offered by operating systems is not always an efficient solution. Dedicated algorithms for storing data in external files, working at the application level, are more efficient due to the elimination of random disk accesses, which are the weakest point of OS virtual memory. This weakness is especially pronounced in Java applications. A typical JVM prefers allocating new memory blocks to freeing unnecessary ones as soon as possible. This may be very efficient when only physical RAM is in use, but when the JVM enters the virtual memory area and the garbage collector tries to recover unused memory blocks from it, the efficiency of the whole application drops dramatically.
References
Lean Enterprise Architecture Method for Value Chain Based Development in Public Sector
Eero Hosiaisluoma¹, Katja Penttinen², Juha Mustonen³ and Jukka Heikkilä¹
¹University of Turku, Finland
²University of Jyväskylä, Jyväskylä, Finland
³Gofore, Helsinki
eero.hosiaisluoma@gmail.com
katja.i.penttinen@jyu.fi
juha.mustonen@gofore.com
jups@utu.fi
Abstract: Enterprise architecture (EA) was first developed in the late 1980s and has since been promoted as a method that can conquer the problems of aligning business and information technology (IT). EA has been widely used in the private and public sectors. The Finnish government's EA work started over ten years ago by customising the EA framework, method and governance model. After new versions, the Finnish EA method (based on TOGAF) is still considered too rigid, its full-scale use requires quite a lot of resources, and in some cases the benefits of EA are unclear. In this design science research, we propose intertwining EA into an organisation's development work. We call this method Lean EA development (LEAD), since it combines a value chain based operating model with an agile EA practice, which focuses on the operational level, linking EA directly to business demands and adding customer value. The LEAD can be adapted to a target area of any size, such as a business domain, a whole organisation or a wider ecosystem. In practice, the LEAD operating model organises capabilities around the value delivery chain. The revised architecture practice co-operates with other functions when developing services from idea to production. Collaboration with different stakeholders and architecture visualisation are the most important principles. Usage of an EA visualisation tool is a key enabling component of the LEAD, as every development target is visualised and published continuously. This approach is operated with lean management based on agile principles and establishes EA as an important practice in the overall development. We have adopted and used the LEAD in a public organisation, one of the largest cities in Finland, and use this case as an illustration of how the concept can be used.
Keywords: enterprise architecture, lean enterprise architecture, lean management, agile development, value chain, visualisation, public sector
1. Introduction
The ongoing digital transformation requires many different policy areas to be considered simultaneously in an integrated approach (Tan and Pan 2003, Janowski 2015). The need for an integrated approach forces governments to overcome silo-based structures and to promote cooperation at the different levels of government in order to develop a whole-of-government strategy (OECD 2017). The significance of information systems and technology is increasing, and yet the need for alignment between business and information technology remains a major challenge, especially in the public sector (Rusu and Jonathan 2017). To lead the digital transformation in the desired direction, organisations need a holistic view of their information assets. This kind of view can be provided by the appropriate use of EA.
The Finnish government's EA work started over ten years ago by customising the EA framework, method and governance model. The main goal of the work was to improve the interoperability of public organisations' operations and services. The Finnish Act on Information Management Governance in Public Administration was passed in 2011 (Finlex 2011). The act makes the use of EA mandatory, for example, in central government offices, courts of law and cities when they conduct tasks laid down for them by law. According to the law, public sector organisations in Finland should use the national EA method and its guidelines in EA planning and management. Although a mandatory approach has been successful at improving Europe-wide interoperability (Gatti et al 2017), the Finnish national EA method, based on TOGAF (2018), is considered too rigid and difficult to understand. Implementation of the EA method is challenging and its full-scale use requires a lot of resources. Practical step-by-step guidelines for fast and light adoption are missing. This has resulted in a situation where the adoption of the EA method has become a problem in practice. These are the main factors that motivate our study. The research question is: What kind of EA approach would provide better solutions for practice?
First, as a solution to the above problems, we emphasise EA's role in organisational development. Our LEAD method combines a value chain based operating model with an agile EA practice, which focuses on the operational level, linking EA directly to the business demands and adding customer value by keeping the end-user services in focus. Second, we illustrate LEAD's practical applicability at one of the largest cities in Finland and use the case study as an example of how the LEAD is used in a real-life setting. This places our case study in a constructive research approach, where the LEAD is developed in parallel with the practitioners and the experiences are reflected upon recent developments of EA and development methods.
We first connect our study to the existing knowledge base by introducing the research background. Second, we briefly describe the research method. Third, the LEAD is presented. Fourth, we illustrate findings from a case study to show how the Lean EA can be applied in practice. The results form a basis for further research on Lean EA and contribute to the discussion about the need to reconceptualize the current EA methods. Finally, we conclude the work.
2. Research background
In the public sector, policymakers initiate EA programs to improve interoperability, enhance productivity and improve the standard of service systems (Hjort-Madsen 2006, Janssen et al. 2013, Lemmetti and Pekkola 2014). Despite the investments in EA, many government EA programs have performed poorly (e.g. Penttinen et al. 2018) and have failed expectations (Hope et al. 2017, Ojo et al. 2012, Saha 2009). The incapability of EA to fulfil its promises, and the challenges of EA, have been researched to some extent (Banaeianjahromi and Smolander 2016, Bui and Levy 2017, Dang and Pekkola 2017, Hauder et al. 2013, Hjort-Madsen 2006, Isomäki and Liimatainen 2008, Kaisler and Armour 2017). Recently, the need for EA to reinvent itself has been discussed (Janssen 2012, Lapalme et al. 2016).
We propose reinventing the EA method, using the principles of Lean and agile, to be able to answer the requirements of current society and markets. Below, we briefly describe the background of Lean management and agile EA. Lean has its origins in the car manufacturing company Toyota and aims at minimising waste in the production process by focusing only on things that add value (Holweg 2007, Womack et al. 1990). Later the idea has been adapted to lean management in other areas of business (Womack and Roos 1997). The application of lean thinking in information management means adding value to information by how it is organised, visualised, and represented. This enables information to flow to the end-user through the processes of exchange, sharing and collaboration (Hicks 2007).
Agile EA is based on agile software development, which can be seen as a reaction to traditional methods that emphasise rationalised, engineering-based approaches (Dybå 2000, Nerur 2005). In traditional approaches, it is claimed that problems can be fully specified and that there is an optimal and predictable solution for them (Dybå and Dingsøyr 2008). This is similar to traditional EA methods, because it leads to excessive planning and modelling. In contrast, the agile development methods address the challenge of an unpredictable world by relying on people and their creativity instead of planned processes (Dybå 2000, Nerur 2005). There is only a limited body of knowledge on the use of agile in EA. For example, Rouhani et al. (2008) presented an agile EA framework, the use of agile principles in EA has been studied with a survey of EA professionals (Hauder et al. 2014), and the use of agile methods in creating EA deliverables and the collaboration between architects and software developers has been studied with interviews (Hanschke et al. 2015). In the public sector context, Gill et al. (2014) have used an agile EA framework to develop and implement an adaptive cloud technology-enabled EA. Typically, agile EA uses principles of agile methods such as iterations and lean thinking, and the key to successful agile EA is realising that humans are an integral part of the system, not merely users of the system (Bloomberg 2013). These kinds of new EA practices require revising EA, but due to the limited research and experience on the subject, further evidence is needed.
3. Research method
EA is a socio-technical artifact (Mumford and Weir 1979, Drechsler 2015) and it should be studied as such. Adopting EA in an organisation is a change intervention, which intersects both social and technical aspects in an organisation, and successful implementation is a process of change that requires responding to social interdependencies (Janssen 2012). The change agents are typically enterprise architects, who are managing the whole with its interdependencies to other activities and processes of the organisation. As we have participated in the development of a revised EA method to offer a more flexible solution to connect EA work to the overall
development in a real-life setting of a Finnish municipality, the researchers cannot be considered outside observers, but are essential subjects interacting with the organisation under study. The initial version of the LEAD was co-created in a real setting. We used a pragmatic constructive approach in our research, and two authors worked in the case organisation as reflective practitioners (Heiskanen and Newman 1997), who cooperate with academics and real-life practitioners to develop new, better suited, socio-technical artifacts and EA methods.
4. Lean enterprise architecture development
The Lean Enterprise Architecture development concept is a combination of the Lean management and agile EA practices. LEAD is a pragmatic enterprise development method that is based on collaboration and visualisation. Those are supported by the practical Lean EA Framework (LEAF), which is enabled by an EA visualisation tool. The LEAF guides the operational development, in which the EA visualisation tool plays an important role in practice. All the development targets are visualised on demand.
4.1 The lean enterprise architecture framework
The basic structure of LEAF is illustrated in the abstraction below (Figure 1). The LEAF is a concrete solution to implement the LEAD in practice.
Figure 1: The basic structure of the Lean EA Framework (LEAF)
The LEAD is a customer-centric view of the enterprise that integrates the organisation's capabilities around the value stream model, instead of the function- or process-based approaches. This can be achieved with the traditional business architecture approach, but LEAD is not limited to the business perspective only. Our approach is based on several well-known methods for practical improvement of activities and processes (for example Scrum, Kanban and the Service-Driven Approach) that are adapted on demand. Hence, LEAD is a practical concept. The following examples are from the LEAD version for IT management development. For example, the Idea to Production value stream (Figure 2) describes an operating model in IT management. Regardless, LEAD can be used as a whole-of-operations model for an organisation. Then also the business capabilities (such as strategy planning and business planning) and additional value streams (such as goal to strategy and strategy to portfolio) are visible.
Figure 2: The idea to production value stream
Comparing the LEAD to the traditional EA development approaches, several differences can be identified. The traditional EA development process is time consuming and resource intensive, whereas the LEAD approach is lightweight and agile. Traditional methods consist of several sequential phases (Figure 3), which is an appropriate process mostly in the case of large organisations.
Figure 3: Traditional EA process
The LEAD is suitable also for small and medium enterprises, as the LEAD can be adopted and executed with fewer resources and less time. LEAD itself is an agile process without a big up-front planning cycle. The main difference between the LEAD and traditional EA development approaches is that LEAD is tightly integrated into holistic enterprise development, not solely taking the architecture and operational viewpoints. The LEAD approach is focused on delivering the business outcomes that are based on the strategic goals and the customer-driven demands. The LEAD also incrementally produces new data into the Architecture Landscape as new development targets flow through the value delivery chain (Figure 4).
Figure 4: LEAD approach
The LEAD architecture landscape is a combination of the traditional EA's current "as-is" and target "to-be" architectures. The architecture landscape provides the current situation of the organisation's business services, processes and applications, as well as planned new services. With the LEAD, distinct as-is and to-be architectures are not maintained as separate entities on a large scale, which makes the architecture modelling work faster and more efficient. Only some specific development targets can be visualised in distinct as-is and to-be views if needed. Methods that can be applied are e.g. the Open Group Architecture Framework (TOGAF 2018) and the ArchiMate modelling language (Open Group 2016). In the LEAD, the EA content is produced and delivered continuously into the Architecture Landscape. In addition, portfolios and roadmaps can be adjusted continuously according to changing conditions. This approach enables faster development cycles, shorter time-to-market, and better reactivity and productivity compared to conventional approaches (Figure 5).

**Figure 5:** Conventional EA work compared to LEAD approach
The applied Lean and agile principles encourage avoiding unnecessary big up-front design and redundant planning activities. However, without planning and governance the organisation's Architecture Landscape would, according to the law of entropy, drift into chaos. It is reasonable to inspect new requirements against the existing Architecture Landscape, putting more emphasis on managing the alignment with the mission, vision, strategy and architectural principles, while learning from experience as new services are deployed. As a consequence, the value-adding services are not developed in a vacuum, but into the existing organisational environment, and they are reflected upon the actors with feedback to the developers. It is also rational to manage the overall Architecture Landscape with an appropriate visualisation tool, in which all the EA content (e.g. the organisation's services, processes and applications) is coherently kept in an organised manner. The LEAF provides the content metamodel and placeholders for the most typical elements of the EA content. This is where the LEAD operating model plays a role.
4.2 The operating model
Traditionally, the EA adoption has required changes in current operating models regarding IT/IS planning and implementation, project and program management, and IT management (Seppänen et al. 2018). In contrast, LEAD is based on the principle that existing operating models and capabilities are utilised as much as possible. The operating model may vary in different cases, but the main principle is to guarantee the right capabilities on demand in the Idea to Production value stream. This makes it easier to understand the development processes. The LEAD Operating Model at high level is presented in Figure 6.

**Figure 6:** The LEAD Operating Model at high-level
Following the Idea to Production value stream (Figure 2 above and Figure 6 below), at the design phase a small multidisciplinary demand management team takes care of handling all the incoming demands. The demand management capability is the core capability of the LEAD operating model, and the team co-operates in order to find the most suitable solution for the customer's demand. The team consists of specialists e.g. from customer relationship management, operational development and enterprise architecture management, and agile methods and tools (e.g. Scrum and Kanban) are utilised in the process. The development phase contains build or buy activities managed by the project management office (PMO), and detailed service- or business design
activities when necessary. The operations phase covers production capabilities managed by the service management office (SMO). In addition, the idea to production value stream supports portfolio management, thus portfolios for ideas, development, IT services and applications are maintained within LEAF.
The aim is to keep the operating model light and to be able to change it when needed. For these purposes, a Lean Manager is responsible for the management of the whole value chain, making sure continuous improvements are made to the processes. In the LEAD, the architect's role is to participate in the development processes and give support when needed.
5. Findings from the case
The LEAD has been adopted at the city of Vantaa. Vantaa is the fourth biggest city in Finland, with over 220 000 inhabitants, and is located in the southern part of Finland. The organisation responsible for the LEAD was the IT department of the city. There were several reasons behind the decision to start planning and implementing a new way of organising information and communication technology (ICT) development, such as the lack of overall insight and visibility of the overall enterprise development, and the siloed organisation culture in which the EA did not have a productive or cooperative role in supporting the organisation's ICT projects. The approach was too IT-centric instead of being customer-centric, and the overall organisation structure also contained some overlapping functions related to EA.
The main challenge was the poor effectiveness of the EA framework in use, which was an adapted version of the Finnish national EA method. The city of Vantaa had decided to use this EA method in 2011, at the same time as the Finnish act was passed, making the use of EA mandatory. Unfortunately, the method was not considered suitable for the organisation's needs and caused a situation where stakeholders were not satisfied with the role of EA. This led the management to question the usefulness of the EA practice.
By the end of 2016, the chief information officer (CIO) requested to completely redesign the IT development process. It was decided that the new development model should be more customer-centric, lean and agile, with a practical and cooperative architecture function in it. The essential target with the new development model was to improve the demand management at the interface with internal customers, and to produce fast and justifiable solution proposals for these customers of the IT department. In the beginning, the new development model was described as the Lean EA, aiming to implement a lean and agile way to produce EA. For the effective use of EA, tool support was needed and was operationalised. The tool (QPR Enterprise Architect) is used for the EA visualisation and is provided free of charge by the Finnish government for public sector organisations. It includes the free use of the open publishing portal for the EA descriptions, which is provided by the Finnish Population Register Centre as a software as a service model.
In practice, the first version of the new development method was designed by the architecture team. The idea was to establish a co-creation model in which most of the department’s specialists could participate. All the phases were carefully designed by the team, and based on the plans following steps of the LEAD were introduced:
- The new role “Lean Manager” takes the leading position in the overall development, being the only new role at the organisation.
- New IT capability, multidisciplinary “Demand Management” virtual team is established for handling incoming business requests. Internally the team is called “Solution office”.
- The new lean and agile practices, methods and tools are introduced and adopted in the overall development, such as web-enabled collaboration tools, backlogs, Kanbans, and daily scrums.
- LEAD is deployed, Demand Management as its core capability, and architects are involved.
- The EA team is reorganised. New chief architect is appointed outside the organisation and the new governance model is activated.
- The use of the EA visualisation tool is agreed with the CIO and taken into use immediately.
- The LEAD Operating Model is introduced, and it defines organisational actors, processes and information.
- The LEAF is introduced.
- PMO and SMO are integrated into LEAD.
- LEAD performance metrics are introduced and implemented.
Within a one-year development period (Figure 7), the IT development work was reorganised, covering all the main phases of the LEAD method: Design, Development and Operations. There are parts of the model that are not fully functional yet, but LEAD's lean and agile nature enables continuous improvements. For example, the Demand Management capability was changed, because it was considered too heavy: the number of participants was reduced, and the design meeting was shortened and divided into general and technical parts.
To support open government principles and to provide knowledge for other public sector organisations, the LEAD framework was partly published in the Finnish Population Register Centre’s EA modelling service. Since the use of the Finnish national EA method has been challenging in many organisations, there has been a lot of interest in the LEAD work in Vantaa and there are other cities starting the adoption of the LEAD.
Figure 7: The development process in Vantaa
The learnings from this first LEAD adoption can be used to help in the beginning of the work. It is very important to have the support of the management from the start. In Vantaa, the CIO's strong support has enabled the acceptance of the new model, and the change resistance has been moderate. Some functions were at first not light, fast and small enough in Vantaa and are to be redesigned. In the next implementation projects this should be noted, to be able to keep the work as lean and agile as possible. There have been changes in the language used about the development work: with the LEAD, the key is to talk about adding customer value, and the need to talk about EA itself is reduced. This is an advantage, since after the long and challenging implementation period of the mandatory EA, the word EA has become almost a swear word in the Finnish public sector (Penttinen et al. 2018). When the EA is an integral part of all the development work, the disconnectedness of the EA work is diminished.
6. Conclusions
The use of the national EA method has been mandated by law in Finland since 2011. In practice, the implementation and use of the method have been challenging (Seppänen et al. 2018, Penttinen et al. 2018), as the method is considered rigid and hard to understand, and its implementation and use require a lot of resources. There has been a need for an EA method that would allow easy implementation, be flexible and be intertwined with the existing development processes. Also, the customer viewpoint has been missing. In this study, we proposed the LEAD as a solution to these practical problems in the use of EA in the public sector.
Using a pragmatist-constructive approach, we studied the change as reflective practitioners. We first presented the problem area by arguing that traditional EA methods have not been able to fulfil the expectations placed on them, and proposed the LEAD, which brings a value chain based operating model and agile practices into EA. The following four lessons were learned. First, LEAD is a co-creation project with enterprise architects, developers, users and management. The use of LEAD requires iterations and adaptation to the context of use. Second, we demonstrated the use of LEAD by describing the findings from the case. From over a year of practical experience of using LEAD, we can argue that the concept seems to be working. The key is to combine management, the value delivery chain and the Architecture Landscape to achieve the targeted value to the customer by utilising agile development. Third, the adoption of LEAD in Vantaa required substantial changes in service development and in the organisation of the IT department. To succeed in making the changes and in continued use, strong top management interest and support are required. Fourth, the LEAD in Vantaa was initiated as an IT department project, but further development is aspired to and needed to make it suitable for more extensive development settings, such as accelerating digitalisation at the organisational level. Nevertheless, more experience, preferably from other cases, is needed. To be able to evaluate thoroughly, more research is needed. The evaluation, comparing the objectives of the LEAD to the actual observed results from the use of the artifact in the demonstration, is a subject for future research.
References
Tiki Calculations started out as an advanced rating system, and has since evolved into a powerful general purpose calculations system.
**Advanced Ratings & Calculation Syntax**
**Overview**
Use this page to configure a "rating" system to evaluate tracker items or wiki pages. Introduced in Tiki5, the advanced rating feature allows for more control over the aggregation of scores. You will also see in this documentation page how to use the calculations syntax, which also applies to the Mathematical Calculation Tracker Field.
**To access**
Click the **Ratings** icon 🛡️ on the Admin Panel
or
**Note**
Tiki currently supports sorting through advanced rating in:
- Articles
- Wiki
- Comments
- See also Mathematical Calculation Tracker Field

Advanced Ratings page
Setting | Description | Default
--- | --- | ---
**Global configuration** | |
Advanced Rating | Enable the internal rating system, used for calculating values from trackers, articles, or other features. |
**Rating recalculation mode:** Determines when and how rating aggregates are recalculated:
* **On vote** (default): indicates that the score for the object should be recalculated every time a vote is performed. This option is suitable for sites with lower volumes and relatively simple calculation methods when ratings are used.
* **Random on load**: will cause a few scores to be calculated on page load on a random basis (odds and count can be configured to adapt to site load). This option is suitable for calculation rules involving time that must be recalculated even if no new votes occurred.
* **Random on vote** is similar to random on load, but will recalculate multiple scores (not necessarily including the current object) when a vote is performed. It is suitable for similar situations. The best option will depend on site load.
* **Periodic**: is the best option for heavy load sites, making sure all calculations are done outside the web requests. A cron job must be set-up manually by the site's administrator. A sample script is available at the end of this page.
Depending on the site load, some options may be better than others; on large volume sites, we recommend crontab. The **Recalculate on vote** mode may be inaccurate if the rating calculation depends on time.
**Before any attempt to re-index the object:** Ties into the Search and List from Unified Index and updates the calculation at index-time.
Recalculation odds (1 in X):
Recalculation count:
**Wiki**
Simple wiki ratings: Enable a simple rating bar at the top of each wiki page.
Wiki rating options: List of options for the simple wiki ratings. 1,2,3,4,5
**Articles**
Enable a simple rating bar at the top of each article page.
User ratings on articles
Article rating options:
The feature must first be enabled through this same administration panel. Along with the feature, a few options are available. Among them, the score recalculation period must be defined. These are the available options:
On vote (default) indicates that the score for the object should be recalculated every time a vote is performed. This option is suitable for sites with lower volumes and relatively simple calculation methods when ratings are used.
Random on load will cause a few scores to be calculated on page load on a random basis (odds and count can be configured to adapt to site load). This option is suitable for calculation rules involving time that must be recalculated even if no new votes occurred.
Random on vote is similar to random on load, but will recalculate multiple scores (not necessarily including the current object) when a vote is performed. It is suitable for similar situations. The best option will depend on site load.
Periodic is the best option for heavy load sites, making sure all calculations are done outside the web requests. A cron job must be set-up manually by the site's administrator. A sample script is available at the end of this page.
For the random options, the odds of recalculating must be specified as a dice roll. For each occurrence of a recalculation, a limit to how many scores can be calculated must be specified to avoid a hang-up effect on the page load.
The value ranges for each object type can also be specified through the administration panels.
The common sort_mode parameter to lists can be used to activate sorting using advanced ratings. To do so, the sort mode must be set to adv_rating_X_asc or adv_rating_X_desc where X is the ID of the rating configuration. The default sort can also be set to advanced ratings in the administration panel where applicable.
Calculation configuration
From the administration panel, new calculations can be added. Initially, only the name is required. When created, the calculation will contain suitable default values.
For wiki pages:
Thus, visitors can provide feedback like:
- Did this page help you solve the issue?
- Was this page easy to understand?
Sorting items according to advanced rating
Note that the sort mode to use when needing to sort by advanced rating is either adv_rating_xx_asc or adv_rating_xx_desc, where xx is the ratingConfigId.
Set-up
By default, each calculated value is kept for 1 hour (3600 seconds). This limit does not apply when recalculating on vote, but is used for every other technique to avoid recalculating the same scores over and over again.
The calculation is defined as a small piece of code, similar to functional languages, which is very close to mathematical representations. Creating custom formulas is expected to require some mathematical skills. However, this documentation should provide examples for most frequent cases.
The editor in the administration panel performs extensive validation and will make it impossible to save the formula unless it can be evaluated. Checks are performed for:
- Syntax errors
- Unknown functions
- Missing arguments
- Invalid argument values
- Unknown input variables
**Default formula**
```
(rating-average (object type object-id))
```
It can be altered to limit the vote consideration to a limited time span, 30 days for example.
**Recent votes only**
```
(rating-average
(object type object-id)
(range (mul 3600 24 30))
)
```
In the language, spaces do not matter. Only the parentheses indicate structure. `rating-average` is a function that fetches the ratings for a given object. `type` and `object-id` are standard variables fed in when calculating a rating. `object` and `range` are configuration options of the function.
`mul` is a mathematical function. `(mul 3600 24 30)` is equivalent to 3600*24*30.
The functions can be combined in various ways. For example, we could calculate a score that considers the votes from the past month, but gives extra emphasis on the recent ones.
**Combined vote duration**
```
(add
  (rating-average (object type object-id) (range (mul 3600 24 30)))
  (rating-average (object type object-id) (range (mul 3600 24 7)))
)
```
Even though the votes are 1-5, the final score can be on an entirely different scale. The language is also extensible if the calculation needs to be combined with other factors or weight. See Rating Language.
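As a minimal sketch of such rescaling (an illustration, not an example from the original page), assuming the `rating-average` and `mul` behaviour described above, the 1-5 vote average could be mapped onto a 0-100 scale:

```
(mul 20 (rating-average (object type object-id)))
```

A vote average of 4.5 would then yield a score of 90.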
All available options are documented in the following section.
**Syntax**
**General Reference**
**Sample and use case**
### add (Sum)
Performs a simple sum accepting multiple input
**Examples**
```
(add 3 4)
-> 7
(add (add 3 4) 5)
-> 12
(add 3 4 5)
-> 12
(add 4 0.5)
-> 4.5
```
### sub (Subtract)
Performs a simple subtraction accepting multiple input values.
**Examples**
```
(sub 3 4)
-> -1
(sub (sub 3 4) 5)
-> -6
(sub 3 4 5)
-> -6
(sub 4 0.5)
-> 3.5
```
**div (Divide)**
Performs a simple division accepting multiple input values.
**Examples**
- `(div 3 4)`
-> 0.75
- `(div (mul 3 10) 5)`
-> 6
- `(div 30 5 3)`
-> 2
- `(div 4 0.5)`
-> 8
**mul (Multiply)**
Performs a simple multiplication accepting multiple input values.
**Examples**
- `(mul 3 4)`
-> 12
- `(mul (mul 3 4) 5)`
-> 60
- `(mul 3 4 5)`
-> 60
- `(mul 4 0.5)`
-> 2
**and / or**
**and**
Ensures all elements evaluate to true.
**Examples**
- `(and 3 2 1 2 3)`
-> 1
- `(and 2 3 0 2)`
-> 0
**or**
Ensures that at least one element evaluates to true. Elements are evaluated sequentially until a true element is found. Others are left unevaluated.
**Examples**
- `(or 3 2 1 2 3)`
-> 1
- `(or 2 3 0 2)`
-> 1
- `(or 0 0)`
-> 0
avg
Calculates the average of multiple values. All entries in the list will be flattened if arrays are present.
Examples
(avg 1 2 3)
-> 2
... given list contains [1, 2, 3]
(avg list)
-> 2
clean
Clean accents and special characters from a string of text, replacing the accented characters with the non-accented equivalent where possible. Added in Tiki 18.2
Examples
(clean (str foó barça caña))
-> "foo barca cana"
coalesce
Returns the first non-empty value from the list.
Examples
(coalesce 3 4)
-> 3
(coalesce (sub 3 3) 5)
-> 5
(coalesce 0 0 (str) -10)
-> -10
(coalesce 0 0 0 0)
-> 0
comment
Any comment block is stripped from the formula at parse-time
Examples
(mul
1
2
(comment Simple enough?))
-> 2
concat
Concatenates a string of text. (new in Tiki12)
Note: The quoted string syntax was included in Tiki13.
**Examples**
```lisp
(concat
(str $)
1234
)
```
-> "$1234"
```lisp
(concat
14
(str %)
)
```
-> "14%"
```lisp
(concat 14 "%")
```
-> "14%"
**contains**
Searches for a value within a vector of values (new in Tiki15.0, backported to Tiki14.2 and Tiki12.5).
**Examples**
```lisp
(contains 4 (1,2) )
```
-> 0
```lisp
(contains 4 (2,4) )
```
-> 1
Note that if you are using an argument value in here, you would need to eval and then put brackets around to make it work.
```lisp
(contains 307 (eval(args.values_by_permname.version)))
```
-> 0
```lisp
(contains 305 (eval(args.values_by_permname.version)))
```
-> 1
```lisp
(contains 30 (eval(args.values_by_permname.version)))
```
-> 0
**count**
Returns the total number of entries within an array passed as argument.
**Examples**
```lisp
(count results)
```
-> 5
**Currency**
New in Tiki21: https://sourceforge.net/p/tikiwiki/code/71175
Allows converting a calculation into a currency field. Expects 3 arguments. The first is the calculation of the amount. The second is the source currency, i.e. which currency the amount is in. The third is the currency field.
**Examples**
(currency (cal-of-the-amount) (sourceCurrency) currencyFieldPermName)
If you use the Canadian daily exchange table from the Bank of Canada you can retrieve sourceCurrency string from currencyFieldPermName (ie: FXUSD, CAD) using this formula:
**Get the last 3 char of a value in a tracker**
(substring currencyFieldPermName -3)
**Examples**
(currency (cal-of-the-amount) (substring currencyFieldPermName -3) currencyFieldPermName)
---
**currency-convert**
New in Tiki22: [https://sourceforge.net/p/tikiwiki/code/76059](https://sourceforge.net/p/tikiwiki/code/76059)
This adds 2 things:
1. a currency-convert math function, which allows converting the output of specific math functions to CAD when only CAD values are needed.
2. using a default tracker for exchange rates when parsing a math function output like 123USD without a currency field context. The tracker must contain 'exchange rate' in its name in order to be found. This will be the case until we have a proper concept of system trackers.
---
**date**
New in Tiki14.1: [https://sourceforge.net/p/tikiwiki/code/55964](https://sourceforge.net/p/tikiwiki/code/55964)
Takes two optional arguments, format and timestamp, and uses the PHP date function; see the PHP documentation for reference. Format defaults to "U", which is the Unix timestamp, and the timestamp defaults to "now".
**Date Examples**
(date)
> Returns "1438358437" currently
(date (str Y-m-d H:i:s))
> Returns "2015-07-31 17:00:43" currently
(date (str r) theTimeAndTheDate)
> Returns "Tue, 16 Jun 2015 09:23:09 +0100" for instance, where theTimeAndTheDate is the permanent name of a tracker field in the same tracker.
---
**equals**
Compares multiple values.
Examples
(equals 2 (add 1 1) (sub 4 2))
-> 1 (equivalent of 2 == 1+1 && 2 == 4-2)
(equals (add 1 1) 3)
-> 0
for-each
For a list of value pairs, such as the output of `split-list`, evaluates a formula for each set of values, returns the list of results.
Within the formula, variables coming from the list will be used first. Fallback will be on the other variables available in the execution context.
As of Tiki18, list items can be the output of ItemsList tracker field where individual formula fields are the linked fields from the other tracker. See example below.
Examples
... given items contains [{a: 1, b: 2, c: 3}, {a: 2, b: 3, c: 4}]
(for-each
(list items)
(formula (mul a b c)))
-> [6, 24]
... given items contains [{a: 1, b: 2, c: 3}, {a: 2, b: 3, c: 4}]
... and d contains 10
(for-each
(list items)
(formula (mul c d)))
-> [30, 40]
... given trackerDetails is an ItemsList field pointing to another tracker with a field detailsAmount
... and particular tracker item has 2 linked items with detailsAmount = 30 and 40
(add
(for-each
(list trackerDetails)
(formula (concat detailsAmount))))
-> 70
This formula sums the detailsAmount column from the other tracker for all items linked from this tracker's item.
hash
Generates a hash based on multiple values. Used primarily to generate aggregate hashes in the PluginActivityStream. Note that because it is a hash, the exact value coming out does not matter. Only that given the same parameter, it will produce the same value.
Examples
(hash 1)
-> [sha1("1")]
(hash 1 2 3 4)
if
Conditionally evaluates a branch.
Examples
(if (equals 2 2) 42 -1)
-> 42
(if (equals 2 1) 42 -1)
-> -1
IsEmpty
New in Tiki14: https://sourceforge.net/p/tikiwiki/code/53588
Examples
(IsEmpty 1)
-> 0
(IsEmpty 0)
-> 1
You can also use a tracker field permaname as the true or false value, or as the returned value from the tracker item.
Here we combine if, equals and IsEmpty to create a different output for each case; otherwise the default output (0 or 1) will be returned.
(if (equals 1 (IsEmpty permaname)) (str empty) (str notempty))
if the field has no value (1) -> empty
if the field has a value (0) -> notempty
IsEmpty may also be used to test if an array is empty.
However, in some cases you may get an error. coalesce seems to work better for testing and displaying tracker field values.
less / more
less-than
New in Tiki14.1 and Tiki12.5: Checks whether the first value is less than the second.
Examples
(less-than 3 (add 2 2) )
-> 1
(less-than (add 2 2) 3)
-> 0
more-than
New in Tiki14.1 and Tiki12.5: Checks whether the first value is more than the second.
**Examples**
```
(more-than 3 (add 2 2))
-> 0
(more-than (add 2 2) 3)
-> 1
```
lower
Converts string to lower case.
**Examples**
```
(lower Foo)
-> foo
```
map
Generates a map (or dictionary).
**Examples**
```
(map
(key1 1)
(key2 2)
(key3 (str value3))
)
-> {"key1": 1, "key2": 2, "key3": "value3"}
```
Not
New in Tiki14: https://sourceforge.net/p/tikiwiki/code/53590
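The source page gives no example for Not; assuming it behaves as a standard logical negation over the same truth values used by `and`/`or` above (a hypothetical illustration, not confirmed by the page), it would look like:

```
(Not 0)
-> 1
(Not 3)
-> 0
```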
number-format
Format a number with grouped thousands. See PHP's number_format function for exact arguments. New in Tiki18.
**Examples**
```
(number-format 123456.78)
-> "123,456.78"
(number-format 120.334 (str 3))
-> "120.334"
(number-format 120.334 (str 0))
-> "120"
(number-format 123456.78 (str 2) (str ,) (str ))
-> "123 456,78"
```
pad
*New in Tiki15 committed in r57094*
Takes two to four arguments, input string, output length, padding string (defaults to a space) and padding type (defaults to right)
This example pads prices with zeros in a simple products tracker for sorting and filtering:
```
(pad
productsPrice
8
(str 0)
(str left)
)
```
round
Rounds to a specific number of digits (new in Tiki12)
### Examples
```
(round 4.556234342234 2)
-> 4.56
(round 4.556234342234)
-> 5
```
str
Generates a static string; needed when the processor would otherwise attempt to process the string as a variable. Any arguments will be concatenated using spaces.
**Note:** The quoted string syntax was included in Tiki13.
### Examples
```
(str hello-world)
-> "hello-world"
(str hello world)
-> "hello world"
(str
hello
world
foobar)
-> "hello world foobar"
(str (mul 2 3) "= 6")
-> "6 = 6"
(str some text with new line~nl~)
-> "some text with new line
```
str-to-time
Parse about any English textual datetime description into a Unix timestamp. See PHP's `strtotime` function for more details on valid formats and options. New in Tiki18.
### Examples
```lisp
(str-to-time (str 2017-06-05))
-> "1496610000"
(date (str Y-m-d) (str-to-time (str +1 day) (str 2017-05-29)))
-> "2017-05-30"
```
**Numbers of days between a date in a field and now**
```lisp
(round (div (sub (str-to-time permanameDate) (date)) 86400) 0)
```
---
### str-replace
Replace substring in a string. See PHP's `str_replace` function for exact arguments. New in Tiki18.
**Examples**
```lisp
(str-replace (str foo) (str bar) (str hello foo))
-> "hello bar"
```
---
### split-list
Produces a multi-dimensional array out of a text string. Each line is expected to be an independent value, each line will be split by a separator into the specified keys.
**Examples**
```lisp
... given str contains a list of 3 comma-separated values
(split-list
(content str)
(separator ,)
(keys a b c))
-> [{a: 1, b: 2, c: 3}, {a: 2, b: 3, c: 4}]
```
---
### subtotal
Special function to aggregate data in a table. See Report formatting for more details.
---
### upper
Converts string to upper case.
**Examples**
```lisp
(upper Bar)
-> BAR
```
---
### Advanced Rating-specific Reference
#### rating-average and rating-sum
The rating functions calculate the score from the rating history table. Each rating performed on the site is kept in the database and can be used to calculate custom ratings. The various options allow adapting the score calculation to reflect what matters on the site, whether it is to support quality improvement of documentation or to rank incoming data on a feed aggregator.
- **object**, mandatory and always *(object type object-id)* in this context.
- **range**, to limit how long votes are considered. Argument is provided as a number of seconds.
- **ignore**, with *anonymous* as an argument to only consider votes from registered users.
- **keep**, to only consider one vote per visitor. Unless the option is present, all of the votes are taken into account. The option can be either *latest* or *oldest* to indicate which one to keep.
- **revote** can be specified if **keep** is specified. It indicates the time period required between votes. For example, users could be allowed to vote more than once per day, but only their latest vote each day would be considered, if revote is set to (mul 24 3600). If the user voted yesterday as well as today, both votes will be counted.
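Combining these options, a sketch of a formula that averages the last 30 days of votes from registered users only, keeping just each visitor's latest vote per day, could look like this:

```
(rating-average
  (object type object-id)
  (range (mul 3600 24 30))
  (ignore anonymous)
  (keep latest)
  (revote (mul 24 3600))
)
```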
**article-info**
Pulls information from an article to include in the calculation. The first argument must always resolve to 'article'; for any other value, the calculation will be skipped for the evaluated object, making the formula type-specific.
Available properties:
- rating, the static rating attached to the article
- view-count
- age-second
- age-hour
- age-day
- age-week
- age-month
**Examples**
(article-info type object-id rating)
(article-info (str article) 42 age-month)
**attribute**
Pulls information from the generic object attributes.
**Examples**
(attribute
(object type object-id)
(ignore)
(property tiki.proposal.accept)
)
-> [value for page in a rating calculation]
(attribute
(object (str wiki page) 14)
(property tiki.proposal.accept)
(default 0)
)
tracker-field
Pulls information from the tracker item. The field value is converted to a numeric value automatically. Zero is provided if the value is not found or not applicable.
**Examples**
```
(tracker-field
(object (str trackeritem) 42)
(field priority)
)
```
-> [value contained in the tracker item field with permanent name ''priority'' from tracker item with object Id 42]
You can pull the value of an item coming from an item link or an item dynamic list field type (not tested on item list). This implies using the permaname of the item link tracker field and the permaname of the field that contains the value (in the other tracker).
```
value from item link item
(tracker-field
(object (str trackeritem) permaname_thistrackerfield)
(field permaname_othertrackerfield)
)
```
category-present
Gives 1 point in the score for every listed category present on the object.
**Examples**
```
(category-present
(object type object-id)
(list 3 4)
)
```
-> [0, 1 or 2 - Depending on how many of categories 3 or 4 are on the object]
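Because the result is numeric, category-present can be combined with the other functions; as an illustrative sketch (not from the original page), the following adds a bonus of 2 to the average rating whenever category 3 is present on the object:

```
(add
  (rating-average (object type object-id))
  (mul 2 (category-present
    (object type object-id)
    (list 3)
  ))
)
```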
Appendix
When unified search is used, recalculation can be configured to be done during re-indexing, removing the need for this script.
**Cron job**
```php
<?php
chdir('/path/to/tikiroot');
require_once 'tiki-setup.php';
require_once 'lib/rating/ratinglib.php';
```
Sample and use case
Calculate date difference to see if tracker item is new or not
In this use case we compare a field that contains the creation date to the current date; if the difference is lower than 7 days (604800 seconds), the field will get the value "new", otherwise "notnew".
Examples
```
(if (equals (more-than 604800 (sub (add (date) 0) (add dateDeCrAtion 0))) 1) (str new) (str notnew))
```
Using Concat to create a reference made of month and string value from other fields
In this use case we concatenate the month (as a word) from a date field and a value from a text field (which can be a dropdown, etc.) and use it to create a reference for this item. The result can then be used for many things, like populating an itemList field, filtering, searching, etc.
Filling a field with value for type and month
```
(concat trackerPermanameType (str " | ") (date (str F) trackerPermanameDate))
```
For example, it will set the value of the field to: "Credit | February"
Assign a label to a range of values
In this case we combine less-than and more-than in a cascade of conditions to determine which value (a label) should be saved in the field, based on a numerical value from another existing field. For example, we have apartments and a field with their surface area, and we want to group them by size (less than 80m2, between 80m2 and 120m2, etc.).
```
(if (less-than trackerPermanameSize (add 79 0)) (str 79)
  (if (less-than trackerPermanameSize (add 119 0)) (str 80-119)
    (if (less-than trackerPermanameSize (add 159 0)) (str 120-159)
      (if (less-than trackerPermanameSize (add 199 0)) (str 160-199)
        (if (less-than trackerPermanameSize (add 239 0)) (str 200-239)
          (if (less-than trackerPermanameSize (add 279 0)) (str 240-279)
            (if (more-than trackerPermanameSize (add 279 0)) (str 280))))))))
```
If used in a search (customSearch, pluginList filter, etc.), the label must be unique; the Tiki search engine (tested with MySQL search) will consider characters like "_", ",", "+" (and possibly more) as separators and not as characters of the string. For example, if you search for "20", both "10-20" and "20-30" will be returned in the results.
To add some safety we can also first test whether there is a value in the field. Here the idea is to check whether trackerPermanamePrice contains a value (a price) and then to group the items within the same range of prices by giving them a label.
```scheme
(if (equals 1 (IsEmpty trackerPermanamePrice)) (str )
  (if (less-than trackerPermanamePrice (add 99999 0)) (str 99999)
    (if (less-than trackerPermanamePrice (add 299999 0)) (str 100000-299999)
      (if (less-than trackerPermanamePrice (add 499999 0)) (str 300000-499999)
        (if (less-than trackerPermanamePrice (add 699999 0)) (str 500000-699999)
          (if (more-than trackerPermanamePrice (add 699999 0)) (str 700000)))))))
```
Simple Wiki Ratings
Related
- Rating
- Rating Revamp
- Mathematical Calculation Tracker Field
- Grouped Data
alias
*Advanced+Rating | AdvancedRating | Advanced Ratings | AdvancedRatings*
Extending MMIL Semantic Representation: Experiments in Dialogue Systems and Semantic Annotation of Corpora
Alexandre Denis, Lina Maria Rojas Barahona, Matthieu Quignard
To cite this version:
Alexandre Denis, Lina Maria Rojas Barahona, Matthieu Quignard. Extending MMIL Semantic Representation: Experiments in Dialogue Systems and Semantic Annotation of Corpora. Fifth Joint ISO-ACL/SIGSEM Workshop on Interoperable Semantic Annotation isa-5, Jan 2010, Hong-Kong, China. hal-00481868
Abstract
The MultiModal Interface Language formalism (MMIL) is a modality-independent, high-level semantic representation language. It has been used in different projects, related to different domains, and with distinct tasks and interaction modes. MMIL is a metamodel that enables the definition of generic and domain-specific descriptors for dialogue management, offering flexibility and high reusability. This paper presents the results of our experimentation with MMIL in diverse projects, as well as the recent specifications that cover extensible thematic roles and complex linguistic phenomena.
1 Introduction
The increasing development of natural language processing (NLP) applications, many of them involving several modalities, has highlighted the importance of having an abstract representation language that facilitates the communication among the different modules within the system architecture. Intermediate representation languages, like the one presented in this paper, permit the integration of divergent resources in distributed systems as well as the representation of various levels of linguistic analysis within a single application. MMIL (MultiModal Interface Language) was created as a metamodel, that is, a model that allows developers to define their own model, and it provides elements (descriptors) to represent the form and content of linguistic resources in generic dialogue systems (Landragin et al., 2004). For instance, one can use MMIL to represent an utterance syntactically by modeling its surface form. In other cases, one might be interested in representing the semantics or in storing the referring expressions for further discourse processing. In addition, MMIL is ontology-oriented since it makes it possible to associate ontological concepts with its descriptors for the purpose of maintaining the integrity and consistency of both the dialogue and its application domain.
Therefore, MMIL is a language for representing valuable information about linguistic resources. It can be transformed, or translated, into other specific formalisms, e.g. symbolic formalisms, graphs, or domain-specific representations such as flat semantics. Throughout this document, the process of transforming MMIL into other specialized languages is called "projection". In this paper we describe the usage of MMIL as an intermediate representation for language understanding and generation within different NLP applications. However, MMIL can also be used in multimodal dialogue systems and projected into languages for emotion representation and modality synchronization in Embodied Conversational Agents.
This paper briefly introduces the MMIL language. It describes our experience in using MMIL in different projects, such as the MEDIA campaign (Bonneau-Maynard et al., 2009) and CCCP. Moreover, it presents the recent MMIL characteristics for dealing with thematic roles and complex utterances. Furthermore, it illustrates the application of MMIL in the Portmedia Project for semantic annotation.
2 MMIL Intermediate Representation Language
2.1 Background
Although a variety of languages have been proposed for multimodal dialogue systems, MMIL is an ontology-oriented approach that attempts to cover the maximum number of phenomena at several linguistic levels (from the lexical level up to pragmatics and discourse). It has been used in three European projects – MIAMM (Kumar and Romary, 2002), AMIGO [3] and OZONE (Landragin et al., 2004) – each of them having a different interaction mode and application domain (multimedia database retrieval, train reservation, and integration of heterogeneous systems). Contrary to other languages, e.g. the Multimodal Markup Language (M3L), the Multimodal Presentation Markup Language (MPML) (Prendinger et al., 2004) and the Universal Networking Language (UNL, 2000), MMIL is a metamodel that enables the definition of generic and domain-specific descriptors for dialogue management, offering flexibility in the XML syntax and high reusability (Landragin et al., 2004).
2.2 MMIL Meta-model
The MMIL meta-model allows the representation of communicative actions. A communicative action is represented as a component, a structure that gathers the communicative event and its propositional content. It is composed of two main types of entities: events, which are entities anchored in the time dimension, and participants, which are entities not bounded by time. Entities are linked together by relations and are described by sets of features (i.e. pairs of attribute-value). Components, entities, features and relations are called MMIL elements.
Every component has a unique communicative event, which describes the occurrence of a communicative action and its features, namely the time when it occurs, the speaker and the addressee. The communicative event also bears the illocutionary force, represented through the dialogueAct feature, which describes the function applied over the propositional content [4].
The propositional content is represented as a main event with its arguments, which can be either events or participants, linked to the communicative event by a relation propContent. The main event is not always present in utterances, especially in the case of performing simple communicative actions, such as Accept, Reject or Opening and Closing. Nevertheless, in utterances with a propositional content, the main event is required, even in the case of ellipsis where an elliptical event is created. In addition, there should exist a path to the main event and its arguments (the other events and participants of the propositional content).
Suppose that Jack whispers to Bill: “John ate the red apple”. In this example (Figures 1 and 2), there are two events: the communicative event of whispering, of which the agent is Jack (represented as the feature speaker) and the patient is Bill (represented as addressee), and the event of eating, whose agent is John and whose patient is the red apple, both represented as participants of the propositional content. The adjective “red” is represented by the feature modifier inside the participant apple. In this case, the type of the communicative event is Whisper, but other communicative types are possible, for example Show for a gesture or Write for a textual communication.
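To make the structure concrete, the sketch below shows roughly how such a component could be written down. The element and attribute names are illustrative assumptions, not the normative MMIL XML syntax, which is defined in the MMIL specifications.

```xml
<!-- Illustrative sketch only: tag and attribute names are assumptions,
     not the official MMIL 1.5 XML syntax. -->
<component>
  <event id="e0">                      <!-- communicative event -->
    <feature name="evtType">Whisper</feature>
    <feature name="speaker">Jack</feature>
    <feature name="addressee">Bill</feature>
    <feature name="dialogueAct">Inform</feature>
  </event>
  <event id="e1">                      <!-- main event of the propositional content -->
    <feature name="evtType">Eat</feature>
  </event>
  <participant id="p1">
    <feature name="objType">John</feature>
  </participant>
  <participant id="p2">
    <feature name="objType">Apple</feature>
    <feature name="modifier">red</feature>
  </participant>
  <relation source="e0" target="e1" type="propContent"/>
  <relation source="e1" target="p1" type="agent"/>
  <relation source="e1" target="p2" type="patient"/>
</component>
```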
As mentioned before, within the MMIL framework the agent and patient of the communicative event are not represented as participants, because participants are meant to represent the objects about which something is said and do not extend to the description of the utterance itself. Arguments of other predicates such as adjectives and adverbs are usually represented as participants that have them as modifiers. Nevertheless, nominalization or other linguistic representation of actions can be represented as events for the purpose of resolving a given task. In the large-scale lexical resource FrameNet (Baker et al., 1998) predicates are treated as frame-evoking elements calling different frames containing information about the roles of their arguments. However, sometimes in the context of a specific application, one can establish that some predicates are more important than others. Thus, one can consider only the frame evoking elements that are relevant in the context in question. MMIL permits the representation of the utterance’s information. The distinct representation of predicates is independent of the information stored, which remains available for further processing such as evoking FrameNet frames.
The MMIL meta-model describes all the possible features that events and participants might have and all their possible values. As such, it covers morphology (gender, number, etc.), semantics (objType, evtType, modifier, etc.) and pragmatics (refType, mmilId, etc.). Most of the features have a default value and can thus be omitted [5].
[3] http://www.hitech-projects.com/euprojects/amigo/
[4] Handling multifunctionality may be done by removing the functionality constraint on the dialogueAct feature.
[5] See the MMIL 1.5 specifications for further details.
2.3 Different instantiations
The MMIL meta-model describes elements and syntactically restricts the possible valid structures. However, it does not describe exactly how to represent a given utterance. The utterance representation depends on how designers intend to use the representation. This means that the level of detail may vary not only from one system to another, but also from one representation level to another within the same system. Typically, in bottom-up approaches, the system parses the utterance and builds a shallow representation, close to what is expressed explicitly. Afterwards, it builds a deep representation of the intention of the speaker. For example, utterances (1) and (2) convey the same intention with two different surface forms. Whereas the surface form has a standard representation in MMIL, the deep intentional form is left free for system designers.
How much does this room cost? (1)
I want to know the price of this room (2)
Shallow instantiation The shallow representation of utterances can be specified using general-purpose principles: in general, noun phrases are participants, verbs are events, and modifiers are features. Figures 3 and 4 show the shallow representation of the two utterances (1) and (2).
Figure 2: Graph representation of Jack whispers to Bill: “John ate the red apple”. Events are depicted as square boxes, participants as ovals, and relations as arrows from the source to the target entity.
Figure 3: Shallow representation of "How much does this room cost?"
The important aspect of the shallow instantiation is that it should keep the referring expressions. It is well known that two different ways to express the same intention may have two different effects on the context. In our case, it would be weird (if not impossible) to directly refer to the price by a pronoun in the first utterance “How much does this room cost? * Is it high?” while it would be possible to do it in “I want to know the price of this room. Is it high?”.
Deep instantiation In contrast to the shallow instantiation, the deep instantiation is a matter of choice for the system designer. It is generally advisable that two utterances that bear the same intention be represented the same way; however, it is not a requirement, considering that the MMIL representation might be projected into other frameworks, such as a logical framework, as explained in (Denis et al., 2006). For instance, a possible deep representation of sentences (1) and (2) after reference resolution could be a request with the following propositional content: “GivenAttributeOf(Room(room27))”, where Room is a participant and the id of the room is stored as one of its features.
2.4 MMIL for semantic annotation
In order to use MMIL for semantic annotation, each MMIL element must be mapped onto a given textual content. The most straightforward mapping is the following: given a textual content, linearly segmented as a list of segments $L = (S_1, \ldots, S_n)$ in which segments are sequences of words, a mapping of a component is a function from each of its elements to continuous sublists of $L$, such that the mapping of any element contains the mapping of its sub-elements. Since mappings are continuous, they can be represented on the basis of their left and right boundaries over the segmentation, annotated with the XML attributes start and end. When these boundaries are omitted for an element, it means it has the same mapping as its parent. Figure 5 illustrates the mapping over a word-level segmentation defined in a TEI (Text Encoding Initiative) compliant format (TEI-P5, 2009).
Figure 5: MMIL for TEI-compliant annotation
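As a rough illustration of the start/end anchoring mechanism described above, an annotation of “the red apple” over a word-level segmentation might look like the fragment below. The element names are assumed for illustration and are not the normative MMIL or TEI vocabulary; only the boundary-attribute mechanism is taken from the text.

```xml
<!-- Sketch with assumed element names; segmentation: w1="the" w2="red" w3="apple" -->
<participant id="p2" start="w1" end="w3">
  <feature name="objType">Apple</feature>
  <feature name="modifier" start="w2" end="w2">red</feature>
</participant>
```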
3 Recent Usage and Application Domains
MMIL has been used in several NLP applications as an interface language between modules; here we present four such uses: handling application queries in Prolog, consistency checking in Description Logics, content representation for generation, and graph rewriting for interpretation.
In the OZONE dialogue system (Landragin et al., 2004), MMIL has been used as a representation of the messages between modules in a multimodal dialogue system, including the application module, which was implemented in Prolog. Thus, the MMIL components were projected back and forth to Prolog. This was especially useful for the OZONE domain (train reservation), where one can specify some parameters for the request (Prolog constants) whereas other parameters can be left unspecified (Prolog variables). For example, the utterance “When does the train from Paris to Versailles leave?” would first be represented in MMIL and then projected into Prolog, that is, $train(paris, versailles, Departure, ...)$. The projection was two-fold. First, a pattern-matching on the input component retrieved the type of the query and built a Prolog query skeleton. Then, the
query was filled by Prolog constants when parameters were provided or by Prolog variables when they were not. Eventually, in this example, the Prolog unification provided a set of possible instantiations for variable Departure, which can be represented back into MMIL as a disjunction.
In the MEDIA project (Bonneau-Maynard et al., 2009), the focus was on using MMIL for annotating spoken language utterances in a hotel reservation domain. In contrast to OZONE’s domain, the MEDIA domain was more complex and needed to be defined in an ontology (around 220 concepts). MMIL was first projected into description logics. All the types of entities, objType and evtType (for example RESERVATION or HOTEL), all the domain-dependent features (such as ROOMTYPE) and relations were then associated with classes or properties in the ontology. It was then possible, from the projection of a component into an ABox, to eliminate the components that were built from syntactically valid but not semantically sound hypotheses (typically a prepositional phrase modifying the wrong entity). In addition, it was possible to specify relations that were lost during the parsing because of disfluencies. Afterwards, the MMIL components were projected into a sequence of semantic features (i.e. a flat list of attribute-value pairs) aligned with the utterance, as detailed in (Denis et al., 2006). The main difficulty of this projection was to flatten the component linearly to match the sequence of words. This was done thanks to the mapping defined in Section 2.4.
In the dialogue system presented in (Denis, 2008), MMIL was also used to describe the content that has to be generated by the generation module. While in the OZONE project the generation was template-based, in this dialogue system we used the GenI surface realizer (Gardent and Kow, 2007) to do the generation. Given that MMIL is primarily a representation language, it was possible to easily extract from the components the parts of the representation that had to be generated and translate them into the flat semantic formalism with variables expected by GenI.
In the latest project, the ongoing CCCP project, in which the task is to profile users in communities of practice, a deep MMIL representation is used to describe the utterance. This deep representation is produced with a graph rewriting technique. That is, first the components are projected into a generic graph representation, then a rule-based rewriting process occurs, and the resulting graphs are projected back into MMIL. From both utterances “How much does this room cost?” and “I want to know the price of this room” we are able to produce the same deep representation by matching entities or sub-structures of the input components translated as graphs and by rewriting them. In this example, “How much does X cost?” would be transformed into a request about the price of X, while the assertion “I want to know Y” would be transformed into a request about Y, resulting in the same graph representation, which in turn would be projected back into the same MMIL component.
Therefore, MMIL has been projected into different formalisms for several projects as summarized in Figure 6. This demonstrates its usability and flexibility.
4 MMIL Specification Extension
Previous versions of MMIL (Kumar and Romary, 2002) did not define thematic roles clearly. Relations among events and participants were roughly labeled as subject and object. Moreover, the representation of complex utterances, such as questions, subordination and coordination, was quite limited. Recently, the MMIL 1.5 specification has extended the metamodel with new syntactic and semantic features. This section explains these features, together with the strategy for domain-specific semantic role labeling to be implemented in the Portmedia project for the purpose of semantically annotating the MEDIA corpus.
4.1 Syntactic Features
Questions
Questions are modeled by the communicative act request and by the interrogative value in the main event’s feature clause type. Closed questions (yes-no questions) query for the truth-value of the propositional content, whilst open questions (wh-questions) query for a particular value (the target) in a propositional content. To distinguish closed and open questions, the value queried in open questions is represented by a participant that bears the interrogative form in its feature refType (see Figure 7). Similarly, interrogative adverbs are represented as open questions, but the adverb is indicated in the relation between the target and the main event (e.g. manner, cause, time, quantity and location).
Figure 7: (a) MMIL representation of the closed question: “Do you study?”, (b) MMIL representation of the open question: “What do you study?”
Subordinate Clauses
Subordinate clauses are represented by using the feature clauseForm and, in some cases, by using a relation called the dependency relation. The type of subordination, namely adverbial, relative (i.e. adjective) and noun, is defined in the feature clauseForm of the subordinate event. The “dependency” relation is usually defined between adverbial clauses and the main clause, as illustrated in Figure 8. In relative and noun clauses, on the other hand, the dependency relation is not explicitly represented since the existing relations among either subjects or objects of the main and dependent clauses are preserved, as shown in Figure 9. Note that “one” is the patient of both the verb of the subordinate clause and the verb of the main clause.
Coordination
Coordination was not well defined in previous versions of MMIL. Noun phrases were coordinated together by having sub-entities within an entity. Sentences were coordinated by using a relation, however there was no event which gathered together the coordinated entities. Thus, it was not possible to refer to the whole coordination in a referring expression. For these reasons, coordination of noun phrases, adjectives and sentences is now represented by an entity (either an event or participant) which gathers together the coordinated entities and contains information about the type of coordination via the feature coordType. The possible values for this feature are conjunctive, disjunctive, adversative, resultative and purposive, from which conjunctive is the default value. The entities coordinated are linked to the coordination entity by the member relation (Figure 10). In order to keep the order of the coordination, the attribute index can be used.
Thus, coordination entities group together events (even distinct propositional contents under the same dialogue act) and/or participants. Coordination of adjectives and adverbs, on the other hand, is represented inside a special MMIL feature called “modifGroup”, which gathers the modifiers (adjectives and adverbs)(Figure 11).
Figure 10: MMIL graph of the sentence: “I like jogging and swimming”.
4.2 Semantic Features
Thematic Roles
Thematic roles have been used to describe predicate arguments by providing them with a semantic description, which is more detailed than simply numbering the arguments. Although the proposed sets of roles vary greatly, from very specific to very general, the research community has not established clear criteria for semantic role labeling (Gildea and Jurafsky, 2002). Dowty proposes the agent and patient proto-roles (Dowty, 1991) as a solution to this problem. Broadly, he claimed that when the roles of agent and patient are used in arguments, they might have different degrees of membership, because they are not discrete categories. Despite the lack of consensus, sets of semantic roles have been defined in important domain-independent implementations such as PropBank (Palmer et al., 2005), FrameNet (Baker et al., 1998), VerbNet (Kipper, 2005) and Lyrics (Lyrics D4.2, 2007).
In MMIL, roles are represented as a relation among predicates and their arguments, which can be either events or participants, as shown in Figure 2. The general roles of agent, patient and attribute were adopted as common roles for MMIL representations, in which agent and patient refer to the agent and patient proto-roles respectively:
- Agent corresponds to the agent proto-role, it includes Experiencer and Actor.
- Patient corresponds to the patient proto-role, it also includes Theme.
- Attribute refers to properties (attributes) of an entity, for instance “he is happy”.
MMIL allows these generic roles to be extended according to the task, for instance with Location, Instrument and Topic. Moreover, the general roles can be redefined on the basis of any project requirements. Furthermore, whenever the roles for indirect objects are not explicitly defined in the domain, they can be declared as undefined through unnamed relations. This allows freedom when defining thematic roles on the basis of the specific needs of a project.
Thematic Roles in Portmedia
The thematic roles proposed in the Portmedia (PM) project are related to predicates in the domain of hotel booking and reservation. Portmedia frames (PM-frames) have been defined for the purpose of improving the relation (i.e. semantic role) labeling process in a deep MMIL instantiation. Each PM-frame defines the roles of the MMIL representation, based on verb predicates and dialogue acts. Whenever an indirect request is uttered, the deep MMIL will represent the underlying direct request. Thus, roles are not represented according to the utterance’s surface form.
To clarify this issue, let us present the canonical representation for the reserve event, which will always be represented as a request to reserve, regardless of the illocutionary force of the utterance. That is to say, it does not matter whether the speaker is politely expressing a desire to reserve or is simply giving an order. In the case of the reserve event, the underlying requested action concerns the hearer helping the speaker with the reservation task and will be internally represented as Request(Reserve(X₁,...,X₇)), where each argument has a specific role. Therefore, if the speaker has just uttered “I would like to reserve”, the deep MMIL would be: Request(Reserve(I)). The PM-frame states that the argument “I” is the proto-patient, because it represents the ultimate beneficiary after the hearer performs the action requested. The hearer, on the other hand, is the proto-agent, because he/she has the obligation to perform the action. The other arguments will have various roles, defined in the knowledge base, namely the object to reserve, the beneficiary (i.e. the person or people, not necessarily the speaker, who will use the object reserved), the period of time, the price and the location of the object reserved.
Consequently, PM-frames are made up of dialogue acts (e.g. request, inform, request acknowledgment, accept, reject), domain-specific events (e.g. reserve, inform, cancel, repeat), and semantic roles (either general or domain-specific). In addition, PM-frames contain flat semantic chunks (i.e. MEDIA annotations) and lexical units, which can be associated with either the semantic roles or the whole frame. The application of PM-frames in the deep instantiation is reflected by the representation of dialogue acts, main events and relations among predicates and their arguments. This deep MMIL will be the new structured semantics of the MEDIA Corpus.
5 Conclusion
We presented in this paper our experience of almost eight years of working with MMIL as an intermediate representation language. Moreover, we described its application in different projects including the ongoing projects CCCP and Portmedia. Each of these projects has different application domains and architectures. Furthermore, MMIL has been applied for different purposes including question answering, dialogue systems and semantic annotation of corpora. The variety of MMIL applications and the way this formalism can be easily projected into other formalisms show the extensibility and high reusability of this representation language.
References
Claire Gardent and Erik Kow. 2007. A symbolic approach to near-deterministic surface realisation using tree adjoining grammars. ACL07.
Review
- Concurrent statements
- Conditional and selected signal assignments
- Cannot be placed inside a process
- Equivalent to some process
- Assert statement
- Debugging
VHDL in Action
Chapter 3
Chapter 5, Section 5.1.4
Data Types and Attributes
This week
- Basic Data types
- Object declarations
- Data type attributes
Lexical Elements
- Elements that cannot be divided by spaces, tabs, <CR>s
- Identifiers
- Comments
- Delimiters (<=)
- Literals
Literals
- Character Literal
- '1' 'A'
- Character String Literal
- "This is a string."
- Bit String Literal
- B"00110110"
- X"36"
- B"0011_0110"
More Literals
- Abstract Literal
- Integer
- 21
- 2134.5641
- 2E10
- Real
- 2.0
- 3.2E4
Data Types
- **Scalar**
- values have only one component
- example: integers and naturals
- **Composite**
- types are complex objects
- example: arrays or records
- **Access**
- types that provide access to other types
- **File**
- types that provide access to files
Scalar Types
- Enumeration (discrete)
- Integer (discrete, numeric)
- Physical (numeric)
- Floating point or real (numeric)
Predefined Enumeration Types
- type Bit is ('0', '1');
- type Boolean is (FALSE, TRUE);
- type Character is (NUL, ..., 'A', 'B', ..., 'a', 'b', ..., DEL)
User Defined Enumeration Types
- type COLOR is (Red, Orange, Yellow, Green, Blue);
- type STATE is (S1, S2, S3);
- type STD_ULOGIC is ('U', 'X', '0', '1', 'Z', 'W', 'L', 'H', '-');
- type HORSE is (Mare, Stallion, Gelding, Colt, Filly);
**SCALAR TYPES**
**User Defined Enumeration Data Types**
- type COLOR is (Red, Orange, Yellow, Green, Blue);
**Example:**
```
signal MARKER: Color; -- in declaration
signal CAR: Color := Yellow;
-----------------------------
MARKER <= Blue; -- within the body of an architecture
```
**SCALAR TYPES**
**Subtype**
- Constrains the values of a type to be in the subtype range.
- Does not define a new type
- All subtypes of a given type have the same base type
Predefined Integer-related Types
- Integer type:
- Subset of whole numbers ... -3, -2, -1, 0, 1, 2, ...
- At minimum, the full 32-bit signed range is supported:
- -2,147,483,647 to 2,147,483,647
- Natural type:
- 0, 1, 2, 3, ...
- subtype Natural is Integer range 0 to Integer'high
- Positive type:
- 1, 2, 3, 4, ...
- subtype Positive is Integer range 1 to Integer'high
Integers (cont’d)
- Declaration of integer-related types:
- signal ANY_INT: Integer;
- variable ANY_POS1, ANY_POS2: Positive;
- signal ANY_NAT: Natural := 5;
- New types:
- type COUNTER is range 0 to 100;
- type FOOTSIZES is range 5 to 100;
VHDL is a Strongly Typed Language
- type APPLES is range 0 to 75;
- type ORANGES is range 0 to 75;
- variable A: APPLES := 25;
- variable B: ORANGES := 50;
A := B; -- Is this legal?
if (A>B) then ....; -- Is this legal?
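A short answer sketch: because APPLES and ORANGES are distinct integer types, both statements are illegal as written. An explicit type conversion makes them legal; the converted expressions below are one possible fix and are not part of the original slide.

```vhdl
-- Illegal as written: APPLES and ORANGES are distinct types.
--   A := B;
--   if (A > B) then ... ;

-- Legal with explicit type conversions:
A := APPLES(B);
if (A > APPLES(B)) then
  null;
end if;
```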
Physical Data Types
- Capture real-world measurable quantities
- currents, torque, length, etc.
- Only one pre-defined physical type: TIME:
```vhdl
type Time is range <implementation dependent>
  units
    fs;              -- femtoseconds
    ps  = 1000 fs;   -- picoseconds
    ns  = 1000 ps;   -- nanoseconds
    us  = 1000 ns;   -- microseconds
    ms  = 1000 us;   -- milliseconds
    sec = 1000 ms;   -- seconds
    min = 60 sec;    -- minutes
    hr  = 60 min;    -- hours
  end units;
```
Physical (cont’d)
- Creating your own physical type:
```vhdl
type RESISTANCE is range 0 to Integer'high
  units
    nohm;                  -- nano-ohms
    uohm   = 1000 nohm;    -- micro-ohms
    mohm   = 1000 uohm;    -- milli-ohms
    ohm    = 1000 mohm;    -- ohms
    kohm   = 1000 ohm;     -- kilo-ohms
    megohm = 1000 kohm;    -- mega-ohms
  end units;
```
Floating Point Type
- Similar to integer types:
signal A_FLOAT: Real;
- Declare customized REAL types:
type ANGLE is range -90.0 to 90.0;
type TESTSCORE is range 100.0 downto 0.0;
type PROBABILITY is range 0.0 to 1.0;
Composite Types
- An object that holds more than one value: ARRAYS
- Predefined unconstrained array types:
- type Bit_Vector is array (Natural range <>) of Bit;
- type String is array (Positive range <>) of Character;
Using these predefined types:
- signal DATA_BUS: Bit_Vector(15 downto 0);
- signal INST_OP: Bit_Vector(5 to 7);
User Defined Composites
- Constrained composites
type BCD_BUS is array (3 downto 0) of Bit;
type REGFILE is array (5 downto 0) of Bit_Vector (7 downto 0);
type ARRAY2D is array (3 downto 0, 8 downto 0) of Bit;
- Unconstrained composite
type ARRAY2DB is array (Natural range <>, Natural range <>) of Bit;
Composite Values
- Initializing and assigning values to composites:
signal DATA_BUS: Bit_Vector(31 downto 0) := B"0111_0110_0101_0100_0011_0010_0001_0000";
signal ADDR_BUS: Bit_Vector(15 downto 0) := X"7654";
ADDR_BUS <= DATA_BUS(20 downto 5);
More Composite Values
- Initializing and assigning values to array composites:
type REG_BANK is array (2 downto 0) of Bit_Vector(7 downto 0);
signal F: REG_BANK := (X"12", X"DF", X"9A");
F(2) <= B"1010_0101";
Aggregates
- A structured collection of values used to initialize a signal or variable with composite type
Example: A two-dimensional array of characters
```
        column 0   column 1
row 0     'A'        'B'
row 1     'C'        'C'
```
Type CHAR2D is array (0 to 1, 0 to 1) of Character;
Variable CM1: CHAR2D := (('A','B'), ('C','C'));            -- positional association
Variable CM2: CHAR2D := (0 => ('A','B'), 1 => ('C','C'));  -- named association
---
Composites with Enumeration Indexes
Type COLOR is (Red, Orange, Yellow, Green, Blue);
type COLOR_COUNT is array (COLOR range <>) of Natural;
Signal SUBPRISM : COLOR_COUNT (Orange to Green) := (15, 20, 30);
SUBPRISM(Orange) <= 32;
SUBPRISM(Yellow) <= 3;
**Composites Examples**
Type `COLOR` is (Red, Orange, Yellow, Green, Blue); type `ARRAY_OF_COLORS` is array (Natural range <>) of `COLOR`;
Signal `RAINBOW : ARRAY_OF_COLORS(3 to 5)`;
**Unary Operator Table**
Type `MVL4` is ('X', '0', '1', 'Z'); type `MVL4_TAB1D` is array (MVL4) of MVL4;
Constant `INV : MVL4_TAB1D` := ('X', '1', '0', 'X');
Variable `A, B : MVL4`;
`A := INV(B);`
**Binary Operator Table**
Type `MVL4_TAB2D` is array (MVL4, MVL4) of MVL4;
Constant `AND_MVL4 : MVL4_TAB2D` :=
```
(('X', '0', 'X', 'X'),
 ('0', '0', '0', '0'),
 ('X', '0', '1', 'X'),
 ('X', '0', 'X', 'X'));
```
Signal `X, Y, Z : MVL4`;
`Z <= AND_MVL4(X, Y);`
**Records**
- Composites with heterogeneous elements
- Similar to C/C++ structures
Type `MONTH_NAME` is (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC);
Type `DATE` is record
```
DAY : Positive range 1 to 31;
MONTH : MONTH_NAME;
YEAR : Natural range 0 to 9999;
```
End record;
**Records (cont’d)**
- Using the `DATE` record:
Signal `BIRTHDAY, HOLIDAY : DATE`;
`BIRTHDAY <= (16, AUG, 1943);`
HOLIDAY <= (25, DEC, 2000);
BIRTHDAY.DAY <= 16;
BIRTHDAY.MONTH <= JUN;
BIRTHDAY.YEAR <= 2000;
**Access Types**
- Also known as pointer types
- Provide a means to access dynamic objects (objects which are created and destroyed during simulation).
- Not used in this course
File Types
- Access to external data
- Uses:
- to input test vectors and to output results,
- to record messages from simulation, and
- to initialize models (example, RAM)
- Covered later
Signal Attributes in VHDL
- Example:
```vhdl
signal CLK: Bit;
...
CLK'event
```
The apostrophe is referred to as a "tick", so CLK'event is spoken as "clock tick event". Note that the signal is of type Bit, but the attribute returns type Boolean. Interpretation: a Boolean function which is TRUE when a change of value is occurring on this signal.
VHDL Signal Attributes
- In general:
```vhdl
Signal_name'attribute
```
- Can return a variety of types
- Many built-in attributes are part of the language
- You can also create your own attributes
Data Type Attribute
- 'pos
```vhdl
Type_name'pos
```
Function that returns the position number of a specific value from a list of values. The first value in any enum type has a position number of 0.
```vhdl
A := COLOR'pos(GREEN);
```
Data Type Attribute
- 'val
```vhdl
Type_name'val
```
Function that returns the value at a specific position number in a list of values (val(0) returns the first item).
```vhdl
B := COLOR'val(4);
```
Data Type Attribute
- 'left
```vhdl
Type_name'left
```
Predefined constant that specifies the left-most value in a list of values.
```vhdl
A := COLOR'left;
```
### Data Type Attribute
**'right**
Type_name'right
Predefined constant that specifies the right-most data value in a list of values.
A := COLOR'right;
---
**'low**
Type_name'low
Predefined constant that specifies the value associated with the lowest position number in a list of values.
A := COLOR'low;
---
### Subtype Examples
type COLOR is (Red, Orange, Yellow, Green, Blue);
subtype LONGWAVE is COLOR range COLOR'left to Yellow;
subtype SW is COLOR range COLOR'right downto Orange;
A := SW'right;
-- What is the value of A?
B := SW'high;
-- What is the value of B?
C := SW'val(2);
-- What is the value of C?
D := SW'pos(Green);
-- What is the value of D?
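For reference, a hedged walk-through of the answers under the declarations above (recall that 'val and 'pos of a subtype operate on the base type COLOR, whose positions start at 0 for Red):

```vhdl
-- SW is COLOR range Blue downto Orange (a descending range)
A := SW'right;      -- Orange : right-most value of the descending range
B := SW'high;       -- Blue   : value with the highest position number
C := SW'val(2);     -- Yellow : 'val uses base-type COLOR positions (0 = Red)
D := SW'pos(Green); -- 3      : position of Green in the base type COLOR
```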
### More Attributes
**'high**
Type_name'high
Predefined constant that specifies the value associated with the highest position number in a range.
A := COLOR'high;
---
**'event**
SIG_NAME'event
Function that returns a Boolean value that is TRUE if there is an event on signal SIG_NAME during the current simulation cycle.
signal CLK : Bit := '0';
---
**Determining if a rising edge in CLK has occurred in a process:**
DFFAC: process (CLK, RESET)
begin
if RESET = '1' then
Q <= '0';
elsif CLK'event and CLK = '1' then
-- if CLK just changed value, and is now '1', then
Q <= D;
end if;
end process;
**More Useful Attributes**
- **'active**
  SIG_NAME'active: Function that returns a Boolean value that is TRUE if there is a transaction on signal SIG_NAME during the current simulation cycle.
```vhdl
signal CLK : Bit := '0';
assert not CLK'active
  report "note: CLK has just been assigned";
```
- **'stable(n)**
  SIG_NAME'stable(n): A signal of type Boolean that is TRUE only if signal SIG_NAME has NOT changed value (has no events) for time n.
**Relationship Between 'stable and 'event**
(Table: values of S, S'stable, and S'event sampled at simulation times 0 through 8 ns.)
**D Flip-Flop Timing Requirements**
- **SETUP TIME (SUT)** - D input must be stable for 10 ns prior to the rising edge of clock.
- **HOLD TIME (HT)** - D input must be stable for 2 ns after the rising edge of clock.
- **MINIMUM PULSE WIDTH (MPW)** - The clock must be high for at least 8 ns.
**Timing Assertion Development**
- Develop an expression that is true when the error occurs.
- Logically negate the expression to obtain the required assert condition.
- DeMorgan’s Theorem may be useful.
Detecting Setup Time Violation
signal STV: Boolean;
...
STV <= CLK'event and CLK = '1' and (not D'stable(SUT));
assert (not STV) report "Setup violation on D input.";
Detecting Hold Time Violation
signal D: Bit;
signal HTV: Boolean;
...
HTV <= D'event and CLK = '1' and (not CLK'stable(HT));
assert (not HTV) report "Hold violation on D input.";
More Useful Attributes
SIG_NAME'quiet(n) A signal of type Boolean that is TRUE only if signal SIG_NAME has no transactions for time n.
More Signal Attributes
SIG_NAME'delayed(n) A copy of signal SIG_NAME delayed by time n; its type is the base type of signal SIG_NAME
Attributes for Signals of Array Type
SIG_NAME'left Returns the left most value of the index range of SIG_NAME
### Attributes for Signals of Array Type
**'right**
- `SIGNAL_NAME'right` Returns the right-most value of the index range of `SIGNAL_NAME`.
- ```
signal DBUS : Bit_Vector ( 5 downto 0);
...
DBUS(DBUS'right) <= '0'; -- which value is assigned?
```
**'high**
- `SIGNAL_NAME'high` Returns the upper bound of the index range of `SIGNAL_NAME`.
- ```
signal DBUS : Bit_Vector ( 5 downto 0);
...
DBUS(0) <= DBUS(DBUS'high);
```
**'low**
- `SIGNAL_NAME'low` Returns the lower bound of the index range of `SIGNAL_NAME`.
- ```
signal DBUS : Bit_Vector ( 5 downto 0);
...
for I in DBUS'high downto DBUS'low loop
```
**'ascending**
- `SIGNAL_NAME'ascending` Returns a Boolean `TRUE` if the declaration of `SIGNAL_NAME` uses an ascending range.
- ```
if DBUS'ascending then
for I in DBUS'left to DBUS'right loop
end loop;
end if;
```
**'length**
- `SIGNAL_NAME'length` Returns the number of elements in array `SIGNAL_NAME`.
- ```
signal DBUS : Bit_Vector (5 downto 3);
...
for i in DBUS'length-1 downto 0 loop
```
**'last_event**
- `SIGNAL_NAME'last_event` Returns the TIME since the last event on `SIGNAL_NAME` occurred.
- ```
signal CLK : Bit := '0';
...
if CLK'last_event < 100 ns then
end if;
```
### Signal Attributes
- **'last_active**
`SIG_NAME'last_active`
Returns the TIME since the last transaction on `SIG_NAME`.
```vhdl
signal CLK : Bit := '0';
--
if CLK'last_active < CLK'last_event then
  assert false report
    "CLK assigned the same value at least twice";
end if;
```
- **'last_value**
`SIG_NAME'last_value`
Returns the value of `SIG_NAME` immediately before its last event (the previous value).
```vhdl
signal TRAFFIC : COLOR := red;
process begin
if TRAFFIC'last_value = red and TRAFFIC = green
then
```
More More Attributes
- **SIG'leftof / SIG'rightof**
- **SIG'succ / SIG'pred**
- **SIG'pos**
- **SIG'val**
- **SIG'range / SIG'reverse_range**
- for i in SIG'range loop
Is that all?
- No. Plenty more attributes.
- Plus: *user-defined* attributes.
ROMEO: Conversion and Evaluation of HDL Designs in the Encrypted Domain
Charles Gouert, Nektarios Georgios Tsoutsos
{cgouert, tsoutsos}@udel.edu
University of Delaware
Abstract
As cloud computing becomes increasingly ubiquitous, protecting the confidentiality of data outsourced to third parties becomes a priority. While encryption is a natural solution to this problem, traditional algorithms may only protect data at rest and in transit, but do not support encrypted processing. In this work we introduce ROMEO, which enables easy-to-use privacy-preserving processing of data in the cloud using homomorphic encryption. ROMEO automatically converts arbitrary programs expressed in Verilog HDL into equivalent homomorphic circuits that are evaluated using encrypted inputs. For our experiments, we employ cryptographic circuits, such as AES, and benchmarks from the ISCAS’85 and ISCAS’89 suites.
I. INTRODUCTION
As corporations and individuals produce an ever-increasing amount of sensitive data, a high demand has arisen for outsourcing these vast data sets to the cloud. Storing large amounts of data locally incurs high monetary and time costs in order to develop and maintain the necessary hardware infrastructure. Even though outsourcing large data sets to the cloud can be expensive, the benefits outweigh the disadvantages in many situations. Nevertheless, there are glaring problems with this approach: the security of the outsourced data is entirely dependent on the cloud service providers (CSPs), and curious CSPs can view the sensitive data stored on their servers.
As more users outsource their data to the cloud, attackers devise new methodologies to compromise the CSP servers hosting sensitive information. In fact, many research efforts have proposed and demonstrated viable attacks in this area [1]–[4]. Several unique solutions have been proposed to combat various attacks on cloud servers [5], [6], but have not seen widespread adoption. As the security of outsourced data lies entirely in the hands of the CSP, users need to take measures to ensure the confidentiality of their data.
A natural solution to these major problems outlined above is encryption, which can protect data at rest and in transit. For example, secure database frameworks such as Arx [7] and CryptDB [8] utilize encryption to protect stored data. Encryption prevents CSPs from viewing plaintext data and ensures privacy even if the cloud servers are compromised by attackers. However, these benefits come with a serious drawback: if the outsourced data is dynamic and should change over time, standard encryption remains limited. Indeed, to update and perform computations with the outsourced encrypted data, the data must first be pulled from the cloud, decrypted, used for computation, re-encrypted, and then re-uploaded to the cloud. This lengthy and computationally intensive process defeats the purpose of outsourcing in the first place.
To allow the cloud to carry out operations on encrypted data, it is necessary to utilize special algorithms that protect data in use. Fully homomorphic encryption (FHE), often referred to as the “holy grail” of cryptography [9], enables arbitrary computation on encrypted data and can eliminate the lengthy process previously discussed. FHE allows the cloud to carry out meaningful computations while remaining completely oblivious to details about the plaintext data [10].
While open-source homomorphic encryption libraries are readily available today, they are prohibitively difficult to use for non-crypto savvy programmers. Various parameters must be set properly to ensure sufficient levels of security, complicated objects and variables must be initialized (and later properly deleted), and a deep understanding of the library’s API is required to properly carry out computations on ciphertexts. Also, for many libraries, ciphertext noise must be continuously monitored to determine when special noise-reduction steps are required to ensure successful decryption.
In this work, we present ROMEO: a novel framework that eliminates the steep learning curve of FHE by automatically converting arbitrary Verilog programs to equivalent homomorphic programs compatible with the state-of-the-art TFHE library [11]. Security parameters, key generation and management, ciphertext generation, and freeing memory are handled transparently and abstracted away from the user. Specifically, our contributions can be summarized as follows:
- Automated conversion of algorithms expressed in Verilog into equivalent homomorphic circuits that enable privacy-preserving processing of encrypted data on the cloud.
- A novel compiler that translates combinational and sequential netlists into standard C++ code implementing equivalent fully homomorphic operations.
- A versatile execution engine that enables homomorphic evaluation of state machines and sequential algorithms using encrypted clock signals.
The remainder of the paper is organized as follows: Section II provides a brief background on FHE, modern implementations, and the TFHE library employed in this work. Section III presents an overview
of our ROMEO framework, while Section IV presents our experimental evaluation. Section V offers comparisons with related works, and Section VI presents our concluding remarks.
II. PRELIMINARIES
A. Basics of Homomorphic Encryption
Homomorphic encryption (HE) allows users to perform operations on encrypted data without ever exposing the plaintext. In particular, for an arbitrary function \( F \) on plaintexts “a” and “b” there is an HE-equivalent function \( G \) on the encryptions of “a” and “b” so that \( F(a, b) = \text{Dec}(G(\text{Enc}(a), \text{Enc}(b))) \) (i.e., decrypting the value of \( G \) on ciphertexts yields the value of \( F \) on plaintexts). Since HE schemes never expose plaintext data while carrying out computations, this form of encryption enables companies and individuals to outsource sensitive data to untrusted third parties, such as the cloud, and dictate them to perform homomorphic operations on that data.
Various “partially” HE schemes, such as RSA [12], Paillier [13], and ElGamal [14], have existed for several decades and support only certain homomorphic operations (such as addition or multiplication, but not both). In addition, there exist “leveled” HE schemes that allow evaluating Boolean circuits up to a certain depth [15]; the latter lacks a mechanism to deal with noise accumulated in ciphertexts after each operation. In fact, as more operations are performed on ciphertexts, they could eventually become non-decryptable and completely useless. Thus, the circuit depth must be restricted to keep the ciphertext noise within acceptable levels.
In 2009, Gentry proposed a groundbreaking FHE scheme that enables evaluating Boolean circuits of arbitrary depth without noise problems [10]. Specifically, Gentry was able to reset ciphertext noise using a technique called bootstrapping, which entails evaluating a ciphertext decryption circuit homomorphically. Surprisingly, this technique reduces ciphertext noise to safe levels and allows unlimited computations. Gentry’s method paved the way for the first generation of FHE.
The implementations of first generation FHE schemes were much slower than today’s state-of-the-art libraries. For example, one bootstrapping operation took between 30 seconds with weak security parameters and approximately 30 minutes with strong security parameters using one of the first available FHE libraries [16]. In 2012, it was also demonstrated that the AES circuit could be evaluated homomorphically within 36 hours [17]. At that time, homomorphic encryption was infeasible for use outside of the academic sphere due to its slow speeds and low memory efficiency.
Over time, new FHE schemes have been proposed that drastically improved the speed of bootstrapping and other homomorphic operations. Gentry, Sahai, and Waters started this trend with their seminal 2013 paper proposing a scheme known as the GSW cryptosystem, which reduced the execution time of
homomorphic addition and multiplication by transforming them into matrix addition and multiplication respectively [18]. Notably, this scheme also does not require the untrusted third party carrying out computations on ciphertexts to have an evaluation key. A novel scheme introduced in 2014 called FHEW [19], built upon GSW to create a library that could execute bootstrapping procedures in less than one second. Initially, FHEW supported only the FHE equivalent of a NAND operation with bootstrapping; this was chosen because it is a functionally complete operation (i.e., it can implement any arbitrary function). Building upon the principles of FHEW, in 2017 a new open source library called “TFHE: Fast Fully Homomorphic Encryption over the Torus” has been proposed [11].
B. Homomorphic Encryption Libraries
To date, several open-source homomorphic encryption libraries are available. HElib [16], the first publicly available HE library, performs mathematical operations on multi-bit ciphertexts and can compute any polynomial function of arbitrary degree. However, this library has several drawbacks that make it impractical for general purpose computation. First, bootstrapping speeds and evaluation times remain high compared to newer libraries. In addition, HElib exposes a complex API that requires users to tune multiple security parameters, as well as manually keep track of ciphertext noise and determine when bootstrapping should be applied.
In 2018, Microsoft released their own homomorphic library called SEAL [20]. While this library provides users with a simpler API that allows conducting additions and multiplications on ciphertexts, it is not capable of FHE in its current state. Instead, SEAL provides leveled homomorphic encryption, which does not offer a bootstrapping function and therefore allows for only a finite number of operations on ciphertexts. While this may be suitable for some applications, it is not sufficient for general-purpose computation (as in ROMEO) that requires support of circuits of arbitrary depth.
FHEW, as described previously, initially implemented only NAND evaluations on encrypted bits, while in 2017, increased functionality was added to the library, including NOR, OR, AND, and NOT evaluations. While FHEW is fully homomorphic and provides fast bootstrapping speeds compared to prior schemes, its successor, TFHE, boasts even faster speeds [21]. TFHE is a fast FHE library first released in 2017, which is a successor to FHEW and operates exclusively on Boolean circuits. All ciphertexts are encrypted as binary values: plaintext data is converted to binary, encrypted bit by bit, and stored in a ciphertext array that has a size of approximately $2.2kB \times N$, where $N$ is the number of bits in the plaintext. The TFHE library offers the ability to carry out any logic gate function on ciphertexts and handles bootstrapping automatically after each gate evaluation (except the NOT gate that does not need bootstrapping). Since TFHE supports evaluation of all types of logic gates (i.e., it offers multiple functionally-complete sets
Fig. 1. **ROMEO Outline.** Verilog designs are converted to netlists and then passed to the ROMEO compiler. The compiler administers keys, receives inputs from the user, and generates an encrypted circuit to the cloud for outsourcing. When the cloud finishes the circuit evaluation, the resulting ciphertext is sent to the user.
of operations), it supports arbitrary computation on encrypted data. This property, as well as the fact that it can evaluate circuits of arbitrary depth, classify it as fully homomorphic. TFHE provides very competitive bootstrapping speeds and gate evaluation times. Thus, TFHE is an ideal candidate for use with ROMEO.
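As a concrete illustration of the gate-level interface ROMEO targets, the fragment below sketches a single bootstrapped homomorphic AND in the style of TFHE's publicly documented gate-bootstrapping tutorial API. It is a minimal sketch, not code from ROMEO, and the security parameter 110 is an arbitrary example value.

```cpp
#include <tfhe/tfhe.h>

// Minimal TFHE-style sketch: encrypt two bits, AND them homomorphically, decrypt.
int main() {
    TFheGateBootstrappingParameterSet* params =
        new_default_gate_bootstrapping_parameters(110);          // example security level
    TFheGateBootstrappingSecretKeySet* key =
        new_random_gate_bootstrapping_secret_keyset(params);

    LweSample* a = new_gate_bootstrapping_ciphertext_array(1, params);
    LweSample* b = new_gate_bootstrapping_ciphertext_array(1, params);
    LweSample* r = new_gate_bootstrapping_ciphertext_array(1, params);

    bootsSymEncrypt(&a[0], 1, key);                 // encrypt plaintext bit 1
    bootsSymEncrypt(&b[0], 0, key);                 // encrypt plaintext bit 0

    bootsAND(&r[0], &a[0], &b[0], &key->cloud);     // homomorphic, bootstrapped AND

    int result = bootsSymDecrypt(&r[0], key);       // decrypts to 1 AND 0 = 0
    (void)result;

    delete_gate_bootstrapping_ciphertext_array(1, r);
    delete_gate_bootstrapping_ciphertext_array(1, b);
    delete_gate_bootstrapping_ciphertext_array(1, a);
    delete_gate_bootstrapping_secret_keyset(key);
    delete_gate_bootstrapping_parameters(params);
    return 0;
}
```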
### III. THE ROMEO FRAMEWORK
ROMEO offers the following functionality: it consumes Verilog programs and outputs a homomorphic circuit operating on encrypted data, which can be evaluated by an untrusted remote party. To accomplish this, the first step is to use synthesis to convert Verilog programs to netlists consisting of logic gates and primitive memory structures like flip flops. Next, the generated netlist serves as an input to ROMEO’s special compiler that parses the circuit, determines the correct execution order of the gates, and generates an equivalent and efficient homomorphic program. An outline of our framework is illustrated in Figure 1.
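For concreteness, the kind of Verilog input ROMEO consumes could be as simple as the combinational module below; it is a hypothetical example, not one of the paper's benchmarks. Synthesis reduces such a design to a netlist of standard gates before the homomorphic conversion.

```verilog
// Hypothetical 1-bit full adder as an example ROMEO input.
module full_adder(input a, input b, input cin,
                  output sum, output cout);
    assign sum  = a ^ b ^ cin;
    assign cout = (a & b) | (cin & (a ^ b));
endmodule
```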
#### A. RTL Synthesis
To handle synthesis, ROMEO’s back-end uses the Yosys Open SYnthesis Suite [22], which is an open source toolchain performing RTL synthesis along with basic circuit optimization functionality. Our framework receives Verilog source code files as input and instructs the Yosys back-end to apply the following:
1) perform optimizations including removing unused wires and replacing process blocks with flip-flops;
2) map cells to standard logic gates and small multiplexers;
3) write resulting netlist as an EDIF (Electronic Design Interchange Format) file.
#### B. Combinational Circuit Conversion
Once an EDIF netlist is generated by Yosys, ROMEO’s compiler transforms it into a standard C/C++ program composed of homomorphic operations exposed by TFHE’s API. First, the EDIF netlist is scanned and the compiler identifies all gates and wires in the circuit. On a second pass, connections between gates and wires are made and the circuit detailed in the EDIF file is now fully constructed. Finally, the C/C++ source file is created and all ciphertext structures required for the circuit (i.e., one ciphertext per wire) are initialized.
To begin conversion, our compiler takes plaintext inputs from the user in binary and generates C++ code that calls TFHE functions to encrypt them. The now-encrypted inputs are loaded into their corresponding input wires in the HE circuit using TFHE’s copy gate functionality, which introduces negligible overhead. Next, ROMEO constructs a Directed Acyclic Graph (DAG) to determine the execution order of all gates in the HE circuit. This is necessary as homomorphic gate evaluations are serialized and, for each gate, the evaluations of all gates it depends on must be completed before its input wires are assigned the correct ciphertext values. The DAG construction is outlined in Algorithm 1: the graph is traversed until all gate operations have been written consecutively to the generated C++ file. Finally, the ciphertexts corresponding to output wires are saved in a file and all ciphertext structures are destroyed.
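For illustration, the snippet below is a hypothetical excerpt of the kind of C++ code such a conversion could produce for a tiny two-gate netlist; the wire and function names are ours, not actual ROMEO output.

```cpp
#include <tfhe/tfhe.h>

// Hypothetical excerpt of generated circuit code: one ciphertext per wire,
// gates emitted in dependency (DAG) order, inputs loaded with the copy gate.
void evaluate_circuit(const TFheGateBootstrappingParameterSet* params,
                      const TFheGateBootstrappingCloudKeySet* cloud_key,
                      const LweSample* user_input /* encrypted input bits */,
                      LweSample* out0 /* encrypted output bit */) {
    LweSample* in0 = new_gate_bootstrapping_ciphertext(params);
    LweSample* in1 = new_gate_bootstrapping_ciphertext(params);
    LweSample* n1  = new_gate_bootstrapping_ciphertext(params);

    bootsCOPY(in0, &user_input[0], cloud_key);  // load encrypted inputs (cheap copy gate)
    bootsCOPY(in1, &user_input[1], cloud_key);
    bootsAND(n1, in0, in1, cloud_key);          // gate g1: n1 = in0 AND in1 (bootstrapped)
    bootsNOT(out0, n1, cloud_key);              // gate g2: out0 = NOT n1 (no bootstrapping)

    delete_gate_bootstrapping_ciphertext(in0);
    delete_gate_bootstrapping_ciphertext(in1);
    delete_gate_bootstrapping_ciphertext(n1);
}
```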
#### C. Sequential Circuit Conversion
Evaluating sequential circuits in the encrypted domain requires a more involved approach than purely combinational circuits. For one, the incorporation of a clock signal poses an important challenge for homomorphic evaluation: before the clock signal can be used as an input to an encrypted-domain function, the current clock state must itself be encrypted, since it is not possible to mix plaintext clock signals with ciphertexts. There are two approaches to satisfying this requirement: the user could either encrypt a large number of 0’s and 1’s prior to circuit evaluation and upload these values to the cloud, or instruct the cloud to encrypt these values as needed on the fly. In this work, we employ the latter approach in order to minimize the computation on the user side, as well as reduce the communication overhead between the user and the remote cloud server.
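One plausible way for the cloud to produce these encrypted clock values on the fly (not necessarily ROMEO's exact mechanism) is TFHE's trivial constant encryption, since the alternating clock pattern itself is not sensitive; a sketch under that assumption:

```cpp
#include <tfhe/tfhe.h>

// Sketch: the cloud derives the encrypted clock locally. Because the clock
// value is public, a trivial constant encryption suffices; it starts at '0'
// and is inverted after every complete pass through the circuit.
void run_cycles(int num_cycles,
                const TFheGateBootstrappingParameterSet* params,
                const TFheGateBootstrappingCloudKeySet* cloud_key) {
    LweSample* clk = new_gate_bootstrapping_ciphertext(params);
    for (int cycle = 0; cycle < num_cycles; ++cycle) {
        bootsCONSTANT(clk, cycle & 1, cloud_key);  // '0' on the first pass, inverted each pass
        // ... evaluate (or re-evaluate) the gates that depend on clk ...
    }
    delete_gate_bootstrapping_ciphertext(clk);
}
```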
In addition to the encrypted clock challenge, TFHE does not offer support for sequential circuit components such as flip-flops (FFs). Thus, to incorporate FF functionality into homomorphic circuits, ROMEO implements a gate re-evaluation technique illustrated in Algorithm 2. First, we begin by instructing the cloud to generate an encrypted clock signal that initializes to ’0’ (and inverts after every complete pass through the circuit). Then, the cloud proceeds to evaluate the circuit like a combinational circuit; when a FF is reached, the data input to the FF is stored for the next round and the output takes on the FF input from the previous round. On subsequent passes through the circuit, only gates that depend on the output of FFs and gates upon which FFs are dependent are re-calculated. Purely combinational logic networks separated from sequential components are only executed on the initial pass, as their outputs will not change over time.

Algorithm 1: Determine Order of Gate Evaluations
```python
for gate in circuit:
    for wire in gate.inputs:
        if wire is output from another gate:
            gate.dependsOn += wire.originator
while unevaluated gates remain:
    for gate in circuit:
        if gate.evaluated == True:
            continue
        if gate.dependsOn == "":
            gate.evaluated = True
            write_gate_to_file(gate)
        else:
            ready = True
            for prevGate in gate.dependsOn:
                if prevGate.evaluated == False:
                    ready = False
            if ready == True:
                gate.evaluated = True
                write_gate_to_file(gate)
return
```
Notably, the cloud remains oblivious to the number of clock cycles necessary to finish a circuit evaluation, since it has no knowledge of the plaintext values assigned to wires and signals in the circuit. Thus, users must define in advance how many clock cycles are needed for the circuit to complete its evaluation. While ROMEO’s compiler is generating the homomorphic circuit for outsourcing, it prompts the user for the number of timesteps required during evaluation, and it re-evaluates the necessary logic gates for each additional timestep. In ROMEO, combinational circuits are treated as sequential circuits with a single timestep.
Algorithm 2: Optimized Circuit Re-evaluation
```python
def re_eval(gate):
    if gate precedes a flip-flop:
        flag gate for re-evaluation
        for prevGate in gate.dependsOn:
            re_eval(prevGate)
    elif gate follows a flip-flop:
        flag gate for re-evaluation
        for nextGate in gate.next:
            re_eval(nextGate)
    return
```
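To make the flip-flop emulation concrete, the following sketch (with hypothetical names, not actual ROMEO output) models a D flip-flop with two ciphertexts: the value captured during the current pass and the output read by dependent gates, updated at each cycle boundary as in Algorithm 2.

```cpp
#include <tfhe/tfhe.h>

// Hypothetical D flip-flop emulation: 'ff_d' holds the value captured this
// round, 'ff_q' is what downstream gates read. At each cycle boundary the
// captured value becomes the new output.
struct HomomorphicDFF {
    LweSample* ff_d;  // data input stored for the next round
    LweSample* ff_q;  // output seen by dependent gates
};

void clock_edge(HomomorphicDFF* ff, const TFheGateBootstrappingCloudKeySet* ck) {
    bootsCOPY(ff->ff_q, ff->ff_d, ck);  // Q takes the previously stored D
}
```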
#### D. Circuit Verification using Debug Mode
The ROMEO framework provides users with a convenient method for testing the correctness of a homomorphic circuit before outsourcing to a third party. This saves users from the cost and time required to deploy potentially faulty code to the cloud. To add debugging functionality, the generated TFHE C++ code can contain additional verification elements: the user’s private key is read in by the program to assist with decryption and users are prompted to directly input plaintext values that are immediately encrypted with the private key and loaded into the circuit’s input wires. Once the circuit evaluation has completed, the private key is used to decrypt all output wires and to print the corresponding plaintext outputs.
To rapidly verify the correctness of a circuit in debug mode, ROMEO can encrypt circuit inputs using the evaluation key instead of the private key. Normally, the evaluation key is used to encrypt non-sensitive constant values for computation with sensitive encrypted ciphertexts, and TFHE treats ciphertexts generated with the evaluation key as “trivial”, assuming that both the third party and the user know the corresponding plaintext values. The execution overhead for FHE gates processing these “trivial” ciphertexts is very low, at approximately 10 microseconds per gate evaluation; this is three orders of magnitude faster than the typical FHE gate evaluation time of 13 ms [11]. Using this feature, users can check the correctness of FHE circuits very efficiently. We remark that ROMEO’s debug mode can only be used locally, as it is insecure to encrypt data with the evaluation key when outsourcing to the cloud.
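Assuming the trivial ciphertexts are produced with TFHE's constant encryption, a local verification pass could look roughly like the sketch below; the function and variable names are illustrative, and the call to the generated circuit is left as a placeholder.

```cpp
#include <tfhe/tfhe.h>

// Debug-mode sketch: load plaintext test inputs as "trivial" ciphertexts so
// gate evaluations run orders of magnitude faster, evaluate the circuit, then
// decrypt the outputs with the secret key and compare against expected values.
bool verify_locally(const int* test_in, int n_in, const int* expected, int n_out,
                    const TFheGateBootstrappingParameterSet* params,
                    const TFheGateBootstrappingSecretKeySet* key) {
    LweSample* in  = new_gate_bootstrapping_ciphertext_array(n_in, params);
    LweSample* out = new_gate_bootstrapping_ciphertext_array(n_out, params);
    for (int i = 0; i < n_in; ++i)
        bootsCONSTANT(&in[i], test_in[i], &key->cloud);   // trivial encryption

    // evaluate_circuit(params, &key->cloud, in, out);    // generated circuit code goes here

    bool ok = true;
    for (int i = 0; i < n_out; ++i)
        ok = ok && (bootsSymDecrypt(&out[i], key) == expected[i]);

    delete_gate_bootstrapping_ciphertext_array(n_in, in);
    delete_gate_bootstrapping_ciphertext_array(n_out, out);
    return ok;
}
```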
### IV. EXPERIMENTAL EVALUATION
The ROMEO framework was used to convert all combinational and sequential circuits from the ISCAS ’85 [23] and ISCAS ’89 [24] benchmark suites to the encrypted domain. In addition, we converted
five encryption benchmark circuits to demonstrate the robustness of our framework. These benchmarks were chosen due to their widespread use, the broad range of circuit sizes, and the inclusion of both combinational and sequential circuits. All experiments were performed on an Ubuntu 18.10 host with 8 GB of RAM and an i7-8650U CPU. The TFHE security parameter ($\lambda$) was set to the default value for 110 bits of security. Lastly, the reported times were averaged over 10 executions per circuit, and each execution was assigned one exclusive processor core.
#### A. ISCAS Combinational Circuits
The homomorphic circuit evaluation times for the ISCAS ’85 combinational benchmarks are presented in Figure 2. Our results show an approximately linear increase in execution time with the number of evaluated gates. Nevertheless, the evaluation times for different gates are not the same: for instance, inverters are evaluated much faster than other logic gates because no bootstrapping is required for this operation. As illustrated in the graph, the c5315 circuit incurs longer evaluation times than the two largest circuits despite its smaller size. This deviation from expected behavior is attributed to the proportion of inverter gates to the overall number of gates in the circuit. Indeed, the two largest circuits contain approximately 34% inverters while c5315 contains about 25% inverters.
#### B. ISCAS Sequential Circuits
The results for the ISCAS ’89 sequential circuit benchmarks are presented in Figure 3. These numbers show the amortized execution cost per cycle (i.e., one complete circuit evaluation). This cost was amortized over ten clock cycles. As with the combinational results, a roughly linear increase in execution time is observed with increasing numbers of gates as anticipated. However, more variance is observed due to the varying number of gates that need to be re-evaluated for each cycle. This is entirely dependent on the circuit configuration.
Fig. 3. Amortized evaluation time per cycle (over 10 cycles) for encrypted circuits from the ISCAS ’89 benchmark suite.
#### C. Encryption Circuits
To further illustrate the robustness of the ROMEO framework, we tested its performance using five circuits implementing the following well-known encryption algorithms: DES [25], AES [26], PRESENT [27], SIMON and SPECK [28]. The last three algorithms are lightweight block ciphers, and their circuits are well suited for homomorphic evaluation. In more detail, PRESENT has an 80-bit key and a 64-bit block size, while the SIMON and SPECK ciphers [28] support a variety of block and key sizes (in this work, we implemented the 128/128 variants with 128-bit block size and 128-bit key size). Moreover, DES uses 56-bit keys (with 8 parity bits added for a total of 64 bits) and a 64-bit block size. Finally, AES, the most widely used encryption cipher today, uses a 128-bit key and a 128-bit block size [26]. Our experimental results in Table I show that the homomorphic evaluation of PRESENT was the fastest, with SIMON and SPECK being slightly slower. Conversely, the homomorphic evaluation of DES took approximately 24 minutes and AES required 13.5 minutes due to the complexity and larger size of these circuits. In the case of DES, we attribute the slow speed to the substitution step, which is implemented with look-up tables; since it is not possible to branch on encrypted data, all possible outputs must be computed for each look-up table evaluation.
Table I. Homomorphic evaluation of the encryption benchmark circuits.

| Cipher | Evaluation Time (s) | Cycles | Gate Evaluations | Input Wires | Output Wires |
|---|---|---|---|---|---|
| PRESENT | 107.35 | 31 | 12256 | 144 | 64 |
| SIMON | 129.28 | 68 | 13698 | 256 | 128 |
| SPECK | 152.70 | 32 | 17821 | 256 | 128 |
| DES | 1461.29 | 16 | 167058 | 120 | 64 |
| AES | 810.65 | 10 | 61113 | 256 | 128 |
#### D. Scheme Hopping on Cloud Servers
The lightweight ciphers in Section IV-C enable practical applications of encrypted computation, such as scheme hopping. With scheme hopping, users first encrypt their sensitive data with a symmetric encryption algorithm (e.g., compute SIMON ciphertexts, which are much smaller than TFHE ciphertexts) and then upload these encryptions to a cloud server; in turn, the cloud server encrypts each bit of these ciphertexts a second time with TFHE. The users also encrypt each bit of their symmetric key (i.e., the SIMON key) with TFHE and upload these encryptions to the cloud server as well. Using ROMEO, the cloud server can generate and evaluate the FHE circuit corresponding to symmetric decryption (e.g., SIMON decryption) using the TFHE ciphertexts of the symmetric key and the user data. This process “peels off” the symmetric encryption and results in a TFHE ciphertext on the cloud server. Depending on the size of the initial plaintext, this can drastically reduce the communication overhead between the user and the cloud, as the uploaded user data is symmetrically encrypted (only the key bits are encrypted with FHE).
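As a rough back-of-the-envelope illustration (using the approximately 2.2 kB-per-bit TFHE ciphertext size mentioned earlier and SIMON's 128-bit key; exact sizes depend on parameters), uploading $M$ plaintext bits directly as TFHE ciphertexts costs about

$$2.2\,\text{kB} \times M,$$

whereas with scheme hopping the user uploads roughly $M/8$ bytes of SIMON ciphertext plus a one-time $128 \times 2.2\,\text{kB} \approx 282\,\text{kB}$ for the TFHE-encrypted key, so the savings grow with the amount of outsourced data.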
To demonstrate this method, we utilized an Amazon EC2 instance with 48 vCPUs and 384 GiB of memory to perform scheme hopping using SIMON. The local host computed a SIMON ciphertext for a 128-bit plaintext, as well as the TFHE encryption of SIMON’s key (resulting in 128 ciphertexts of roughly 2.2 kB each). The TFHE-encrypted SIMON key and the 128-bit SIMON ciphertext were uploaded to the EC2 instance (this step took 2.1 seconds), and the Amazon server was able to “peel off” the symmetric encryption and compute a TFHE ciphertext corresponding to the original 128-bit plaintext. This evaluation took 19.63 seconds on the EC2 server and minimized the upload overhead for the local host.
#### E. User Overhead for TFHE Encryption and Decryption
From the user’s perspective, there is a one-time cost to generate a keypair (which can be used for multiple circuits) and encrypt inputs with the secret key. On average, key generation takes approximately 770 milliseconds with 110 bits of security, and the cost of encryption is 22 microseconds per bit of plaintext. The decryption time is negligible, at less than 1 microsecond per bit.
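For instance, using these averages, preparing a 128-bit input involves the one-time 770 ms key generation plus $128 \times 22\,\mu\text{s} \approx 2.8$ ms of encryption, and decrypting a 128-bit output takes well under a millisecond.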
### V. RELATED WORKS
While fully homomorphic encryption has garnered a great deal of attention in the years since its inception, the majority of research efforts in this field focus on acceleration, improving existing schemes, and specific applications of homomorphic encryption. For instance, recent works have explored the potential of neural network training and inferencing in the encrypted domain [29] [30]. To the best of the authors’ knowledge, there is no framework that supports complete conversion of arbitrary HDL
designs to encrypted circuits. However, past research efforts have been made to make homomorphic encryption more usable for the average programmer.
The E³ framework [21] provides users with an API that allows them to flag sensitive variables as “secure” in C/C++ programs. These variables are homomorphically encrypted, and each program statement involving them generates a corresponding homomorphic circuit. In addition, E³ offers users the choice of HElib, FHEW, or TFHE 1.0. However, this approach requires users to modify their source code and does not support arbitrary functionality (e.g., it cannot process conditionals on encrypted data).
The Cingulata compiler toolchain [31] allows for the conversion of C/C++ programs to homomorphic circuits and provides similar functionality to E³, with some caveats: it requires users to modify their programs to work with the toolchain and, while providing a simpler API than many homomorphic encryption libraries, it demands significant effort on the part of the user to understand the nuances of the Cingulata library and its associated structures and data types. Conversely, ROMEO abstracts this complexity and enables automated conversion of HDL code into C++ executables.
### VI. CONCLUSION
In this work, we have proposed a novel framework for automated conversion from arbitrary synthesizable Verilog HDL designs to encrypted circuits for privacy outsourcing applications. First, Verilog designs are converted to netlists through the process of synthesis. Next, the ROMEO custom compiler creates an internal representation of the circuit described in the netlist and determines the correct execution order for the homomorphic gate evaluations. The resulting homomorphic circuit is written to a C++ source code file that employs the TFHE library and can be sent to the cloud for evaluation along with encrypted inputs. For the user’s peace of mind, ROMEO provides a debug mode capable of fully simulating the homomorphic circuit locally to verify correct operation.
We tested ROMEO with circuits from the ISCAS ’85 and ’89 benchmark suites as well as five well-known cryptographic circuits. In all cases, we observed a roughly linear increase in encrypted circuit evaluation time with a growing number of gate evaluations. On a final note, it is possible for users to enhance the usability of this framework further by incorporating high level synthesis (HLS) tools into the toolchain. This would allow for assisted conversion from high level languages such as C/C++ to homomorphic circuits. The ROMEO framework is open source and is available at the following repository: https://github.com/TrustworthyComputing/Romeo.
### REFERENCES
Extraction of Connected Region Boundary in Multidimensional Images
David Coeurjolly\(^1\), Bertrand Kerautret\(^2\), Jacques-Olivier Lachaud\(^3\)
\(^1\) CNRS, LIRIS UMR5205, Université de Lyon
\(^2\) LORIA, Université de Lorraine
\(^3\) LAMA, Université de Savoie
Abstract
This paper presents an algorithm to extract the boundary of a connected region(s) using classical topology definitions. From a given adjacency definition, the proposed method is able to extract the boundary of an object in a generic way, independently of the dimension of the digital space.
Source Code
The implementation of the algorithm is available through the DGtal library\(^1\). The source code and the demonstration are based on a special version of DGtal containing no external dependencies. They are both available on the IPOL web page of this article\(^2\).
Supplementary Material
The DGtalTools\(^3\) project gives several additional tools exploiting the proposed algorithms. These tools are defined to process both 2D and 3D images.
Keywords: Nd connected components, discrete geometry, topology
1 Introduction
The aim of this work is to present an algorithm which extracts the boundary of connected region(s) using classical topology definitions. It can be useful in particular for algorithms which need a discrete contour as input. Such contours can be extracted from grayscale images like the ones displayed in Figure 1.
These digital contours are level set contours in the displayed grey-level image, specified by a threshold parameter (i.e. the level set), a reference point and a maximal distance. In order to describe in a generic way the object on which the contour is extracted, the user must specify a predicate on volume elements which specifies whether or not a given element belongs to the region(s) of interest.
\(^1\)DGtal: Digital Geometry tools and algorithms library, http://libdgtal.org
\(^2\)http://dx.doi.org/10.5201/ipol.2014.74
\(^3\)https://github.com/DGtal-team/DGtalTools
Extraction of Connected Region Boundary in Multidimensional Images
Figure 1: Example of contour extraction (b) from source image (a).
This information is sufficient to extract the interpixel boundaries of this or these region(s). The boundaries are extracted by tracking along the frontier between the region and its complement. The user may choose between two adjacency definitions for the object(s): the interior and the exterior adjacency. The extracted set of connected surface elements satisfies the choice of adjacency. In dimension 2, the surface elements can be ordered to form contour(s) composed of 4-connected points in the half-integer plane. Predicates on volume elements can be very simple, such as the example described in Figure 1: the predicate is just a composition of thresholds on the image values.
Complexity note: extracting all the contours for a given predicate takes $O(MN)$ calls to the predicate, if $M$ and $N$ are respectively the width and height of the image. In arbitrary dimension, the computational cost is proportional to the size of the predicate domain (number of grid points). Extracting a single contour given a starting boundary element takes $O(C)$ calls to the predicate, if $C$ is the number of points of this contour.
Implementation note: although the examples and applications are given in 2D and 3D, the surface extraction algorithm is presented and implemented for arbitrary dimension. The applications of Section 3 can be reproduced with the demo version given on the IPOL web page of this article.
2 Algorithm
### 2.1 Digital Surfaces for Boundary Extraction
Different definitions of digital surfaces can be found in the digital topology domain (refer to the paper by Kong et al. [2] for a survey). A first group of approaches was introduced by Rosenfeld in the 70s [5, 6], which consists in defining a digital surface as a subset $S$ of $\mathbb{Z}^n$ such that $\mathbb{Z}^n \setminus S$ is composed of two $\alpha$-connected components (with, for instance, $\alpha = 4$ or $\alpha = 8$ for the square pixels of a 2D image) and $S$ is thin (i.e., if any point of $S$ is removed, the preceding property no longer holds). Even if such a definition can be used in 2D, it appears more difficult to exploit in higher dimensions. A second type of definition considers surfaces as $n - 1$ dimensional cubical complexes [3]. This representation is convenient to describe the object but is not adapted to process geometric information of the object boundary. To extract the boundary of a digital surface, we shall use another definition which relies on the set of $n - 1$-cells with some specific adjacency (in the same spirit as Herman [1] or Udupa [7]). Note that we consider here only the implementation for regular grids.
Digital surface as a set of \( n-1 \)-cells. Formally, the elements of a digital space \( \mathbb{Z}^n \) are called spels (and often pixels in 2D and voxels in 3D). A surfel is a couple \((u, v)\) of face-adjacent spels. A digital surface is a set of surfels. A spel is thus an \( n \)-cell in a cellular grid decomposition of the space, while a surfel is some oriented \( n-1 \)-cell incident to the two \( n \)-cells (see Figure 2).

From the implementation point of view, the set of \( n-1 \)-cells can be obtained with an incidence relation from the given spels. In order to be able to extract object boundaries in a consistent way, we also consider cells with a specific orientation (positive or negative). By convention, spels lying in the interior of the object of interest are given a positive orientation. In the DGtal library framework, the orientation can be specified in the KSpace class (see user documentation\(^5\)):
```cpp
#include "DGtal/helpers/StdDefs.h"
...
using namespace DGtal::Z3i;
...
KSpace K;
// An initial 3D signed spel defined for instance with positive orientation
SCell v = K.sSpel( Point( 0, 0, 0 ), KSpace::POS );
SCell sx = K.sIncident( v, 0, true ); // surfel further along x
```
We can now obtain the digital surface that lies in the boundary of some digital shape \( S \subset \mathbb{Z}^n \) as the set of oriented surfels between spels of \( S \) and spels not belonging to \( S \). Algebraically, \( S \) is the formal sum of its positively oriented spels, and its boundary is obtained by applying the boundary operator on \( S \). Figure 3 illustrates the linear boundary operator applied on the set of spels of the object. By linearity, the operator is applicable spel by spel, and when opposite cells appear they cancel each other (see cases (c,d) and (h,i)).
Algorithm 1 exploits this definition to extract, given an input shape, the set of surfels that constitutes the shape boundary independently of its dimension. An \( n \)D digital Khalimsky space (the class KhalimskySpaceND\(^6\) in the DGtal library) is used to represent the cubic grid complex, whose oriented cells are defined as an array of integers (see the paper by Lachaud [4] for more details). In particular, the method `uSpel(Point p)` creates a cell of maximal dimension from a point with coordinates in \( \mathbb{Z}^n \). Conversely, the coordinates of a cell can be recovered with the method `uCoords(Cell c)`.
\(^5\)http://libdgtal.org/doc/nightly/dgtal_cellular_topology.html
\(^6\)http://libdgtal.org/doc/nightly/classDGtal_1_1KhalimskySpaceND.html
Algorithm 1: Extract the (unstructured) set of surfels that forms the boundary of the digital shape defined by the predicate $Pred$.
(Method $\text{DGtal::Surfaces::detectBoundarySurfels}$ in the DGtal framework)
\begin{verbatim}
input: KhalimskySpaceND $K$; // Any Khalimsky space
input: Point lowerB; // Lowest point in the space
input: Point upperB; // uppermost point in the space
input: PointPredicate $Pred$; // A predicate Point -> bool
output: SurfelSet $aBoundary$; // The detected set of surfels, i.e. the boundary.
Integer $k$; // Current dimension for searching surfels
bool $inHere$, $inFurther$;
for $k = 0$ to $K$.dimension - 1 do
Cell $kLowerB = K.uSpel( lowerB )$; // lowest spel along $k$
Cell $kUpperB = K.uGetDecr( K.uSpel( upperB ), k )$; // uppermost spel along $k$
Cell $p = kLowerB$; // current spel
while $p$ in bounds $kLowerB$ and $kUpperB$ do
Cell $pNext = K.uGetIncr( p, k )$; // next spel along $k$
$inHere = Pred( K.uCoords( p ) );$
$inFurther = Pred( K.uCoords( pNext ) );$
if $inHere$ != $inFurther$ then
// boundary element, add it to the set
$aBoundary$.insert( $K.sIncident( K.signs( p, inHere ), k, true )$ );
$p = $ first spel after $p$ in bounds $kLowerB$ and $kUpperB$
return $aBoundary$
\end{verbatim}
The boundary operator algorithm is given by the method `DGtal::Surfaces<TKSpace>::detectBoundarySurfels` from the directory `DGtal/topology/helpers`. Images (a,f) of Figure 4 illustrate the result of this method applied on a set of pixels (a) and on a set of voxels (f) by calling the same method `detectBoundarySurfels`.
Once the digital set of surfels is defined, the relation between surfels needs to be determined to transform the digital surface into a graph.
**Digital surface as a graph: adding adjacencies between surfels** To apply this transformation we have to connect surfels that share \( n - 2 \)-cells. The resulting adjacency relations are called bel adjacencies in the terminology of Herman [1], Udupa [7] and others. Generally, an \( n - 2 \)-cell is shared by two \( n - 1 \)-cells, except in "cross configurations", which are illustrated in the figure on the right.
The interior bel adjacency makes the choice to connect \( a \) to \( d \) and \( c \) to \( b \) while the exterior bel adjacency connects \( a \) to \( b \) and \( c \) to \( d \). This choice has to be made along each possible pair of directions when going \( n \)D. In DGtal, it is encoded through the class `SurfelAdjacency`. Images (b,g) in Figure 4 illustrate such possible choices for a given surfel in 2D and 3D. More precisely the green (resp. red) surfel associated to the choice of exterior (resp. interior) adjacency is obtained as follows:
```cpp
SurfelAdjacency<KSpace::dimension> sAdjInt( true );  // interior adjacency
SurfelAdjacency<KSpace::dimension> sAdjExt( false ); // exterior adjacency
// surfel is a given boundary surfel.
SurfelNeighborhood<KSpace> sNeighInt;
SurfelNeighborhood<KSpace> sNeighExt;
sNeighInt.init( &ks, &sAdjInt, surfel );
sNeighExt.init( &ks, &sAdjExt, surfel );
// Axis along which we search for neighbors.
Dimension i = *(ks.sDirs(surfel));
SCell surfelFollowerInt;
SCell surfelFollowerExt;
sNeighInt.getAdjacentOnDigitalSet( surfelFollowerInt, aSet, i, true );
sNeighExt.getAdjacentOnDigitalSet( surfelFollowerExt, aSet, i, true );
```
A more “classical topology” way of interpreting adjacencies is to consider an \( \epsilon \)-offset to the set of cells (see Figure 5). In 2D, an outward \( \epsilon \)-offset to the boundary cells (or equivalently, the boundary of the Minkowski sum of the set of spels and an \( \epsilon \)-ball) defines a 1-dimensional surface whose topology corresponds to the exterior adjacency. Similarly, an inward \( \epsilon \)-offset to the boundary cells defines a 1-dimensional surface whose topology corresponds to the interior adjacency. Unfortunately, this is no longer true in 3D. Indeed, an outward \( \epsilon \)-offset in the great diagonal configuration (two diagonally opposite spels in a \( 2 \times 2 \times 2 \) cube) connects the surface cells, but they are not connected by the digital exterior adjacencies. However, the inward \( \epsilon \)-offset to the set of cells still defines the interior adjacency.
### 2.2 Tracking Digital Surfaces
Once the surfels separating interior spels from exterior spels have been extracted by Algorithm 1, the final step of tracking can be applied. We propose Algorithm 2 which is based on the more generic method `getAdjacentOnPointPredicate`, which only requires a predicate on a digital point. A digital set \( S \) is then simply defined as the predicate returning true on points belonging to \( S \). This
algorithm is independent of the chosen dimension. This method can be used indifferently for an open or closed surface (with or without boundary). If we know that the surface is closed, the faster variant \texttt{trackClosedBoundary} performs the scan by following only direct adjacent surfels.
For the specific aim of extracting an ordered sequence of boundary elements in 2D, we use a variant of Algorithm 2, \texttt{track2DBoundary} which constructs the ordered sequence from one direction and takes into account the case of open contour by reversing the sequence and by continuing in the other direction.
### 2.3 Overall Description
The main steps of the surface extraction algorithm are the following:
1. \( B \leftarrow \) detect boundary surfels (Algorithm 1, method \texttt{Surfaces::detectBoundarySurfels} in DGtal framework)
2. pick a surfel in \( B \)
Algorithm 2: Tracking a boundary component in nD.
(method DGtal::Surfaces::trackBoundary in the DGtal framework)
\begin{verbatim}
input: KhalimskySpaceND K; // Any Khalimsky space
input: SurfelAdjacency surfelAdj; // A surfel adjacency
input: Surfel startS; // the surfel where the tracking is initiated
input: PointPredicate Pred; // A predicate Point -> bool
output: SurfelSet surface; // The boundary component that contains startS
1 Surfel b; // the current surfel
2 Surfel bn; // the neighboring surfel
3 SurfelNeighborhood SN; // An object that extracts neighboring surfels, initialized from K, surfelAdj and startS,
4 Queue<Surfel> Q; // Queue of surfels, the ‘‘head’’ of the tracking.
5 Q.push( startS );
6 surface.insert( startS );
7 while Q is not empty do
8 b = Q.front();
9 Q.pop();
10 SN.setCurrentSurfel( b ); // Position neighborhood around b
11 for All tracking directions trackDir around b do
12 // Over a surfel there are n−1 possible axis tracking directions.
13 // 1st pass with positive orientation
14 if SN.getAdjacentOnPointPredicate( Out bn, Pred, trackDir, true ) then
15 if surface.find( bn ) == surface.end() then
16 surface.insert( bn );
17 Q.push( bn );
18 // 2nd pass with negative orientation
19 if SN.getAdjacentOnPointPredicate( Out bn, Pred, trackDir, false ) then
20 if surface.find( bn ) == surface.end() then
21 surface.insert( bn );
22 Q.push( bn );
23 return surface;
\end{verbatim}
3. S ← track the boundary component that contains B (Algorithm 2, method Surfaces::trackBoundary in DGtal framework)
4. remove S from B (lines 7-9 of Algorithm 3).
5. go back to 2 until B is empty.
These steps are gathered in Algorithm 3, which extracts for each connected component its set of surfels. In the DGtal library the algorithm is given in the method extractAllConnectedSCell included in class Surfaces (file DGtal/topology/helpers/Surfaces.h). This algorithm can process digital objects given in nD, and the resulting sets of surfels are not given in a particular order. To obtain specifically a set of 2D contours (which can be represented as sequences), we just adapt the algorithm by modifying the tracking step of Algorithm 3 with the specific 2D tracking (variant
of Algorithm 2, method `Surfaces::track2DBoundary` in the DGtal library). Such variant of Algorithm 3 is available in DGtal with the method `DGtal::Surfaces::extractAll2DSCellContours`.
**Algorithm 3**: Given a 2D predicate describing a 2D digital shape implicitly, extracts all boundary components as a vector of 2D contours. Each 2D contour is a sequence of surfels. (method `DGtal::Surfaces::extractAllConnectedSCell` in the DGtal framework)
```
input : KhalimskySpaceND K ; // Any Khalimsky space
input : SurfelAdjacency surfelAdj ; // A surfel adjacency
input : PointPredicate Pred ; // A predicate Point -> bool
output: vector< vector<Surfel> > allBoundaries ; // A vector containing all the vectors of connected surfels
1 wholeBoundary ← detectBoundarySurfels( K, Pred, K.lowerBound(), K.upperBound() ) ; // Call to Algorithm 1
2 while wholeBoundary is not empty do
3   vector<Surfel> aVector ; // initialize a vector of surfels
4   Surfel b = first element of the set wholeBoundary ;
5   aVector ← track2DBoundary( K, surfelAdj, b, Pred ) ; // Call to Algorithm 2
6   allBoundaries.push_back( aVector ) ;
7   // removing cells from boundary
8   for int i = 0 to aVector.size() - 1 do
9     wholeBoundary.erase( aVector[i] ) ;
10 return allBoundaries ;
```
3 Examples of Applications
We present two applications of these algorithms, first for 2D contour extraction, and second for 3D connected components extraction.
### 3.1 Contour Extraction on 2D Grayscale Images
The algorithm of contour extraction was applied on the grayscale images of Figure 6 to extract level-set contours. The two adjacency definitions were tested (images (b) and (c)), and several thresholds and filters were used to extract the digital contours in (d,e). Figure 7 presents complementary results to measure the robustness to noise. These results were obtained from the console command `pgm2freeman` (from directory `FrechetAndConnectedCompDemo/demoIPOL_ExtrConnectedReg`). Note that the ordered set of surfels was transformed into a set of pointels by calling method `extractAllPointContours4C`.
The obtained contours are represented with a Freeman chaincode: it is a word whose letters are codes defining the direction when going from a point to the next (0 is right, 1 up, 2 left, 3 down). Such a contour is illustrated in the previous floating figure with the starting point A and with the chain code 101212323300. All these experiments can be reproduced from the demonstration source by using the command lines that follow (from the build directory).
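As a small illustration (a standalone helper, not part of DGtal), the chaincode can be decoded back into grid points as follows:

```cpp
#include <string>
#include <utility>
#include <vector>

// Decode a Freeman chaincode (0 = right, 1 = up, 2 = left, 3 = down) into the
// sequence of visited points, starting from a given point.
std::vector<std::pair<int,int>> decodeFreeman(std::pair<int,int> start,
                                              const std::string& code) {
  static const int dx[4] = { 1, 0, -1, 0 };
  static const int dy[4] = { 0, 1, 0, -1 };
  std::vector<std::pair<int,int>> pts{ start };
  for (char c : code) {
    int d = c - '0';
    pts.push_back({ pts.back().first + dx[d], pts.back().second + dy[d] });
  }
  return pts;
}
// Example: decodeFreeman({0, 0}, "101212323300") traces the closed contour
// mentioned above and ends back at the starting point.
```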
Figure 6: Visualization of the level set contours extracted with the proposed algorithm (given from \texttt{pgm2freeman} tool). Images (a-c) illustrate the setting of the interior or exterior adjacency parameter. Image (e) shows the extraction of contours from the Lena image with a threshold step equal to 20 and with minimal contour size equal to 20. Image (f) shows a filtering of such contours from a reference point and a minimal distance (respectively equal to (150,150) and 50).
- **Extraction of contours with different adjacency definitions.** (results given on Figure 6 (a-c) with specific colors to highlight each contour.)
The set of contours can be displayed with the source image in background:
Figure 7: Level set contours extraction on gradient image with (c,d) and without noise (a,b).
- Extraction of contours given by a threshold range. Extraction of all digital contours from 0 to 256 by step of 20 gray levels with a minimal size equal to 20 (result given in Figure 6 (d,e)):
```
./demoIPOL_ExtrConnectedReg/pgm2freeman -image ../Images/lena.pgm -thresholdRange 0 20 128 -min_size 20 > lenaContourSet.fc
```
The set of contours can be displayed with the source image in background:
```
./demoIPOL_ExtrConnectedReg/displayContours -fc lenaContourSet.fc -outputFIG lenaContourSet.fig -backgroundImageXFIG ../Images/lenaBG.gif 256 256
demoIPOL_ExtrConnectedReg/images/lenaBG.png
```
- Selection of a set contours from a reference point. (result given in Figure 6 (d,f))
```
./demoIPOL_ExtrConnectedReg/pgm2freeman -image ../Images/lena.pgm -thresholdRange 0 20 128 -min_size 20 --contourSelect 150 150 50 > lenaContourSetSelected.fc
```
```
./demoIPOL_ExtrConnectedReg/displayContours -fc lenaContourSetSelected.fc -outputFIG lenaContourSetSelect.fig -backgroundImageXFIG ../Images/lenaBG.gif 256 256
```
The experiments of Figure 7 are obtained with the same command lines (with images circularGradient.png and circularGradientNoise.png).
### 3.2 Surface Tracking on Digital 3D Objects
We also illustrate the surface extraction for digital 3D objects. First, a simple set of 3D voxels was generated in Figure 8 to display the connected components obtained with different values of adjacency. Each color represents one connected component. As expected, the exterior adjacency connects components which were disconnected with the interior adjacency. Figure 9 shows another surface extraction (with interior adjacency) from a set of voxels obtained by thresholding image (a).
These experiments can be obtained from the demonstration source with the following command, which extracts a set of surfels with interior adjacency and exports it in a 3D mesh representation (.off format):
Figure 8: Illustration of surface tracking on a digital 3D object. Each color represents a connected component. For image (b) (resp. (c)) the connected components are obtained with interior (resp. exterior) adjacency.
Figure 9: Illustration of connected surfel extraction (b) from a set of voxel (a). Each color represents a connected component defined according to the interior adjacency.
The threshold can also be set to particular values as for the lobster 3D image of Figure 9:
4 Minimal Code to Include the Contour Extraction in a C++ Program
To extract the contour directly in your own C++ program, you just have to use the following code:
- Include the following header files:
```cpp
// To use image and import:
#include "DGtal/images/ImageContainerBySTLVector.h"
#include "DGtal/io/readers/PNMReader.h"
// To extract connected components
#include "DGtal/topology/helpers/Surfaces.h"
```
- Import an image:
```cpp
typedef DGtal::ImageContainerBySTLVector< Z2i::Domain, unsigned char > Image;
Image image = PNMReader< Image >::importPGM( "circleR10.pgm" );
```
- Define the threshold with Binarizer object:
```cpp
typedef IntervalThresholder< Image::Value > Binarizer;
Binarizer b(0, 128);
PointFunctorPredicate< Image, Binarizer > predicate(image, b);
```
- Extract all contours:
```cpp
Z2i::KSpace ks; // Khalimsky space in 2D
ks.init( image.domain().lowerBound(), image.domain().upperBound(), true );
SurfelAdjacency<2> sAdj( true ); // Interior adjacency in 2D
std::vector< std::vector< Z2i::Point > > contours;
// extraction of all the contours
Surfaces< Z2i::KSpace >::extractAllPointContours4C( contours, ks, predicate, sAdj );
```
The extraction in 3D images can also be processed in a similar way:
- Include the supplementary header files for the 3D images:
```cpp
// To import volume images:
#include "DGtal/io/readers/VolReader.h"
// To export the display
#include "DGtal/io/Display3D.h"
```
- Import a 3D volume image and define a binarizer:
```cpp
typedef ImageSelector< Z3i::Domain, int >::Type Image;
Image image = VolReader< Image >::importVol( "sample3D1.vol" );
typedef IntervalThresholder< Image::Value > Binarizer;
Binarizer b( 1, 255 );
PointFunctorPredicate< Image, Binarizer > predicate( image, b );
```
- Extract all the sets of connected surfels:
```cpp
// We just have to update the dimension to 3:
Z3i::KSpace ks;
ks.init( image.domain().lowerBound(), image.domain().upperBound(), true );
SurfelAdjacency<3> sAdj( true ); // Interior adjacency in 3D
std::vector< std::vector<Z3i::SCell> > vectConnectedSCell;
// Extraction of the sets of connected surfels:
Surfaces< Z3i::KSpace >::extractAllConnectedSCell( vectConnectedSCell, ks, sAdj, predicate, false );
```
### Image Credits
All images created by the authors except:
- VolVis distribution of SUNY Stony Brook, NY, USA.
- Standard test image.
### References
7[http://labs.cs.sunysb.edu/labs/vislab/volvis-gallery](http://labs.cs.sunysb.edu/labs/vislab/volvis-gallery)
Prolog Server Faces –
A Declarative Framework for Dynamic Web Pages
Christian Schneiker\textsuperscript{1}, Mohamed M. Khamis\textsuperscript{2}, and Dietmar Seipel\textsuperscript{1}
\textsuperscript{1} Department of Computer Science,
University of Würzburg, Am Hubland, D – 97074 Würzburg, Germany
\{christian.schneiker|dietmar.seipel\}@uni-wuerzburg.de
\textsuperscript{2} Department of Computer Science,
German University in Cairo, Egypt
mkhamis89@gmail.com
Abstract. With Prolog Server Faces, we provide a stateful and event-driven framework for dynamic web applications written in \textsc{prolog} and \textsc{xml}. Following the MVC concept, the view of web pages is fully specified in a compact \textsc{xml} definition with statements for processing backend logic in \textsc{prolog}. Our framework provides an extensive, and easy to extend, tag library for compact \textsc{xml}, which will be expanded to \textsc{xhtml} with \textsc{ajax} support, and an \textsc{http} server implementation for the backend logic processing. Moreover, it is possible to use existing \textsc{jsf}-\textsc{xml} files with \textsc{psf}.
## 1 Introduction
In recent years, web applications have been a breakthrough in client-server computing: they are \textit{cross-platform compatible}, since they operate through a web browser, they require \textit{very little disk space} on the client side, and they can be \textit{upgraded} and \textit{integrated} with other web procedures easily. A web application turns an \textsc{html} website from static pages that only display content into interactive, dynamically generated \textsc{html} web pages.
The \textsc{http} protocol is very simple and mostly \textsc{tcp}–based: the client sends a request, the server replies with a response. Both request and response are text-based messages, each message contains a header, and sometimes a body. The message exchange is done without saving or storing any information, making it a stateless protocol.
\textsc{http} excels at delivering static websites. For interactive web applications, however, it will be tedious and tiring to parse headers, understand them, then reply in another header following the required format. For this reason, the need of server pages arose: server-side code is stored on the web server; when a client requests a dynamic page, the request is parsed, the requested page is processed, and the server replies with an \textsc{html} page, that can be understood by the client; these are pages generated by the server-side code, which can change dynamically according to the web application’s needs. Several technologies have been introduced for server-side scripting, such as \textsc{asp}, \textsc{php}, and \textsc{jsp}.
Scripting languages are easy to learn, and they can be written in the same files with the \textsc{html}, with interleaving \textsc{html} and scripting code. However, this flexibility comes at the expense of a well-designed application, maintainability, security, and sometimes
it is slower, because the code is interpreted and not compiled. This is where frameworks come into the picture: they facilitate the development process, and they make the code neater and better organized.

**Figure 1.** A web dialog generated from a database table and foreign key constraints.
The aim of this paper is to use PROLOG for server side coding by implementing Prolog Server Faces, a server faces technology that uses XML-based tag libraries [3, 16]. The library elements are transformed into standard XHTML pages using the PROLOG library FNQUERY for querying and transforming XML documents and data containers [12, 13]. JavaScript methods using AJAX with PROLOG predicates are implemented. An HTTP web server is implemented using an HTTP library for SWI-PROLOG. PSF separates the front-end and back-end, making design patterns like Model-View-Controller (MVC) or Facade easily viable.
The paper is structured as follows: the next section gives a short overview of the known web frameworks for implementing stateful applications in Java, followed by the approaches of combining PROLOG with web applications. Section 3 describes our implemented Prolog Server Faces technology. It shows the transformations of XML elements to write short and reliable code. We will also discuss how to use AJAX for combining XML with nested PROLOG statements, as well as the integration with relational databases. Section 4 explains PSF with the help of a case study, that shows the easy implementation of applications according to the MVC concept. The last section gives a short conclusion and shows possible future work.
## 2 Related Work
The following section describes JavaServer Faces, a Java EE web framework based on *servlets* and JSP technology, for developing dynamic web pages with object-oriented backend engines [6, 7, 15]. We will also talk about Prolog Server Pages, an approach that combines HTML with PROLOG for web-based scripting, and explain two different techniques which have been developed over the last years [5, 14]. The combination of functional logic programming and web interfaces has been discussed in [4].
In general, with Server Faces it is possible to separate the view of web applications from their model and controller, according to the MVC concept. Server Pages, in contrast, just allows the developer to implement the controller within the HTML code, as with common web scripting languages like PHP.
### 2.1 JavaServer Faces (JSF)
As a framework for server-side user interface components, Sun Microsystems and other companies initially released JavaServer Faces in 2004. JavaServer Faces use XML for implementing the view of web pages according to the MVC concept. In contrast to static HTML pages or JSP, JSF provides stateful web applications, page templating or even AJAX support and gives the ability to develop server applications within the object-oriented programming language Java.
JSF allows processing client-generated events to alter the states of components, making applications event-oriented. It includes backing beans, which synchronize Java objects to user interface components. Unlike desktop programs, web-based applications are expected to be accessed from different client types, such as desktop browsers, cell phones, and PDAs. JSF provides a flexible architecture allowing it to display components in altered ways, and it also offers many validation techniques.
As a server-side technology, all pages requested by the client are preprocessed by the server. Via HTTP, every requested XML document is transformed to standard XHTML, and nested calls to Java objects, which are specified in an expression language, are processed. The following example shows a JSF-XML element, which is transformed to standard XHTML. The selectOneMenu element has an additional attribute value with a Java expression for setting the right value, which is read from a data container, a Java Bean.
```xml
<h:selectOneMenu id="selectCar" value="#{carBean.currentCar}">
<f:selectItems value="#{carBean.carList}" />
</h:selectOneMenu>
```
In this example, a list of cars is read from the Bean and according to the values, a set of option elements is generated. The selectOneMenu element is transformed to a normal XHTML select element, and necessary attributes like name and id are added. The resulting valid XHTML page is transferred to the client and rendered by a browser.
```xml
<select id="selectCar">
<option value="corolla">Corolla</option> ...
</select>
```
The framework uses standard Java classes to transform the documents with common Java component tree operations. Even though working with object-oriented programming languages and XML tree operations is hard to read and to debug, this makes it possible to extend the core libraries for the transformations (core taglib) by writing new classes and adding them to the library.
### 2.2 Prolog Server Pages (PSP)
For combining the features of logic programming and web based applications, some approaches have been developed over the last years. These PROLOG and HTML scripting techniques allow the implementation of dynamic web pages; nevertheless it is not possible to separate the program logic from the user interface and it makes the code hard to read. In this paper we want to discuss two major implementations of Prolog Server Pages, a technique very similar to JavaServer Pages, which allows inline PROLOG scripting in HTML documents.
**PSP Chunk Programming.** In the first PROLOG implementation of Server Pages, a programmer has to write HTML elements within the PROLOG source code files while chunks of PROLOG code are encapsulated like `<?psp Chunk ?>` [14]. A chunk can consist of a sequence of PROLOG rules followed by a sequence of PROLOG directives issuing PROLOG goals. This implementation forces the developer to call write predicates whenever something is desired as an output, even the HTML elements. Standard PROLOG goals are just interpreted as usual. Additionally, the querying is mixed with the declaration; this makes design and maintainability harder, and the developer will have to duplicate the code if a PSP chunk needs a predicate defined in a previous chunk.
The PSP server passes HTML tags to standard output and interprets PROLOG code within the PSP elements in the standard way of PROLOG. In the following Hello World example, the predicate `greeting_message` defines the string ‘Hello World!’ and writes it to standard output.
```
<html> <body>
<?psp
greeting_message('Hello World!').
?- greeting_message(X), write(X). ?>
</body> </html>
```
The result is a standard HTML document.
```
<html><body>Hello World!</body></html>
```
**PSP with General Server Pages.** In the second implementation by Benjamin Johnston, the aim of the Prolog Server Pages web-based scripting language was to implement dynamic web applications using PROLOG, avoiding manual parsing of common HTML elements within the source code [5].
The syntax for combining HTML and PROLOG scripting elements in this implementation is comparable to that of PHP, ASP, and JSP, using tags like `<?` and `?>`, which is standardized in the General Server Pages approach. This allows having HTML and PROLOG code in the same file; however, this can lead to design problems, especially if the developer wants to maintain a design pattern throughout the structure of the web application.
The following is another Prolog Server Pages example for Hello World. The predicate `greeting_noun` holds the string World, and it is defined outside of the HTML section. The beginning and the end of the HTML code have to be marked with `/*` and `*/`, respectively; this is necessary so that the PROLOG compiler treats the HTML block like a normal comment. Within this block, it is possible to write PROLOG goals within `<?, Goal, ?>`, which the server will execute. Here X is bound to 'World!'. The result can be written to standard output within `<?, Term, ?>` tags. If Term is bound, then its value is written; otherwise just the word Term is written to standard output. In this example, the string 'Hello ' is written, followed by 'World!' from the PSP code.
```prolog
greeting_noun('World!').
/*
<html> <body>
<?, greeting_noun(X), ?> Hello<?, X, ?>
</body> </html>
*/
```
The result generated by the PROLOG server is similar to the previous example.
## 2.3 FNQUERY and FNTRANSFORM
For the transformations in our framework, we extensively use the XML query, transformation and update language FNQUERY [12, 13], which is fully interleaved with SWI-PROLOG. Like with XPATH, it is possible to query complex structures with path expressions and axes. As an extension of XPATH, it is possible to select branches over deeply nested structures. The sublanguage FNTRANSFORM, which extends XSLT, allows XML elements to be transformed in PROLOG using normal syntax.
FNQUERY uses triples for representing XML documents. E.g., for the association list As = [color:red, model:civic] of attribute/value pairs, cars:As:Es represents an XML element with the tag cars; the content Es can be a (possibly empty) list of such triples.
The path language FNPATH of FNQUERY is very similar to XPATH. Compound terms with the functor / are used for selecting subelements of an element. The functor @ is used for selecting attribute values. E.g., the binary predicate := in the call
```prolog
?- M := doc(cars.xml)/car@model.
```
selects the value for the attribute model from the element car in the XML document cars.xml below and binds the result to M.
```xml
<cars>
<car id="corolla" model="Corolla" />
<car id="civic" model="Civic" />
<car id="city" model="City" />
</cars>
```
It is even possible to query with multiple location paths. The following expression selects the attributes id and model and forms pairs [Id, M] of the results:
```prolog
?- Pair := doc(cars.xml)/car-[@id, @model].
```
The library FNTRANSFORM is used for implementing transformations. In Section 3.1, we will use calls X ---> Y for transforming FN triples X to other FN triples Y.
3 The Framework Prolog Server Faces (PSF)
PSF is a stateful and event-driven framework that integrates logic programming into modern web applications. We combine the PSP approaches described in Section 2.2, which mix common PROLOG with XHTML to develop dynamic web pages, with the advantages of JSF for writing condensed XML. This XML is expanded to normal XHTML, with connections to XML documents and relational databases for data handling. We provide an application programming interface that combines an extended HTTP server implemented in SWI-PROLOG with a large, easily extensible tag library for defining web pages in a compact XML structure. For the transformations of XML elements, we use FNTRANSFORM.
3.1 Standard PSF Transformations
Like in JSF, nearly every XHTML element can be written in a compact form with additional attribute values, which read the data from complex data structures like term structures, XML documents, or even relational databases. In our PSF framework, we have implemented the core tag library, which consists of tags like HTML form, the different input element types, and of course radio buttons and select menus.
We want to exemplify the work with PSF-XML files with the following code of a single select menu, whose data are stored in an additional XML file. The PSF-XML page contains only two elements for defining the type of the select menu as well as an element with an FNPATH expression, which handles the data for the different option types, in this case the different car models.
```xml
<h:selectOneMenu id="selectCar">
<f:selectItems value="#{doc(cars.xml)/car-[@id, @model]}" />
</h:selectOneMenu>
```
The data can be read from either an XML document or from PROLOG data structures. The transformation itself is handled by FNTRANSFORM, which is integrated in our framework. When a client requests such a file, the server automatically transforms it to XHTML with one of its request handlers. The following code shows such a transformation from selectOneMenu to select elements:
```prolog
X ---> Y :-
   X = 'h:selectOneMenu':As_1:[Item],
   Y = select:As_2:Items,
   % attributes
   { Id := X@id ; Id = '' },
   As = [id:Id, name:Id, size:1],
   fn_association_lists_union(As, As_1, As_2),
   % subelements
   { Expression := Item@value ; Expression = '' },
   psf_evaluate_expression(Expression, Pairs),
   { foreach([V, M], Pairs), foreach(I, Items) do
        I = option:[value:V]:[M] }.
```
Firstly, the attribute list is extended by the attributes `id`, `name`, and `size`; if these were already present, then the old values are kept. The `id` is taken from `X`; if `X` does not have an `id`, then it is set to the empty string as a default value. Secondly, the `option` subelements are generated based on the path expression in the attribute `value` of `Item`. In our example, the list `Pairs` given by
`[[corolla, 'Corolla'], [civic, 'Civic'], [city, 'City']]`
is derived, since the path expression selects the attributes `id` and `model` of the car elements in the file `cars.xml`. Finally, each pair `[V, M]` yields an `option` subelement `I`. FNTRANSFORM works bottom-up, and there is no transformation rule for `selectItems` elements. Thus, these elements remain unchanged at first. However, depending on the context – in our case `selectOneMenu` – they are transformed to other elements.
The output of the transformation is valid XHTML code, which can be rendered by the browser to a select menu with the different option elements.
```xml
<select id="selectCar" name="selectCar" size="1">
  <option value="corolla">Corolla</option>
  <option value="civic">Civic</option>
  <option value="city">City</option>
</select>
```
### 3.2 Database Support
Web interfaces are often connected with a database. Therefore, we have extended the valid lists of PSF attributes to specify the additional `type` of elements; here, `type` is set to `dialog` to generate a user dialog automatically from the database structure. In such a case, an attribute `value` defines the database name we want to connect to and the tables needed for the dialog definition.
For a dialog like the one in Figure 1, the transformation generates a select menu with the names of the database tables specified in the attribute `value`. Each selection of one of these tables forces the PROLOG server to read the database schema and automatically generate a form element with different input types according to the schema; normal attributes result in a single text field for input, while foreign key constraints produce further select menus. For these foreign key select menus, the referenced tables are read and only valid values are put into the menu, so the user cannot enter invalid data. Of course, it is also possible to select further values from the referenced table, other than just the different foreign key values, and to display them in the menu to make the generated dialog more readable.
### 3.3 AJAX-Based User Interaction
To implement the controller – the backend logic of Prolog Server Faces – we need to handle user interactions from the web interface. In PSF, it is possible to use AJAX by calling PROLOG predicates from JavaScript. PSF comes with some predefined JavaScript functions, which can easily be included in the XML document with a common `script` element. The two main functions for combining native PROLOG with JavaScript and AJAX are `sendRequestPL(arg0, arg1, ..., argN)` for sending values from form elements to the server and `sendRequestXML(arg0, arg1, ..., argN)` for sending complete XML elements. The argument `arg0` is the PROLOG predicate to be called by the server. The subsequent arguments `arg1, ..., argN-1` are the parameters. The last argument `argN` specifies the id of the XML element that is refreshed with AJAX after the server sends the response. The second JavaScript function sends complete XML elements to the server; as with the first function, its first argument is the predicate to be called, while its last argument is the id of the XML element to be refreshed.
For transmitting the different parameters, we use special XML envelopes. E.g., a message of the type `send` is used for sending a predicate with its parameters:
```xml
<message type="send">
<predicate>...</predicate>
<parameter>...</parameter> ... <parameter>...</parameter>
</message>
```
The server processes the message and responds with a newly generated XHTML element, which the browser then uses to update the referenced element via a JavaScript `xhr` (XMLHttpRequest) callback.
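For illustration only, the following Python sketch shows how a client could assemble such a `send` envelope and post it over HTTP. The endpoint URL, the helper name `send_request_pl`, and the transport details are assumptions made for this sketch; PSF itself ships the corresponding functions as JavaScript running in the browser.

```python
# Sketch of a client for the PSF "send" message envelope (endpoint URL is hypothetical).
import urllib.request
from xml.sax.saxutils import escape

def send_request_pl(url, predicate, *parameters):
    # Build the <message type="send"> envelope described above.
    params = "".join(f"<parameter>{escape(str(p))}</parameter>" for p in parameters)
    envelope = (f'<message type="send">'
                f'<predicate>{escape(predicate)}</predicate>{params}</message>')
    request = urllib.request.Request(url, data=envelope.encode("utf-8"),
                                     headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")  # XHTML fragment generated by the server

# Example call (assumed URL):
# fragment = send_request_pl("http://localhost:8080/psf", "sudoku_hint", "cell_11")
```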
## 4 Using the MVC Concept for a Sudoku Solver
We have implemented a Sudoku solver based on PSF and two different open source implementations of the backend logic in PROLOG and CLP, respectively. Although the application can also be developed with JSF, with PSF it is possible to benefit from logic programming, and it is possible to change the backend logic during runtime.

**Figure 2.** A sudoku solver web application developed with Prolog Server Faces
According to the MVC concept, the implementation will be divided into three main parts. The model holds the default values of the different text fields of the user interface, and it is updated during each processing step for storing the entered values. On page load, the data container is loaded, and the initial values are compiled into the generated XHTML web page. Each element can be transformed using `fntransform` in PROLOG or even in PSF-XML elements.
The second part is the view: the graphical user interface of the application. It is a well-formed PSF-XML page with the regular elements; the body has few PSF elements, which will be expanded to XHTML during the transformation. We use different namespaces - like h and f - to distinguish them from the regular (X)HTML.
```xml
<h:form>
  <f:tableGrid columns="9" rows="9"
      value="#{doc(sudoku_data.xml)/cell@value}"
      ondblclick="sendRequestPL('sudoku_hint', this.id, this.id)"
      onclick="sendRequestXML('solve_sudoku', 'view', 'view')"
      size="1" />
</h:form>
```
This PSF code will generate a table grid with 81 text fields, like in Figure 2. The values are imported from the XML container mentioned above by providing an expression in the attribute value; if this is not desired, the attribute can simply be omitted. The XHTML document which is generated from the PSF-XML file consists of more than 400 lines of code; thus, PSF makes it possible to generate complex web pages from short and compact XML code.
As can be seen from the implementation, it is easy to use PROLOG to solve problems and puzzles, and with PSF it is possible to have a neat interface and even a full web application. At the same time, PSF preserves well-formed code that can be logically divided into Model, View and Controller layers, which makes maintenance much easier.
The usage of MVC proves more powerful than an extra layer for the solver, and the implementation of the solver can be changed with very little integration work. Since we have decided to use the JSF syntax, it is possible to use the same XML documents for JSF and PSF; only the AJAX calls to Java methods have to be changed.
5 Conclusions and Future Work
We have introduced Prolog Server Faces, a framework for stateful, event-driven web applications with AJAX support and PROLOG backend logic. Our concept is fully integrated into SWI-PROLOG, and it provides a large tag library for XML element transformations from PSF-XML to standard XHTML. The tag library can be easily extended to fit the developer's needs and to implement reliable and easy-to-read user interfaces. We have also introduced methods for combining the stateless XHTML pages with XML documents for storing data, or even for accessing internal PROLOG term structures or databases. Since PSF uses the same XML elements as JSF, it is possible to reuse already developed JSF interfaces and enhance them with the power of PROLOG and CLP.
PROLOG is a good choice when short and concise code is desired. PSF adds an interface for this powerful language: in addition to making it applicable on the Internet, PSF makes it possible to use PROLOG engines in web applications. Following the MVC concept and the separation of the view from the backend logic, the controller can be changed easily, even during the runtime of the web application.
In another project, we are combining the XUL framework [9] with SWI-PROLOG. A next step is to automatically convert user interfaces between these two technologies for providing a platform-independent framework for graphical applications in the field of logic programming. It is even possible to parse natural text and generate the PSF-XML structure for the web interface automatically [10]. Future work will consider adding further functionality to PSF, such as cookies and session management predicates, and even developing validation tools.
References
Abstract
Until now, reinforcement learning has mostly been implemented using high-level programming languages such as Java and C. Our goal was to implement reinforcement learning in a simple graphical environment. We achieved this by programming algorithms in the graphical programming environment, Scratch. We built an algorithm that averaged successful distances to calculate the ideal actions needed to capture a ball in a simple single-variable game. After using the algorithm, an agent computed ideal actions, and was eventually able to play the game perfectly.
We also built a reinforcement learner for the classic computer game, Copter, in which a keyboard controlled helicopter dodges scrolling blocks. Our algorithm considered future game states to compute the value of actions in particular game states. After significant tuning of relevant variables, the Copter learner played almost perfectly. Our research shows that reinforcement learning can be simply implemented and therefore has the potential to become commonplace in everyday life.
1 Introduction
Computer science has evolved over the last 50 years from a scientific field involving simple calculations computed by vacuum tube monstrosities, to a world-wide presence that controls infrastructure, provides entertainment, and drives scientific advancement. Reinforcement learning is still in its early stages, but has the potential to change the way the world works at a rate never before seen in history.
Reinforcement learning is a subset of machine learning in which a computer learns to solve a problem efficiently by interpreting information about its environment and its experiences. The goal of our research is to implement reinforcement learning in simple environments and use algorithms to make the agent quickly learn to solve problems as efficiently as possible. The two environments we chose for this project are a Harry Potter Quidditch game, and a helicopter game. Both of these games were written in a programming language called Scratch, which was developed by MIT for its simplicity and ease of use. The open source nature of all Scratch applications was also a major factor in our decision to use Scratch.
The first game we picked was based on the Harry Potter sport of Quidditch. In Quidditch, a player known as the seeker flies on a broomstick and tries to capture a ball called the Snitch. The Harry Potter Quidditch game, created by Michael Littman, was picked due to its lack of environmental variables (factors that change while playing the game) and the single control input that is required to play the game. This game consists of a seeker that scrolls across the screen in a single dimension, and a Snitch that scrolls in the same dimension at a slower speed. The goal is to press the space bar as the player approaches the Snitch in order to catch it.
The helicopter game, designed by a user named Bungle, is a Scratch version of the classic game, Copter. It is significantly more complicated than the Harry Potter game because it has two possible control inputs, and many factors that affect whether the player wins or loses. Helicopter is essentially a scrolling world where rectangular obstacles appear every few seconds and move towards the player's helicopter. The player must dodge each obstacle by pressing the up arrow to fly the helicopter higher, or by releasing the up arrow to allow the helicopter to drift lower. The player must also avoid flying too high or too low, so that the copter will not crash into the ceiling or the floor. The game continues to scroll indefinitely until the helicopter crashes, so performance can be measured by calculating the distance the helicopter travels through the world.
Our programs are unique because reinforcement learning has never, to our knowledge, been implemented in Scratch. This is important because Scratch offers a simplified programming environment for games and other visual applications. This means that reinforcement learning is more accessible than ever before, which will allow a new demographic to understand and develop it. Development of reinforcement learning could lead to advances in neuroscience, intelligent machines, and eventually a world where machines will be independent.
2 Background
2.1 Reinforcement Learning Agents
Reinforcement learning (RL) is a process in which a computer program called an agent interacts with its environment and compiles data based on its experience. This data allows it to solve the problem at hand. The problem in RL is defined by a numerical reward that the agent receives from its environment, as shown in Figure 2.1. The learner, known as the agent, attempts to maximize this reward by performing actions that will result in the greatest long-term reward. The function that assigns reward to the learner is preset to represent the goals of the agent. For example, a chess game may have a positive reward assigned to winning the game and taking the opponent's pieces, or similarly, a program designed to drive a car may give huge negative rewards for crashes [1]. The reward function essentially defines the agent's problem and goals.
Figure 2.1: This shows the agent/environment interactions. An agent receives a state and reward from the environment, performs an action on the environment, and receives the next state and reward from the environment.
To accomplish this goal and maximize accumulated reward, the agent must discover a policy that maximizes its long-term reward starting from the current state of the environment. In RL, a policy is defined as a set of actions to take based on the state of the environment. The policy may take the form of a table, a set of rules, or any other mathematical expression that assigns an optimal action to an environmental state. The environmental state, or simply, the state, is a collection of all the data that the reinforcement-learning agent extracts from the environment at a given moment. Thus, the state space refers to all of the variables in the environment that the agent receives as inputs (shown in Figure 2.1). States are important to policies because they define when a given action can and should be taken. These policies, which are often represented as probabilities of future reward, are constantly updated as the agent learns, and should eventually converge to an optimal policy which expresses the best possible set of actions to take given certain states.
The optimal policy is defined as the policy which represents the optimal value function. Value is a measure of long-term total reward, and the value function is an expression which represents this quantity. It is often expressed formally as
$$V^\pi(s) = E_\pi\!\left[ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \;\middle|\; s_t = s \right]$$
where the value of a state \( s \) under policy \( \pi \) is the expected discounted sum of future rewards, with \( t \) the current time step and \( k \) counting the steps into the future [1]. Basically, value is described by the sum of the rewards, each weighted by the discount factor \( \gamma \).
Essentially, the value function is the core of RL because an optimal value function represents a perfect learner. This means that using algebra and mathematics to solve a value function will accomplish the same goal as an RL algorithm. This equivalence between RL and mathematics is key in deciding which problems are most efficiently solved by RL. For example, the sport of archery can be modeled at a basic level using physics to relate the archer’s bow angle, the target distance, the target height, and the wind speed. If we wanted to create an optimal archer, it would be much more efficient to program it with physics equations, instead of an RL algorithm that learns these relationships on its own. Therefore, archery is a problem where RL methods would not be an ideal solution. In contrast, a game such as backgammon has about \( 10^{20} \) different possible board configurations, and would require solving what are known as the Bellman optimality equations for an incredible number of variables. This would take years even on today's fastest computers, and is clearly not feasible [1]. Backgammon, however, is relatively feasible in an RL situation because it has discrete states and actions.
Q-learning is an RL algorithm based largely on the value function. Specifically, Q-learning is based on what are known as action-values, which are represented by the variable Q. State action-values simply refer to the value of an action “a” given a state “s”. The Q-learning algorithm is:
$$Q[s, a] := (1 - \alpha)Q[s, a] + \alpha \left( r + \gamma \max_{a'} Q[s', a'] \right)$$
where \( \alpha \) is the learning rate, \( \gamma \) is the discount factor, and \( \max_{a'} \) refers to the maximized Q-value over the possible next actions [2]. \( \alpha \) controls how quickly a new experience affects Q values by weighting a known experience and a new experience inversely. \( \gamma \) is important because it controls how much the agent considers future rewards. If \( \gamma \) is high, then the agent will become foresighted, weighing the consequences of future states heavily in its decisions. If \( \gamma \) is low, the agent will be shortsighted, only considering very immediate rewards. This is one of the key factors of Q-learning, and it allows an agent to make decisions based on the probabilities of encountering good and bad states, not only in the next state, but also further into the future.
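For illustration, a minimal Python sketch of this update rule is given below; the dictionary-based Q-table and the state/action encoding are assumptions made for the example, and the \( \alpha \) and \( \gamma \) values are the ones we later use for Copter.

```python
# Minimal Q-learning update; states and actions are illustrative placeholders.
from collections import defaultdict

alpha = 0.4   # learning rate
gamma = 0.8   # discount factor
ACTIONS = ("up", "down")
Q = defaultdict(float)          # Q[(state, action)] -> estimated long-term value

def q_update(s, a, r, s_next):
    # Q[s,a] := (1 - alpha)*Q[s,a] + alpha*(r + gamma * max_a' Q[s',a'])
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)
```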
Greediness is another key factor in an RL agent. Naturally, in order to learn, an RL agent must spend a portion of its time exploring all possible actions so that the best ones can be discovered. At the same time, optimal decisions should be made as often as possible, so that the agent can maximize its reward. This is known as the explore/exploit dilemma, and is an important consideration in any reinforcement learner. Often this dilemma is solved by forcing the agent to take a random action a certain percentage of the time. This percentage is slowly tapered off as the agent converges to an optimal policy. Some situations, like our Snitch game, can be infinitely greedy because future states are not impacted by current actions. This is because capturing or missing the Snitch has no effect on the Snitch or the seeker’s positions, which are the only indicators of state. In this case, a random exploration period was implemented in the beginning, and was completely shut off once five Snitches were caught. The helicopter game required a clever workaround to the standard explore/exploit ratios, because an unnecessary crash could not be tolerated once a good policy was achieved.
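A common way to encode this trade-off is an epsilon-greedy rule whose exploration rate is tapered off over time, as in the following Python sketch; the decay schedule and bounds are illustrative and differ from the game-specific workarounds described above.

```python
import random

def choose_action(Q, s, actions, epsilon):
    # Explore with probability epsilon, otherwise exploit the best-known action.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

epsilon = 1.0
for step in range(10000):
    # ... interact with the environment using choose_action(Q, s, ACTIONS, epsilon) ...
    epsilon = max(0.01, epsilon * 0.999)   # taper exploration off over time
```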
2.2 Differences in RL Problems
In addition to all of the factors that make up agents, there are also many aspects to RL problems. There are two essential categories of RL problems: Markov decision processes and non-Markov decision processes. Markov decision processes (MDPs) consist of states that all have the Markov property. The Markov property means that each state contains all of the information necessary to make an optimal decision. Chess is an example of an MDP because each board state contains all the information the player needs to make the best move possible. This ignores the other player’s behavior and tendencies, which are non-Markov. An example of a non-MDP is the card game Go Fish. In Go Fish you draw cards from either the deck or another player in order to make pairs with all the cards in your hand. The current state space contains the cards in your hand and any pairs that players have placed down. Because not all of the information needed to make an optimal decision is in the current state space, Go Fish is not considered an MDP. RL is most often implemented in MDPs and near-MDPs because it is much simpler to write algorithms for them since they only need to take current and future states into account. Some algorithms (most notably model-building algorithms) can handle non-MDP reinforcement learning, but this paper deals solely with MDPs.
MDPs can also be categorized by other distinguishing criteria. All RL problems are either episodic tasks or continuing tasks. Episodic tasks have terminal states, where they stop and reset to a starting configuration. These tasks usually do not implement discounted reward because the reward must be maximized over the discrete period of time present in each episode of the task. Sometimes reward is assigned at the end of each task only, and the reward is therefore assigned to the total set of decisions that were made within the episode. An example of an episodic task is Tic Tac Toe. An agent plays through each game individually, and is assigned reward based on whether it won, lost, or tied. The set of experience over the course of many games of Tic Tac Toe is used to find an optimal policy [1].
Continuing tasks are the opposite of episodic tasks. They consist of a single episode that continues to play out infinitely, or until it is interrupted. This means that rewards within the episode must be discounted so that more imminent rewards are weighted higher in the decision making process than rewards in the more distant future. The act of balancing a pole is an example of a continuing task given by Sutton and Barto. Balancing a pole can go on indefinitely, and requires future states to be discounted so that the immediate next state is considered with the most weight. A pole balancing agent should either be assigned positive reward for not dropping the pole over a given interval of time, or a negative reward for dropping the pole. Either way, the maximum reward comes from balancing the pole indefinitely [1]. Both the Snitch and Copter games researched in this paper are examples of continuing tasks.
Another small differentiating factor in RL problems is whether state transitions are stochastic or deterministic. Stochastic state transitions have a probability that a certain next state will occur, given a current state and action. This makes for a significantly more complicated game than one with deterministic (fixed) state transitions, where an action in a particular current state leads to only one next state. Solving such games mathematically can be highly complex, but they become much simpler to handle with RL algorithms, which is why they are often studied in RL.
2.3 Brief History of Reinforcement Learning
RL as we know it today started in the 1980s through the union of two main ideas: learning by trial and error, and optimal control. Learning by trial and error has its roots in the study of psychology and animal learning, while optimal control concerns the use of value functions and dynamic programming in order to define long-term learning. Trial-and-error learning in psychology led to the development of the Law of Effect, which involves selecting actions, comparing their consequences, and associating them with particular situations. RL uses the need for greater reward or satisfaction to foster favorable actions and reject other undesirable ones. Optimal control problems were solved by dynamic programming, using the Bellman optimality equation, which was created during the 1950s. This equation defines the policy that returns the most reward. Dynamic programming is more efficient than most other methods, which accounts for its widespread use, although its computational time grows exponentially with the number of states. RL solves problems related to optimal control problems, especially MDPs [1].
3 Method/Design Decisions
Feasibility was a major factor in the decision to implement RL in the Harry Potter and Copter games. Some initial ideas regarding applications of RL included games like foosball, Guitar Hero, and minesweeper. However, after some thought, we realized that the best option given the project time constraints was to pursue an algorithm that would teach an agent without the use of complex hardware or environments. Ideally, this environment would be pre-designed so that most of our efforts could be focused on programming the agent, instead of designing a proper game.
3.1 Environment Decisions
Foosball, in particular, was rejected due to the complexity of building a robot that could not only make contact with the ball, but also see and recognize ball movements accurately. Also, the number of states in foosball is enormous, and state transitions follow very complex patterns based on subtle geometry and physics that would not be picked up by a simple RL agent.
Guitar Hero would have been reasonably simple to implement, due to pre-existing work that has been done with Guitar Hero playing robots. A Guitar Hero robot called DeepNote can play Guitar Hero perfectly by reading the television screen for notes using five photodiodes (discrete electronic components that sense the presence of light) [4].
Using photodiodes would be perfect for RL because they output discrete digital signals which would define a simple state space. The game also outputs a sound, which could serve as an input to the agent as feedback for when it makes a mistake. The reason Guitar Hero was not chosen as an environment for RL is that Guitar Hero is not a typical RL problem. It can be played by simply adding a latency constant to the photodiode outputs in order to hit notes on time. Therefore, RL adds a substantial amount of calculation and coding that is not necessary in a game as straightforward as Guitar Hero.
Minesweeper was not used as an environment because it is a partially observable MDP and also a complex problem in theoretical computer science [3]. A partially observable MDP is an environment that contains only Markov states, but the entirety of each state cannot be read by the agent. This makes designing an RL algorithm highly complex, and beyond the scope of our project. Designing a minesweeper agent has been proven possible by researchers from U.C. Berkeley, but it is highly complex, and therefore doesn’t fit within the goals and constraints of our work [3].
Finally, we considered the possibility of having multiple agents. One example would be to have multiple exploring agents report back to a main robot which would accomplish the given task. This hierarchical structure’s implementation was discussed in the context of a Warcraft-like game where a main agent, that simulates the human player, controls lesser agents, which would perform simple functions such as gathering wood, gold, and building structures. To control the subordinate agents, the main agent would manipulate reward functions in order to assign tasks. The problem with multiple hierarchical agents in the Warcraft-like game was that they would require dynamic reward functions. This would have confused the lesser agents because every time a reward function got changed, the policies would no longer describe an optimal value function. This would cause the agent to behave irrationally and the agent would need to relearn its task.
Besides the time and complexity constraints, programming languages also played an important role in deciding which problems were feasible. For the Warcraft-like game, Java and Matlab were considered, as they both provided an easy way to create two-dimensional arrays. The simplicity of array functions in Matlab was an integral attribute that allowed its simple implementation. Because the Warcraft-like game needed dynamic rewards, the idea was abandoned. In addition, our lack of experience with Java and the relatively short time constraint made it difficult to implement a successful algorithm. After some research and exploration, we found Scratch, which proved to be an easily learned interface that would still allow us to work with complex visual environments and implement different types of agents.
3.2 Scratch Games
After a cursory search for Scratch games online, we arrived at a game for which we could easily implement a simple algorithm.
The Harry Potter Snitch game formed a good foundation for learning to program an agent in Scratch. The only attribute of the state that was reported to the agent was its distance away from the Snitch. This made for a direct correlation between the state value and the optimal action. Computing a value function or using traditional algorithms such as Q-learning was not necessary for such a simple problem. We solved the Snitch game by computing an optimal policy through the running average of successful Snitch captures. The agent would first attempt random catches in an “explore” mode until it made five successful catches. It would then use the average state value of all the successful Snitch captures as the only state in which to attempt to capture the Snitch. This allowed the agent to capture as many Snitches as possible.
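The policy described above fits in a few lines. The following Python sketch is only an illustration of the idea, not the Scratch implementation; the `try_catch` hook, the exploration probability, and the catching tolerance are assumptions.

```python
import random

successful_distances = []      # seeker-to-Snitch distances at which a catch worked
EXPLORE_QUOTA = 5              # stop exploring after five successful catches

def act(distance_to_snitch, try_catch):
    # try_catch() is a hypothetical hook: it presses space and returns True on a catch.
    if len(successful_distances) < EXPLORE_QUOTA:
        # Explore: attempt catches at random moments and remember the ones that work.
        if random.random() < 0.05 and try_catch():
            successful_distances.append(distance_to_snitch)
    else:
        # Exploit: only attempt a catch near the learned average catching distance.
        target = sum(successful_distances) / len(successful_distances)
        if abs(distance_to_snitch - target) < 1:   # tolerance is illustrative
            try_catch()
```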
At this point, we wondered if indeed we could implement a more complex algorithm in a game using Scratch. We searched for a game with a more dynamic environment and a vast number of possible states. We also wanted to make sure that the agent was required to take more than one action. A popular game that satisfies these constraints is Copter, a famous Flash-based game recently adopted for Scratch. In this game, a helicopter navigates a cave where there are several rectangular obstacles hindering its path. When the helicopter comes in contact with an obstacle or hits the floor or ceiling of the cave, it crashes and the game is ended.
In order to make the actual game “agent-friendly”, we flattened the floor and ceiling of the environment to simplify the game, and defined quantitative states. Initially, we identified the parameters that an agent would find useful for determining its state in the game. We called these parameters relative distance x, relative distance y, close to top, and close to bottom. The relative distance parameters defined the state of the environment with a 3x3 grid. Relative distance x told if the helicopter was far, close, or very close to the wall in the x direction. Relative distance y told whether the helicopter was above, in line with, or under the wall in the y direction. The “close to” parameters told the agent if the helicopter was about to hit the floor or ceiling of the cave.
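For instance, the discretization of the horizontal distance into the three levels of relative distance x can be sketched as follows; the pixel thresholds are illustrative placeholders.

```python
def rel_dist_x(pixels_to_wall):
    # Map the raw horizontal distance to three coarse levels: 2 = far, 1 = close, 0 = very close.
    if pixels_to_wall > 420:      # threshold values are illustrative
        return 2
    if pixels_to_wall > 150:
        return 1
    return 0
```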
Originally, we designed the reward function for Copter so that the helicopter would receive a reward of -10 for crashing and zero reward for surviving. This function, however, did not yield good results. The helicopter had a tendency to stay very close to the floor, and frequently crashed there as it tried to avoid the walls. Because of this, we decided to apply a larger negative reward (-50 as opposed to the usual -10) for crashing into the floor or ceiling. In order to allow the helicopter to run autonomously for hours, the game also needed to be modified so that crashes did not reset the game. This was essential in allowing the agent to learn efficiently.
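The tuned reward function then amounts to a few lines; the collision flags below are assumed helpers.

```python
def reward(hit_wall, hit_floor_or_ceiling):
    # Heavier penalty for floor/ceiling crashes than for wall crashes; zero for surviving.
    if hit_floor_or_ceiling:
        return -50
    if hit_wall:
        return -10
    return 0
```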
The agent still did not learn to avoid walls and ceilings very effectively, regardless of the tuned reward function. We decided that we needed to re-define the states because some states turned out to be identical. For example, the parameter “relative distance y” did not specify the magnitude of the vertical distance the block was from the helicopter and thus distorted the values of certain states. We first defined three more levels for this parameter so that there would be six total levels of “relative distance y” which could now better approximate the vertical distance of the helicopter from the wall. We then abandoned that system in favor of an absolute coordinate system that would define the distance of the wall and helicopter objectively relative to the environment. Because on occasion the helicopter would be the same relative distance away from the wall but would be in a completely different position in the cave, the agent was led to several erroneous decisions that failed to take into account the proximity of the floor and ceiling. For the absolute coordinate system, we discarded “relative distance y” and used the new parameters “helicopter y” and “wall y.” These new absolute parameters would define the y positions of the helicopter and wall, with each parameter being broken down into three levels. We later found that three levels were still not enough to properly define the states and subsequently increased the number of levels for these parameters to six, and finally twelve.
In order to promote exploration, we had to devise a clever method of convincing the helicopter to explore new states without causing it to crash unnecessarily. The nature of Copter doesn't allow for a typical explore/exploit ratio because random exploration would result in unnecessary crashes once a good policy is found. This problem was solved by initializing all of the Q values at a high number (one), relative to the estimated optimal Q values. This means that, as negative rewards are obtained, the helicopter will tend to choose actions it has not yet tried because they will have higher Q values. This encourages maximum exploration in the beginning, which tapers off as an optimal policy is learned.
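In code, this optimistic initialization simply means starting every Q-value above any reward that is actually attainable (rewards here are never positive); the state count below is illustrative.

```python
NUM_STATES = 3 * 12 * 12      # rel_dist_x levels x wall_y levels x heli_y levels (illustrative)
Q_up   = [1.0] * NUM_STATES   # start every value optimistically high, so untried actions
Q_down = [1.0] * NUM_STATES   # look better than already-punished ones and get explored
```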
Initially, we found the agent sinking limitlessly into the floor as it was exploring the effects of various actions in this state. The agent was always helpless in this scenario, as both up and down actions would be equally bad in the short-term. This meant that the agent did not learn how to escape. Thus, we limited how far the helicopter could sink into the cave after crashing.
With the game finally defined, we were able to create a learning algorithm that could determine a near-optimal policy for the helicopter. We decided to proceed with Q-learning, a simple model-free algorithm that could lead to such a policy. The parameters that we used to define the learning algorithm were \( \alpha \) (the learning rate) of 0.4 and \( \gamma \) (discount factor) of 0.8. The agent was programmed to identify its current state and take the action (up or down) that had the higher Q-value for the particular state.
Later, we tried to get the helicopter to learn faster by tweaking the alpha and discount values. We originally experimented with fixed alpha values, but found that a high fixed alpha would make the helicopter forget useful past experience, while a low fixed alpha would not let the helicopter learn at all. Thus, we adopted a cooling alpha that would decrease exponentially, allowing the helicopter to learn a lot at first and then gradually base its actions solely on its previous experiences.
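Such a cooling learning rate can be written as a simple exponential decay; the initial value and decay constant below are placeholders rather than the exact numbers we used.

```python
ALPHA_0, DECAY = 0.4, 0.999   # placeholder constants

def cooled_alpha(step):
    # Exponentially decaying learning rate: learn a lot early on,
    # then rely increasingly on past experience.
    return ALPHA_0 * DECAY ** step
```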
We used the Q-learning equation
\[
Q[s, a] := (1 - \alpha)Q[s, a] + \alpha \left( r + \gamma \max_{a'} Q[s', a'] \right)
\]
and computed the value of the states in two tables: the Q-value for the up action and the Q-value for the down action. We assigned every combination of variables defining each state a number which corresponded to the quantitative representation of the state. The state’s number was used to index it in the Q-tables and when the agent was faced with a situation, it used the variable information coming from the environment to compute the state’s number and determine which action in the current state had the greater Q-value and take that action. It would then take the reward for its next state and use that to update the value of the previous state.
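Putting these pieces together, the following Python sketch mirrors the two-table scheme: the state variables are folded into a single index, the action with the larger Q-value is taken, and the previous state is updated afterwards. The exact Scratch arithmetic differs, so the index formula and variable ranges are illustrative.

```python
GAMMA = 0.8

def state_index(rel_x, wall_y, heli_y):
    # Fold the state variables into one number: rel_x in 0..2, wall_y and heli_y in 0..11.
    return (rel_x * 12 + wall_y) * 12 + heli_y

def choose(Q_up, Q_down, s):
    # Take whichever action currently has the larger Q-value for state s.
    return "up" if Q_up[s] >= Q_down[s] else "down"

def update(Q_up, Q_down, s, action, r, s_next, alpha):
    # After the action has played out, update the value of the state that was just left.
    table = Q_up if action == "up" else Q_down
    best_next = max(Q_up[s_next], Q_down[s_next])
    table[s] = (1 - alpha) * table[s] + alpha * (r + GAMMA * best_next)
```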
Copter still needed some more tuning as taking no action in this game was equivalent to falling due to “gravity.” The helicopter was always forced to push the up key several times in order to remain at the same altitude, causing it to appear very jittery and often hindering its performance.
We realized that adding a short delay after deciding on an action would cause the helicopter to commit to the action for a period of time and make the helicopter's actions more meaningful. Instead of each up action moving the helicopter by only a few pixels, taking an action would now cause the helicopter to "jump". This allows the helicopter to change states when it takes actions, ensuring that it knows whether moving up or down will result in a better reward.
In order to test our variables, we created a list of the cumulative reward at every two hundredth iteration (we used a modulus of two hundred) of the program. We had a variable that stored the cumulative reward and added that value to the list every time a counter variable was divisible by two hundred. We were then able to export and graph the list. Variables that quickened the processing time of the algorithm converged to a smaller negative cumulative reward. We used these graphs to determine the best combination of variables that most efficiently solved the Q-learning algorithm.
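The logging itself amounts to sampling the cumulative reward with a modulus of two hundred, as in this sketch (the per-iteration reward is a placeholder):

```python
rewards_log = []   # cumulative reward sampled every two-hundredth iteration
cum_reward = 0
for ct in range(100_000):
    r = 0                          # placeholder for the reward observed at this iteration
    cum_reward += r
    if ct % 200 == 0:
        rewards_log.append(cum_reward)
```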
4 Results
The agent in the Snitch game created a policy for capturing the Snitch relatively quickly. After approximately thirty failed attempts, it reached the quota (five successful attempts) that we had set for it and stopped exploring random catches. These five attempts were averaged to form an optimal catching distance for the agent, which consequently never missed another Snitch. This exceeded our expectations because averages tend to waver significantly when calculated from few elements of a set. Evidently, the spectrum of possible catching distances was quite narrow, leading to a relatively precise average distance even after a small number of successful catches.
In Copter, the cumulative reward variable and algorithm allows us to experiment with variables and determine which combinations are beneficial to the Q-learning algorithm. We first experimented with the size of the walls, seeing if their thickness had any effect on the agent’s learning. Figure 4.1 shows our results.

**Figure 4.1**: This chart shows that over time the cumulative reward for the thick wall and the thin wall are similar, but that the thick wall reaches an optimal policy more quickly. (Iterations is an arbitrary value that defines the passage of time)
For the first hundred values, the curve for the algorithm is steep because the agent does not know the consequences of its actions and is exploring all the states. However, the change in cumulative reward decreases as time passes as the agent learns which actions to take.
Surprisingly, Figure 4.1 shows that the cumulative reward for the thicker wall is smaller than that for the thinner wall. We expected the copter to hit the thick wall longer with each crash and get a larger negative reward, but this doesn’t seem to be the case. It appears, however, that due to the helicopter’s bobbing motion, it sometimes randomly hits or misses the wall while in the same state. This causes the agent to learn sporadically and decreases the accuracy of the Q values. The helicopter will not dodge the thick wall randomly, due to its depth, so it correctly learns that the particular state is undesirable.
We were also able to plot the difference in the cumulative reward between the algorithm with a delay and the algorithm that omitted the delay in Figure 4.2.

**Figure 4.2**: This graph shows the massive difference in cumulative reward for when the helicopter delays its actions slightly versus when it does not delay. It is apparent that delaying the actions of the helicopter improves its performance drastically. (Iterations is an arbitrary value that defines the passage of time)
Figure 4.2 shows that the cumulative reward values for the algorithm with no delay are drastically worse than those for the algorithm with a delay. The curve for the algorithm with delay drops sharply initially, but eventually approaches a horizontal asymptote. This shows that the negative reward per iteration is approaching zero and that the Copter agent has found a near-optimal policy. The curve for the algorithm without the delay is almost linear and the slope is steep. This means that the negative reward per iteration isn’t changing, and that the agent is not learning effectively.
5 Related Work
Before Governor’s School we did research from a variety of sources; we all watched an introduction to reinforcement learning by Satinder Singh and read chapters of *Reinforcement Learning: An Introduction* by Richard S. Sutton and Andrew G. Barto and Michael Littman’s thesis, “Algorithms for Sequential Decision Making.”
We were also presented with many different RL problems that have already been studied and experimented with. The first game that we were shown was called Taxi. It was essentially a grid with a few walls, colored blocks, a circle, and a movable colored block. All the agent knew was the functions that it could attempt and that the goal of the game was to exit out of the screen. The agent had to learn, through random actions, the task of transporting the circle from one colored block to another and how it should accomplish it.
Other smaller applications of RL include the classic game of Pitfall, a robotic dog that had to learn how to climb over rocky terrain, and an RC helicopter that learned how to fly upside down despite a person trying to turn it right side up.
Among the challenges faced by RL developers today is tackling the prisoners’ dilemma. This is a multiple agent game played by two people, where one prisoner could either “rat out” the other prisoner or help him out. It was a difficult situation because if both prisoners helped the other out they would get a reasonable amount of reward, but if one ratted the other out the tattletale would get a great reward and the prisoner who had been told on would receive a bad reward. Then again, if both prisoners try ratting the other out then each would receive a horrible reward. The game represented the problem of trying to maximize reward without knowing the other prisoner’s actions. The prisoners’ dilemma is similar to problems developers face when they attempt to have multiple agents either cooperate or work against each other. Occasionally, the agents are faced with conflicting decisions in which sometimes they choose to make choices that actually yield much less reward. Finding a solution to this problem is a challenge that RL developers are trying to tackle.
## 6 Conclusion
In essence, the successes and failures of our experiences with Scratch originated from two basic concepts in RL: proper state space and action space definitions. In the Harry Potter game, these concepts were largely overlooked due to the simplicity of the game. The state space of the game was simplified because the agent could only move along one axis and could only press the space bar. The game could have been made more complex by allowing the agent and the “Snitch” to move freely in two dimensions or by adding a second “Snitch” which moved at a different velocity. This is similar to the Copter game because of the dynamic nature of the helicopter and the more complex environment. In the Copter game, we only defined twelve relative levels of height in the environment. We could have easily defined hundreds more which would have refined the decisions of the helicopter, but that would have required sacrificing a significant amount of time for learning. With more powerful computers, we could design a “Daredevil Copter” that would fly as close as possible to each obstacle and fly away at the last instant for stylistic purposes. In terms of gaming applications, our experiences have shown that in the future, it will be possible to design a “perfect opponent” for virtually any non-NP video game. In future homes, RL can be used to program a computer that learns users’ preferences and takes actions to assist the users in their daily tasks. The innovative applications of RL can be applied in several fields through a variety of programming languages and can impact the world in ways never seen before.
## 7 Acknowledgements
We would like to thank our project mentors Carlos Diuk, Thomas Walsh, and Mike Littman and our advisor Jameslevi Schmidt for their help and guidance. We would also like to thank Donald M. Brown, the Director; Blase Ur, the Program Coordinator; and Deans Dr. Yogesh Jaluria and Dr. Thomas Farris of the Rutgers University School of Engineering for making the NJ Governor’s School of Engineering and Technology a possibility. We would also like to thank our sponsors Rutgers University, the Rutgers University School of Engineering, the Motorola Foundation, Morgan Stanley, PSEG, Silver Line Building Products, and the families of 2001-2008 program alumni for providing us with this enrichment opportunity. Last but not least, we would like to thank Curtis Giddings for allowing us the use of his computer for collecting data for this project.
References
Appendix A.
Snitch Game Code
Seeker Code
    when green flag clicked
    forever
        move 15 steps
        wait 0.05 secs
        if x position < -295
            set x to 300

    when green flag clicked
    go to x: -20 y: -18
    set grabbed to 0

    when I receive grab
    set grabbed to 0
    switch to costume reach
    wait 0.1 secs
    switch to costume grab
    wait 0.5 secs
    switch to costume grab
    wait 0.1 secs
    switch to costume fly
Stage Code
Snitch Code
    when green flag clicked
    point in direction -90°
    go to x: 97 y: -82

    when green flag clicked
    forever
        next costume
        wait 1 secs

    when green flag clicked
    forever
        move 5 steps
        wait (0.05) secs
        if x position < -240
            set x to 255
Appendix B.
Copter Game Code
Wall Code
    when green flag clicked
    set x to 320
    set size to 30
    go back 1 layers
    forever
        move 0 speed steps
        set dist_x to x position of Wall + 300
        if dist_x > 420
            set rel_dist_x to 2
        else
            if dist_x > 150
                set rel_dist_x to 1
            else
                set rel_dist_x to 0
        end if
        set Wall_y to round y position of Wall
        set dist_y to round y position of Wall
        set dist_x to round y position of Wall
        set last_state to state
        set state to 1 + rel_dist_x + Wall_y + 3 + heli_y + 0
        if touching edge
            set z to 0
            set y to pick random 160 to 170
            switch to costume costume
        else
            set rel_dist_x to 0
        end if
        replace item last_state of 0,0,0 with 0.6 item last_state of 0,0,0 + 0.4 reward + 0.6 item state of 0,0,0
Copter Code
    when green flag clicked
    reset timer
    switch to costume helicopter
    set y to 0
    set cum_reward to 0
    set move_y to 5
    set speed to 1
    set axiom to 5
    wait 0.2 secs
    forever
        set coop_y to y position
        if up = 1
            set y to y position + move_y
        else
            if y position > -1.6
                set y to y position - move_y
        if touching color red
            set reward to -10
        else
            if touching color green
                set reward to -10
                if timer < best
                    set best to timer
                broadcast crashed and wait
            else
                set reward to 0
        set cum_reward to cum_reward + reward

    when green flag clicked
    delete all of rewards
    set ct to 0
    forever
        if ct mod 200 = 0
            add cum_reward to rewards
        set ct to ct + 1
Towards Testing Model Transformation Chains
Using Precondition Construction in
Algebraic Graph Transformation
Elie Richa$^{1,2}$, Etienne Borde$^{1}$, Laurent Pautet$^{1}$
Matteo Bordin$^{2}$, and José F. Ruiz$^{2}$
$^1$ Institut Telecom; TELECOM ParisTech; LTCI - UMR 5141
46 Rue Barrault 75013 Paris, France
firstname.lastname@telecom-paristech.fr
$^2$ AdaCore, 46 Rue d’Amsterdam 75009 Paris, France
lastname@adacore.com
Abstract. Complex model-based tools such as code generators are typically designed as chains of model transformations taking as input a model of a software application and transforming it through several intermediate steps and representations. The complexity of intermediate models is such that testing is more conveniently done on the integrated chain, with test models expressed in the input language. To achieve a high test coverage, existing transformation analyses automatically generate constraints guiding the generation of test models. However, these so called test objectives are expressed on the complex intermediate models. We propose to back-propagate test objectives along the chain into constraints and test models in the input language, relying on precondition construction in the theory of Algebraic Graph Transformation. This paper focuses on a one-step back-propagation.
Keywords: testing, model transformation chains, algebraic graph transformation, weakest precondition, ATL
## 1 Introduction
Tools used in the production of critical software, such as avionics applications, must be thoroughly verified: an error in a tool may introduce an error in the critical software potentially putting equipment and human lives at risk. Testing is one of the popular methods for verifying that such tools behave as specified. When testing critical applications, a primary concern is to ensure high coverage of the software under test (i.e. ensure that all features and different behaviors of the software are tested). The recommended way to achieve this is to consider each component separately, identify its functionalities, and develop dedicated tests (unit testing). This guideline is therefore reflected in industrial software quality standards such as DO-330 [13] for tools in the avionics domain.
However, with complex model transformation tools such as code generators, applying unit testing is very costly and impractical. In fact, such tools are often designed as a chain of several model transformations taking as input a model developed by the user in a high-level language and transforming it through several steps. Unit testing then boils down to testing each step of the chain independently. In practice, intermediate models increase in detail and complexity as transformations are applied making it difficult to produce test models in the intermediate representations: manual production is both error-prone because of the complexity of the languages and tedious since intermediate languages do not typically have model editors [1]. It is often easier for the tester to create a model in the input language of the chain, with elements that he knows will exercise a particular feature down the chain. Existing approaches [7,8,11] can automate the production of tests for model transformations, thus producing unit tests. However, when a test failure uncovers an error, analyzing the complex intermediate representations is difficult for the developer.
Given these factors, we propose an approach to the testing of model transformation chains that aims to ensure test coverage of each step while preserving the convenience of using test models in the input language. First we rely on existing analyses [7,8,11] to generate a set of so-called test objectives that must be satisfied by test models to ensure sufficient coverage. Then we propose to automatically propagate these test objectives backward along the chain into constraints over the input language. The back-propagation relies on the construction of preconditions in the theory of Algebraic Graph Transformation (AGT) [5].
Within this general approach, we focus in this paper on the translation of postconditions of one ATL transformation step to preconditions, which is a key operation in the propagation of test objectives. We thus propose a first translation of the ATL semantics into the AGT semantics where we use the theoretical construction of weakest preconditions [9]. We illustrate our proposal on a realistic code generation transformation using a prototype implementation based on the Henshin$^3$ and AGG$^4$ frameworks. This first prototype allowed us to back-propagate test objectives across one transformation step.
In the remainder of the paper, section 2 gives an overview of the testing approach, explaining the role of precondition construction. Section 3 recalls the main concepts of ATL and AGT. Section 4 introduces an example of ATL transformation that will serve to illustrate (i) the translation of ATL to AGT in section 5 and (ii) the construction of preconditions in section 6. Finally, we present our prototype in section 7 and conclude with our future plans in section 8.
## 2 General Approach
As highlighted in [1], one of the major challenges in achieving thorough testing is producing test models that are relevant, i.e. likely to trigger errors in the implementation. Several approaches address this challenge for standalone transformations. In [7], [8] and [11], the authors propose to consider a transformation
\footnote{3 The Henshin project, http://www.eclipse.org/henshin}
\footnote{4 The Attributed Graph Grammar development environment, http://user.cs.tu-berlin.de/~gragra/agg}
and analyse one or more of (i) the input metamodel, (ii) the transformation specification and (iii) the transformation implementation. This analysis results in a set of constraints, each describing a class of models that are relevant for finding errors in the transformation. We refer to such constraints as test objectives in the remainder of the paper. Constraint satisfaction technologies such as the Alloy Analyzer\(^5\) and EMFtoCSP [6] are then used to produce model instances such that each test objective is satisfied by at least one test model.
Fig. 1: Transformation of Postcondition to Precondition
Let us now consider a transformation chain \( M_i \xrightarrow{T_i} M_{i+1} \) for \( 0 \leq i < N \) where an input model \( M_0 \) is processed by \( N \) successive transformation steps \( T_i \) into intermediate models \( M_i \) and ultimately into the final output model \( M_N \). Focusing on an intermediate transformation \( T_i \) such that \( i > 0 \), we can apply the above approaches to obtain a set of test objectives \( \{to_{i,j} \mid 0 \leq j \} \) ensuring the thoroughness of the testing of \( T_i \). Each test objective \( to_{i,j} \) is a constraint expressed over the input metamodel of \( T_i \). At this point we want to produce a model \( M_0 \) at the beginning of the chain, which ultimately satisfies \( to_{i,j} \) after being processed by the sequence \( T_0 ; \cdots ; T_{i-1} \). We propose to automate this operation by transforming \( to_{i,j} \) into a test objective \( to_{i-1,j} \) at the input of \( T_{i-1} \) and thus iterate the process until we obtain \( to_{0,j} \) that can serve to produce a model \( M_0 \). The key challenge of this paper is to devise an analysis that takes as input a constraint \( to_{i,j} \) and a transformation specification \( T_{i-1} \), and produces as output a constraint \( to_{i-1,j} \). Such a method exists in the formal framework of Algebraic Graph Transformation (AGT) [5] in the context of the formal proof of correctness of graph programs. It is the transformation of postconditions into preconditions [9] that we propose to adapt and reuse in our context. Since we consider transformations specified in ATL [10], a translation to AGT is necessary.
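Operationally, this back-propagation is a right-to-left iteration over the prefix of the chain. The following Python sketch (hypothetical names; it is not part of our prototype) makes the loop explicit:

```python
def back_propagate(transformations, i, to_ij, post2pre):
    """Turn a test objective to_ij on the input of T_i into a test objective on
    the input of T_0 by repeatedly applying a postcondition-to-precondition
    construction post2pre(T, constraint).

    transformations: the list [T_0, ..., T_{N-1}] of transformation steps.
    """
    constraint = to_ij
    for k in range(i - 1, -1, -1):        # T_{i-1}, T_{i-2}, ..., T_0
        constraint = post2pre(transformations[k], constraint)
    return constraint                      # to_{0,j}: usable to produce a model M_0
```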
As shown in Figure 1, we propose to translate the ATL transformation \( T_{i-1} \) into a graph transformation program (\( ATL2AGT \) arrow) and \( to_{i,j} \) into a graph constraint (OCL2GC arrow). Assuming the constraint is a postcondition of $T_{i-1}$, we automatically compute the precondition $to_{i-1,j}$ that is sufficient to satisfy the postcondition (Post2Pre arrow) using the formal foundation of AGT. Since ATL embeds OCL constraints, ATL2AGT also uses OCL2GC. However, this is a complex translation [14] that will not be addressed given the space limitations. We thus focus on a first proposal of ATL2AGT in section 5 and Post2Pre in section 6, both limited to the structural aspects of the semantics and constraints. First, we recall the main elements of ATL and AGT in the next section.
---
\(^5\) Alloy language and tool, http://alloy.mit.edu/
## 3 Semantics of ATL and AGT
### 3.1 ATL and OCL
ATL [10] is a model-to-model transformation language combining declarative and imperative approaches in a hybrid semantics. A transformation consists of a set of declarative matched rules, each specifying a source pattern and a target pattern. The source pattern is a set of objects of the input metamodel and an optional OCL\(^6\) constraint acting as a guard. The target pattern is a set of objects of the output metamodel and a set of bindings that assign values to the attributes and references of the output objects. The execution of a transformation consists of two main phases. First, the matching phase searches in the input model for objects matching the source patterns of rules (i.e. satisfying their filtering guards). For each match of a rule’s source pattern, the objects specified in the target pattern are instantiated. A tuple of source objects may only match one rule, otherwise an error is raised. For this reason the order of application of rules is irrelevant. Second, the target elements’ initialization phase executes the bindings for each triggered rule. Bindings map scalar values to target attributes, target objects (instantiated by the same rule) to target references, or source objects to target references. In the latter case, a resolve operation is automatically performed to find the rule that matched the source objects, and the first output object created by that rule (in the first phase) is used for the assignment. If no or multiple resolve candidates are found, the execution stops with an error.
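The following Python sketch gives an informal model of this two-phase semantics (all names are illustrative, attribute bindings are omitted, and it is not ATL’s actual implementation); it records a trace for each match in the first phase and uses these traces for the resolve operation in the second phase:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    guard: Callable[[dict], bool]             # source pattern with its filtering guard
    make_targets: Callable[[], List[dict]]    # instantiate the target pattern's objects
    bind: Callable[[dict, List[dict], Callable[[dict], dict]], None]  # bindings

def execute(rules: List[Rule], source_objects: List[dict]):
    # Phase 1: matching -- instantiate target objects and record traces.
    traces = []                               # (rule, source object, target objects)
    for rule in rules:
        for src in source_objects:
            if rule.guard(src):
                traces.append((rule, src, rule.make_targets()))

    # The resolve operation: a source object maps to the first output object
    # of the rule that matched it; no or multiple candidates is an error.
    def resolve(src: dict) -> dict:
        candidates = [tgts for (_, s, tgts) in traces if s is src]
        if len(candidates) != 1:
            raise RuntimeError("resolve error")
        return candidates[0][0]

    # Phase 2: initialization -- execute the bindings; rule order is irrelevant.
    for rule, src, targets in traces:
        rule.bind(src, targets, resolve)

    return [t for (_, _, tgts) in traces for t in tgts]
```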
As the current proposal is limited to structural aspects, we only consider bindings of target references and not those of attributes. OCL constraints are not considered as OCL2GC (Figure 1) is too complex to address within this paper [14]. Instead, we will use test objectives in the form of AGT graph constraints.
### 3.2 AGT and Graph Constraints
Several graph transformation approaches are proposed in the theory of Algebraic Graph Transformation [5]. We will be using the approach of Typed Attributed Graph Transformation with Inheritance, which we found suitable to our needs and which is supported in the AGG tool, allowing for concrete experimentation with our proposals (see section 7). There are three main elements to a graph transformation: a *type graph*, a set of *transformation rules*, and a *high-level program* specifying the order of execution of rules.
\(^6\) Object Constraint Language (OCL), http://www.omg.org/spec/OCL
Graphs consist of *nodes* connected with directed *edges*. Much like models conform to metamodels, typed graphs conform to a *type graph*. As introduced in [3], *metaclasses, references* and *metaclass inheritance* in metamodels correspond to *node types, edge types*, and *node type inheritance* in type graphs which allows an easy translation between the two. Even though multiplicities and containment constraints are not addressed in type graphs, they are supported in AGG.
A graph transformation is defined as a set of *productions* or *rules* executed in a graph rewriting semantics. There are two major approaches to defining rules and their execution. Even though the theory we use is based on the Double Pushout (DPO) approach, we will use the simpler Single Pushout (SPO) approach and notation which is also the one implemented in AGG. A rule consists of a *morphism* from a *Left-Hand Side* (*LHS*) graph to a *Right-Hand Side* (*RHS*) graph. The LHS specifies a pattern to be matched in the transformed graph. Elements mapped by the morphism are preserved and elements of the RHS that are not mapped by the morphism are new elements added to the transformed graph. We do not address element deletion since our translation will not need it (see section 5). Thus the execution of a rule consists in finding a match of the LHS in the transformed graph and adding the new nodes and edges.
With the transformation rules defined above, we can construct so called *high-level programs* [9] consisting of the sequencing or the iteration of rules. A program can be (1) elementary, consisting of a rule $p$, (2) the sequencing of two programs $P$ and $Q$ denoted by $(P;Q)$, or (3) the iteration of a program $P$ as long as possible, denoted by $P\downarrow$, which is equivalent to a sequencing $(P;(P;\cdots))$ until the rule no longer applies.
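High-level programs can thus be read as a small combinator language over rules. The following Python sketch assumes a program is simply a function from a graph to a transformed graph, returning None when the rule does not apply; sequencing and as-long-as-possible iteration are then:

```python
def seq(p, q):
    """Sequencing (P ; Q): run P, then run Q on P's result."""
    def run(graph):
        intermediate = p(graph)
        return None if intermediate is None else q(intermediate)
    return run

def as_long_as_possible(p):
    """Iteration P-down-arrow: apply P repeatedly until it no longer applies."""
    def run(graph):
        current = graph
        while True:
            step = p(current)
            if step is None:
                return current
            current = step
    return run
```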
*Graph constraints* are similar to OCL constraints for models. They are defined inductively as nested conditions, but for the sake of simplicity we consider a very basic form $\exists(C)$ where $C$ is a graph. A graph $G$ satisfies such a constraint if $G$ contains a subgraph isomorphic to $C$. This form is suitable to express test objectives which typically require particular patterns to exist in models.
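Checking such a constraint amounts to a subgraph search. The following Python sketch uses the networkx library (only as an illustration; our prototype relies on AGG) to check a small typed instance graph against the pattern of the test objective of Figure 3, encoding node and edge types as attributes:

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Host graph G: a tiny typed instance graph.
G = nx.DiGraph()
G.add_node("a1", type="AsgnStmt")
G.add_node("v1", type="VarExp")
G.add_node("v2", type="VarExp")
G.add_edge("a1", "v1", type="leftExp")
G.add_edge("a1", "v2", type="rightExp")

# Constraint graph C for a condition of the form "there exists C": an assignment
# whose left and right expressions are both references to variables.
C = nx.DiGraph()
C.add_node("s", type="AsgnStmt")
C.add_node("l", type="VarExp")
C.add_node("r", type="VarExp")
C.add_edge("s", "l", type="leftExp")
C.add_edge("s", "r", type="rightExp")

matcher = isomorphism.DiGraphMatcher(
    G, C,
    node_match=isomorphism.categorical_node_match("type", None),
    edge_match=isomorphism.categorical_edge_match("type", None),
)
print(matcher.subgraph_is_isomorphic())  # True iff G contains a subgraph isomorphic to C
```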
Next, we present the example that will help us illustrate our proposal.
## 4 Example: Code Generation
We aim to apply our approach to a realistic code generator from Simulink\textsuperscript{7} to Ada/C source code, under development in the collaborative research project *Project P*\textsuperscript{8}. Simulink is a synchronous data flow language widely used in industry for the design of control algorithms. The code generator consists of a chain of up to 12 model transformations (depending on configuration options), including flattening of nested structures, sequencing, code expansion and optimisation. We consider the *Code Model Generation* (CMG) transformation step of this chain to illustrate our translation to the AGT semantics. Then, considering a postcondition on the output of CMG, we construct a precondition on its input.
\textsuperscript{7} MathWorks Simulink, \url{http://www.mathworks.com/products/simulink/}
\textsuperscript{8} Project P, \url{http://www.open-do.org/projects/p/}
CMG transforms a Simulink model into a model of imperative code. A simplified version of the input metamodel is shown on the left side of Figure 2. Computation blocks such as *Sum* or *UnitDelay* receive data through their *Inport*s and send the result of their computation through their *Outport*s. *Signal*s convey data from a source *Outport* to a target *Inport*. The output metamodel of CMG shown on the right side of Figure 2 features variables (*Variable*), expressions (*Expression*), references to variables (*VarExp*) and imperative code statements. In particular, an assignment statement (*AsgnStmt*) assigns its *rightExp* expression to its *leftExp* which typically is a reference to a variable.
The ATL implementation of the CMG transformation consists of the 3 matched rules in Listing 1.1. The first rule creates a *Variable* for each *Outport* of the input model, and the second one creates a *VarExp* for each *Signal*. Note that the second rule requires resolving the *Outport* at line 5 into a *Variable* and will be used to illustrate our modeling of the resolve mechanism in AGT. The last rule creates 2 assignment statements referencing a *Variable* created by the same rule at line 13, a *VarExp* resolved at line 12, and a *Variable* resolved at line 14.
As for the test objective, we consider it directly in the graph constraint form in Figure 3. It requires that an assignment statement exists where both the source and the target of the assignment are references to variables. This pattern matches the objects created by ATL rule $UDel$, and thus requires resolve operations.
Fig. 3: Example Test Objective
## 5 ATL to Algebraic Graph Transformation
This section introduces our main contribution, the translation of ATL transformations to artifacts of an algebraic graph transformation: a type graph, graph transformation rules and a high-level program. Given the rewriting semantics of AGT and the exogenous nature of the transformations we consider, we choose to model the ATL transformation as a rewriting of the input graph that adds the output elements. Consequently, the type graph includes types corresponding to both the input and the output metamodels. As explained in Section 3.2, the correspondence of metamodel elements to graph type elements is straightforward [3], and the resulting type graph is depicted in Figure 4. In addition, tracing node types are added to support the ATL resolve mechanism. First, an abstract Trace node relates source objects ($SMElement$) to target objects ($CMEElement$) of ATL rules. Second, for each ATL rule, a concrete trace node (named $<rule\text{-}name>_{Trace}$) references the actual source and target types of this rule. These trace nodes will be used by the graph transformation rules, as explained next.
Fig. 4: Resulting Type Graph in AGG
Much like the execution semantics of ATL, the graph transformation starts with a set of instantiation rules that create output nodes without linking them. For example, $O2Var_\text{Inst}$ in Figure 5a matches an $Outport$ and creates a $Variable$ and a concrete trace $O2Var_\text{Trace}$ relating the source and target nodes (numbers indicate mapping by the rule morphism). Then, a second set of resolving rules relies on the trace nodes produced in the first phase to link output nodes. For example, $S2VExp_\text{Res}$ in Figure 5b matches an $Outport$ and a $Trace$ node to find the resulting Variable and create the variable edge. Thus the elements created in the RHS of Figure 5a (O2Var_Trace and Variable) are matched later by the LHS in Figure 5b (Trace and Variable). Note the use of abstract Trace nodes in the resolving rules to allow resolving with any rule as long as the number and types of source and target elements match, as per the ATL semantics.
Finally, a high-level program implements the two phases by iterating instantiation rules first and resolving rules second, yielding the following for CMG:
\[ P = O2Var_{Inst} \downarrow; S2VExp_{Inst} \downarrow; UDel_{Inst} \downarrow; S2VExp_{Res} \downarrow; UDel_{Res} \downarrow \]

**Fig. 5:** GTS rules translated from ATL rules
Having translated the ATL transformation to the AGT semantics, we next explain how we use precondition construction to back-propagate test objectives.
## 6 Transformation of Postcondition to Precondition
In [9], Habel, Pennemann and Rensink formally define a construction of weakest precondition for high-level programs in the interest of proving transformation correctness. Given a program and a postcondition, the weakest precondition is a constraint that characterizes all possible input graphs that lead to the termination of the program with a final graph satisfying the postcondition. A precondition construction is defined for one rule application and applied inductively to the sequence of rules defined by the program. In the case of \( P \downarrow \) programs each number of iterations of \( P \) from 0 to \( \infty \) must be considered, making the construction theoretically infinite.
However, in contrast with proof of correctness, we actually do not need to compute the weakest precondition. Since the final goal is to find a test model satisfying the test objective, computing one sufficient precondition would be enough. To do so, we limit iterations of rules in the program to a bounded number, making the precondition construction finite (the choice of bounds remains an open point at this stage). For example we can bound the CMG transformation to two applications of $O2Var$ and one application of each of the other rules:
\[
P = O2Var_{Inst}; O2Var_{Inst}; S2VExp_{Inst}; UDel_{Inst}; S2VExp_{Res}; UDel_{Res}
\]
As for the precondition construction of each rule, the theoretical construction requires to consider all possible overlaps of the RHS of the rule with the graph of the postcondition. Each overlap represents a way in which the rule may contribute to the postcondition. For each overlap, we perform an operation similar to a backwards execution of the rule\(^9\) and thus construct a sufficient precondition.
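Once the program is bounded, the construction reduces to a backwards fold over a finite rule sequence. The Python sketch below only shows the shape of this computation; pre_of_rule is a hypothetical stand-in for the per-rule overlap and pushout complement construction of [9]:

```python
def bounded_cmg_program(o2var_inst, s2vexp_inst, udel_inst, s2vexp_res, udel_res):
    """Unroll the iterated rules of CMG into a finite sequence (O2Var bounded to 2)."""
    return [o2var_inst, o2var_inst, s2vexp_inst, udel_inst, s2vexp_res, udel_res]

def precondition(program, postcondition, pre_of_rule):
    """Back-propagate a postcondition over a bounded rule sequence.
    pre_of_rule(rule, cond) must return a condition that, when satisfied before
    applying `rule`, is sufficient for `cond` to hold afterwards."""
    cond = postcondition
    for rule in reversed(program):
        cond = pre_of_rule(rule, cond)
    return cond
```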
## 7 Prototype and Results
We have prototyped our approach using the *Henshin* and *AGG* frameworks. *ATL2AGT* is implemented with the Henshin API, and an existing service is used to export the artifacts to AGG. Precondition construction is not readily available in AGG, so we have implemented *Post2Pre* using the existing services such as generating overlaps of two graphs and constructing a pushout complement. For the example test objective introduced in Figure 3, two of the preconditions we obtain are shown in Figure 6. The existence of one of these patterns in input models ensures that the *UDel* rule is able to execute and resolve the necessary elements to produce the pattern required by the test objective.
Fig. 6: Preconditions Computed for the Example Test Objective
## 8 Conclusion
In this paper we have approached the problem of testing model transformation chains with two main concerns: achieving high test coverage and using test models in the input language of the chain to ease the analysis of detected errors. To this end, we have proposed to extend existing approaches of test objective generation with a method to propagate intermediate test objectives back to the input language. Central to this method is the transformation of postconditions of one transformation step into preconditions, which was the focus of this paper. We have contributed a first translation from ATL semantics into the AGT semantics and adapted the theoretical precondition construction to achieve our goal.
\(^9\) the formal construction is a *pushout complement*
In future work, we plan to investigate the OCL2GC step of our approach and alleviate the limitation to structural aspects by handling object attributes based on works such as [4,12,14]. Moreover, we plan to work towards test-suite minimality [2] by allowing a test model to cover several test objectives across the chain and only back-propagating non-satisfied test objectives.
References
Containerisation and the PaaS Cloud
Claus Pahl
Abstract—Containerisation is widely discussed as a lightweight virtualisation solution. Apart from exhibiting benefits over traditional virtual machines in the cloud, containers are especially relevant for Platform-as-a-Service (PaaS) clouds to manage and orchestrate applications through containers as an application packaging mechanism. We discuss the requirements that arise from having to facilitate applications through distributed multi-cloud platforms.
Index Terms—Cloud Computing, Cluster, Container, Docker, Kubernetes, Multi-cloud, PaaS, Virtualisation.
1 INTRODUCTION
The cloud relies on virtualisation techniques to achieve elasticity of large-scale shared resources. Virtual machines (VMs) have been the backbone at the infrastructure layer providing virtualised operating systems. Containers are a similar, more lightweight virtualisation concept, i.e., less resource and time consuming. They have been suggested as a solution for more interoperable application packaging in the cloud.
VMs and containers are both virtualisation techniques, but solve different problems. The difference is that containers are tools for delivering software – i.e., there is a PaaS (Platform-as-a-Service) focus – in a portable way aiming at more interoperability [1] while still utilising operating systems (OS) virtualisation principles. VMs on the other hand are about hardware allocation and management (machines that can be turned on/off and be provisioned) – i.e., there is an IaaS (Infrastructure-as-a-Service) focus on hardware virtualisation. Containers as a replacement for VMs are only a specific use case where the allocation of hardware resources is done through containers by componentising workloads in-between clouds.
For portable, interoperable applications in the cloud, we need a lightweight distribution of packaged applications for deployment and management [2]. A solution is containerisation. The basic ideas of containerisation are
- a lightweight portable runtime,
- the capability to develop, test and deploy applications to a large number of servers and
- the capability to interconnect containers.
Bernstein [3] already proposes containers to address concerns at the cloud PaaS level. They also relate to the IaaS level through sharing and isolation aspects.
This article reviews the virtualisation principles behind containers, in particular in comparison with virtual machines. The relevance of the new container technology for PaaS cloud shall be specifically investigated. As applications are distributed today, the resulting requirements for application packaging and interoperable orchestration over clusters of containers are also discussed. We aim to clarify how containers can change the PaaS cloud as a virtualisation technique, specifically PaaS as a platform technology. We go beyond [3], addressing what is needed to evolve PaaS significantly further as a distributed cloud software platform resulting in a discussion of achievements and limitations of the state-of-the-art. To illustrate concepts, some sample technologies will be discussed if they exemplify technology trends well.
2 VIRTUALISATION AND THE NEED FOR CONTAINERISATION
Historically, virtualisation technologies have developed out of the need for scheduling processes as manageable container units. Processes and resources in question are the file system, memory, network and system info.
Virtual machines as the core virtualisation construct of the cloud have been improved successively by addressing scheduling, packaging and resource access (security) problems. VM instances as guests use isolated large files on their host to store their entire file system and typically run a single, large process on the host. While security concerns are largely addressed through isolation, a number of limitations remain. Each VM needs a full guest OS image in addition to the binaries and libraries necessary for the applications, i.e., a space concern that translates into RAM and disk storage requirements, and startup is slow (booting might take from one to more than 10 minutes [4]), see Fig. 1.
Packaging and application management is a requirement that PaaS clouds need to answer. In a virtualised environment, this has to be grounded in technologies that allow the sharing of the underlying platform and infrastructure in a secure, but also portable and interoperable way. Containers can match these requirements, but a more in-depth elicitation of specific concerns is needed.
A container holds packaged self-contained, ready-to-deploy parts of applications and, if necessary, middleware and business logic (in binaries and libraries) to run applications [5], see Fig. 1. An example would be a Web interface component with a Tomcat server. Successful tools like Docker are frameworks built around container engines [6] that allow containers to act as a portable way to package applications to run in containers. This means that a container covers an application tier or node in a tier, which results in the problem of managing dependencies between containers in multi-tier applications. An orchestration plan describes components, their dependencies and their lifecycle in a layered plan. A PaaS then enacts the workflows from the plan through agents (which could be a container runtime engine). PaaS can support the deployment of applications from containers.
In PaaS, there is a need to define, deploy and operate cross-platform capable cloud services [7] using lightweight virtualisation, for which containers are a solution. There is also a need to transfer cloud deployments between cloud providers, which requires lightweight virtualised clusters for container orchestration [3]. Some PaaS are lightweight virtualisation solutions in this sense.
3 CONTAINERISATION FOR LIGHTWEIGHT VIRTUALISATION AND APPLICATION PACKAGING
Recent OS advances have improved their multi-tenancy capabilities, i.e., the capability to share a resource.
3.1 Linux Containers
As an example of OS virtualisation advances, new Linux distributions provide kernel mechanisms such as namespaces and cgroups to isolate processes on a shared OS – supported through the Linux container project LXC.
- Namespace isolation allows groups of processes to be separated not allowing them to see resources in other groups. Different namespaces are used by container technologies for process isolation, network interfaces, access to interprocess communication, mount-points or for isolating kernel and version identifiers.
- cgroups (control groups) manage and limit resource access for process groups through limit enforcement, accounting and isolation, e.g., limiting the memory available to a specific container. This ensures containers are good multi-tenant citizens on a host. It provides better isolation between possibly large numbers of isolated applications on a host. Control groups allow sharing available hardware resources between containers and, if required, setting up limits and constraints.
Docker builds its solution on LXC techniques. A container-aware daemon, such as dockerd for Docker, is used to start containers as application processes and plays a key role as the root of the user space's process tree.
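For illustration, when containers are started through the Docker Engine, such cgroup limits can be requested per container. The snippet below uses the Docker SDK for Python and assumes a local Docker daemon and the docker package are installed (the image and the limits are arbitrary examples):

```python
import docker

client = docker.from_env()  # talks to the local Docker daemon

# The memory and CPU caps below are enforced by the kernel through cgroups.
container = client.containers.run(
    "python:3.12-slim",
    command=["python", "-c", "print('hello from a constrained container')"],
    mem_limit="256m",      # cgroup memory limit
    cpu_quota=50000,       # at most half a CPU per 100 ms scheduling period
    cpu_period=100000,
    detach=True,
)
container.wait()                      # block until the short-lived process exits
print(container.logs().decode())
container.remove()
```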
3.2 Docker Container Images
Based on these mechanisms, containers are OS virtualisation techniques particularly suitable for application management in the PaaS cloud. A container is represented by lightweight images – VMs are also based on images, but full, monolithic ones. Processes running in a container are almost fully isolated. Container images are the building blocks from which containers are launched.

Fig. 2. Container Image Architecture.
As it is currently the most popular container solution, Docker shall illustrate how containerisation works. A Docker image is made up of file systems layered over each other, similar to the Linux virtualisation stack, using the LXC mechanisms, see Fig. 2.
- In a traditional Linux boot, the kernel first mounts the root file system as read-only, then checks its integrity before switching the rootfs volume to read-write mode. Docker mounts the rootfs as read-only as in a traditional boot, but instead of changing the file system to read-write mode, it uses a union mount to add a writable file system on top of the read-only file system.
- There may actually be multiple read-only file systems stacked on top of each other. Using union mount, several file systems can be mounted on top of each other, which allows creating new images by building on top of base images. Each of these file system layers is a separate image loaded by the container engine for execution.
- Only the top layer is writable. This is the container itself, which can have state and is executable. It can be thought of as a directory that contains everything needed for execution. Containers can be made into stateless images (and reused in more complex builds), though.
A typical layering could include (top to bottom, see Fig. 2): a writable container image for applications, an Apache image and an Emacs image as sample platform components, a Linux image (a distribution such as Ubuntu), and the rootfs kernel image.
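As a loose analogy (not Docker's actual implementation), the union-mount behaviour can be pictured with Python's ChainMap: lookups fall through a stack of read-only layers, while writes only ever land in the topmost, writable layer:

```python
from collections import ChainMap

# Read-only base layers (bottom to top): rootfs, distribution, platform image.
rootfs = {"kernel": "rootfs"}
ubuntu = {"libc": "ubuntu"}
apache = {"httpd": "apache-image"}

# Writable container layer on top. ChainMap writes always go to the first
# mapping, while lookups fall through the stack -- loosely like a union mount.
container_layer = {}
container_fs = ChainMap(container_layer, apache, ubuntu, rootfs)

container_fs["app.conf"] = "my settings"  # written only to the writable top layer
print(container_fs["libc"])               # read falls through to the ubuntu layer
print(container_layer)                    # the base images remain untouched
```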
Containers are based on layers composed from individual images built on top of a base image that can be extended. Complete Docker images form portable application containers. They are also building blocks for application stacks. The approach is lightweight as single images can be changed and distributed easily.
3.3 Containerising Applications and Managing Containers
The container ecosystem consists of an application container engine to run images and a repository or registry operated via push and pull operations to transfer images to and from host-based engines. The repositories play a central role in providing access to possibly tens of thousands of reusable private and public container images, e.g., for platform components such as MongoDB or Node.js. The container API allows creating, defining, composing, distributing containers, running/starting images and running commands in images.

Containers for applications can be created by assembling them from individual images, possibly based on base images from the repositories, which can be seen in Fig. 2 that shows a containerised application. Containers can encapsulate a number of application components through the image layering and extension process. Different user applications and platform components can be combined in a container. Fig. 3 illustrates different scenarios using the container capability of combining images for platform and application components.
The granularity of containers, i.e., the number of applications inside, varies. Some favour the one-container-per-app approach, which still allows composing new stacks easily (e.g., changing the Web server in an application) or reuse common components (e.g., monitoring tools or a single storage service like memcached – either locally or predefined from a repository such as the Docker Hub). Apps can be built/rebuilt and managed easily. The downside is a larger number of containers with the respective interaction and management overhead compared to multi-app containers, though the container efficiency should facilitate this.
Storage and network management are two specific issues that containers as application packages for interoperable and distributed contexts must facilitate.
- There are two ways data is managed in Docker – data volumes and data volume containers. Data storage features can add data volumes to any container created from an image. A data volume is a specially designated directory within one or more containers that bypasses the union file system to provide features for persistent or shared data – volumes can be shared and reused between containers, see Fig. 4. A data volume container enables sharing persistent data between application containers through a dedicated, separate data storage container.
- Network management is based on two methods for assigning ports on a host – network port mappings and container linking. Applications can connect to a service or application running inside a Docker container via a network port. Container linking allows linking multiple containers together and sending information between them. Linked containers can transfer data about themselves via environment variables. To establish links and some relationship types, Docker relies on the names of containers. Container names have to be unique, which means that links are often limited to containers of the same host (managed by the same daemon).
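The snippet below illustrates both mechanisms with the Docker SDK for Python (the image, container name, volume name and paths are made-up examples): a network port mapping and a named data volume mounted into a single container:

```python
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",
    name="web",              # container names must be unique per daemon
    ports={"80/tcp": 8080},  # network port mapping: host port 8080 -> container port 80
    volumes={
        # A named data volume mounted into the container's file system.
        "webdata": {"bind": "/usr/share/nginx/html", "mode": "rw"},
    },
    detach=True,
)
print(container.status)
```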
3.4 Comparison
Traditional VMs and containers are compared in Table 1 in order to summarise the two technologies. Some sources are also concerned about security, suggesting to run, for instance, only one Docker instance per host to avoid isolation limitations [3].
Table 1. Comparison of VMs and containers.
<table>
<thead>
<tr>
<th>Aspect</th>
<th>VMs</th>
<th>Containers</th>
</tr>
</thead>
<tbody>
<tr>
<td>Standardisation</td>
<td>Fairly standardised system images with capabilities similar to bare-metal computers (e.g., OVF from DMTF).</td>
<td>Not well standardised, OS- and kernel-specific with varying degrees of complexity.</td>
</tr>
<tr>
<td>Host/guest architecture</td>
<td>Can run guest kernels that are different from the host, with consequent more limited insight into host storage and memory management.</td>
<td>Run host kernels at guest level only, but can do so possibly with a different package tree or distribution such that the container kernel operates almost like the host.</td>
</tr>
<tr>
<td>Boot process</td>
<td>Started through standard boot process, resulting in a number of hypervisor processes on the host.</td>
<td>Can start containerised application directly or through container-aware init daemon like systemd. These appear as normal processes on the host.</td>
</tr>
</tbody>
</table>
3.5 Different Container Models
We use Docker to illustrate some core concepts, but a range of other container technologies exist for different operating systems types (we single out Linux and Windows below) and also specific or generic solutions for PaaS platforms [8]:
- Linux: Docker, LXC Linux containers, OpenVZ, and others for variants such as BSD, HP-UX and Solaris.
- Windows: Sandboxie
- Cloud PaaS: Warden/Garden (in Cloud Foundry), LXC (in OpenShift)
There is still an ongoing evolution of OS virtualisation and containerisation, aiming at providing OS support through standard APIs and tools for container management, network management and making resource utilisation more visible and manageable.
The tool landscape is equally in evolution. As an example, Rocket is a new container runtime from the CoreOS project (CoreOS is Linux for massive server deployments), which is an alternative to the Docker runtime. It is specifically designed for composability, security, and speed. These priorities highlight the teething problems that the community is still working through.
4 CONTAINERISATION IN PAAS CLOUDS
While VMs are ultimately the medium to provision PaaS platform and application components at the infrastructure layer, containers appear as a more suitable technology for application packaging and management in PaaS clouds.
4.1 PaaS Features
PaaS generally provide mechanisms for deploying applications, designing applications for the cloud, pushing applications to their deployment environment, using services, migrating databases, mapping custom domains, IDE plugins, or a build integration tool. PaaS have features like build farms, routing layers, or schedulers that dispatch workloads to VMs. A container solution supports these problems through interoperable, lightweight and virtualised packaging. Containers for application building, deployment and management (through a runtime) provide interoperability. Containers produced outside a PaaS can be moved in – the container encapsulates the application. Existing PaaS have embraced the momentum caused by containerisation and standardised application packaging driven by Docker. Many PaaS have a container foundation for running platform tools.
4.2 PaaS Evolution
The evolution of PaaS is moving towards container-based, interoperable PaaS.
- The first generation was made up of classical fixed proprietary platforms such as Azure or Heroku.
- The second generation was built around open-source solutions such as Cloud Foundry or OpenShift that allow users to run their own PaaS (on-premise or in the cloud), already built around containers. OpenShift moves now from its own container model to the Docker container model, as does Cloud Foundry through its internal Diego solution.
- The current third generation includes platforms like Dawn, Deis, Flynn, Octohost and Tsuru, which are built on Docker from scratch and are deployable on own servers or on public IaaS clouds.
Open PaaS like Cloud Foundry and OpenShift treat containers differently, though. While Cloud Foundry supports state-less applications through containers, stateful services run in VMs. OpenShift does not distinguish these.
4.3 Service Orchestration
Development and architecture are central PaaS concerns. Recently, microservice architectures have been widely discussed. This is an approach to breaking monolithic application architectures into SOA-style independently deployable services, which are well supported by container architectures. Microservices are loosely coupled, independent services that can be rapidly called and mapped to whatever business process is required. The microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms. These services are independently deployable by fully automated deployment and orchestration frameworks. They require the ability to deploy often and independently at arbitrary schedules, instead of requiring synchronized deployments at fixed times. Containerisation provides an ideal mechanism for their deployment and orchestration, particularly if these are to be PaaS-provisioned.
5 CONTAINER ORCHESTRATION AND CLUSTERING
Containerisation facilitates the step from a single host to clusters of container hosts to run containerised applications over multiple clusters in multiple clouds [9]. The built-in interoperability makes this possible.
Fig. 4. Container-based Cluster Architecture.
5.1 Container Clusters
A container-based cluster architecture groups hosts into clusters [10]. Fig. 4 illustrates an abstract architectural scenario based on common container and cluster concepts. Container hosts are linked into a cluster configuration.
- Each cluster consists of several (host) nodes – where nodes are virtual servers or hypervisors or possibly bare-metal servers. Each host node holds several containers with common services such as scheduling, load balancing and applications.
- Each container can hold continually provided services such as their payload service, so-called jobs, which are once-off services (e.g., print), or functional (middleware service) components.
- Application services are logical groups of containers from the same image. Application services allow scaling an application across nodes.
- Volumes are used for applications that require data persistence. Containers can mount volumes. Data stored in these volumes persists, even after a container is terminated.
- Links allow two or more containers, typically on a single host, to connect and communicate.
This creates an abstraction layer for cluster-based service management that goes beyond container solutions like Docker.
A cluster management architecture has the following components:
- The deployment of distributed applications through containers is supported using a virtual scalable service node (cluster), with high internal complexity (supporting scaling, load balancing, failover) and reduced external complexity.
- An API allows operating clusters from the creation of services and container sets to other lifecycle functions.
- A platform service manager looks after the software packaging and management.
- An agent manages the container lifecycles (at each host).
- A cluster head node service is the master that receives commands from the outside and relays them to container hosts.
This allows development without regard to the network topology and requires no manual configuration [11].
A cluster architecture is composed of engines to share service discovery (e.g., through shared distributed key value stores) and orchestration/deployment (load balancing, monitoring, scaling, and also file storage, deployment, pushing, pulling).
This satisfies some of the requirements listed by Kratzke [8] for cluster architectures. A lightweight virtualised cluster architecture should provide a number of management features as part of the abstraction on top of the container hosts:
- Hosting containerised services and providing secure communication between these services,
- Auto-scalability and load balancing support,
- Distributed and scalable service discovery and orchestration,
- Transfer/migration of service deployments between clusters.
A sample cluster management platform is Mesos, an Apache project that binds distributed hardware resources into a single pool of resources. Mesos can be used by application frameworks to efficiently manage workload distribution. It is a distributed systems kernel following the same principles as the Linux kernel, but at a different level of abstraction. The Mesos kernel runs on all cluster machines and provides applications with APIs for resource management and scheduling across cloud environments. It natively supports LXC and also supports Docker.
A sample clustering management solution that is at a higher level than Mesos is the Kubernetes architecture, which is supported by Google. Kubernetes can be configured to allow orchestrating Docker containers on Mesos at scale. Kubernetes is based on processes that run on Docker hosts that bind hosts into clusters and manage containers. Minions are container hosts that run pods, i.e., sets of containers on the same host. Openshift has adopted Kubernetes. Expertise by Google incorporated in Kubernetes competes here with platform-specific evolution towards container-based orchestration. Cloud Foundry, for instance, uses Diego as a new orchestration engine for containers.
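As an illustration of pods as the unit of deployment, the following sketch uses the official Kubernetes Python client to declare a single-container pod; it assumes a reachable cluster and a local kubeconfig, and the names and image are examples only:

```python
from kubernetes import client, config

config.load_kube_config()        # use the local kubeconfig to reach the cluster
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "demo"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:alpine",
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ]
    ),
)

# Each pod receives its own cluster-wide IP address, reachable from other pods.
created = v1.create_namespaced_pod(namespace="default", body=pod)
print(created.status.phase)
```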
5.2 Network and Data Challenges
Containers in distributed systems require advanced network support. Containers provide an abstraction that makes each container a self-contained unit of computation. Traditionally, containers were exposed on the network via the shared host machine’s address. In Kubernetes, each group of containers (called pods) receives its own unique IP address, reachable from any other pod in the cluster, whether co-located on the same physical machine or not. This requires advanced routing features based on network virtualization.
Data storage is another problem in distributed container management besides the network aspect. Managing containers in Kubernetes clusters might be hampered in terms of flexibility and efficiency by the need for pods to co-locate with their data. What is needed is to pair up a container with a storage volume that, regardless of the container location in the cluster, follows it to the physical machine.
5.3 Orchestration Scenarios
Container cluster-based multi-PaaS is a solution for managing distributed software applications in the cloud, but this technology still faces challenges. These include formal descriptions or user-defined metadata for containers beyond image tagging with simple IDs, but also clusters of containers and their orchestration. The topology of distributed container architectures needs to be specified and its deployment and execution orchestrated, see Fig. 5.
While there is no accepted solution for the orchestration problems, its relevance shall briefly be illustrated using a possible solution. While Docker has started to develop its own orchestration solution and Kubernetes is another relevant project, a more comprehensive solution that would tackle orchestration of complex application stacks could involve Docker orchestration based on the topology-based service orchestration standard TOSCA, which is for instance supported by the Cloudify PaaS. Cloudify uses TOSCA (Topology and Orchestration Specification for Cloud Applications [12]) to enhance the portability of cloud applications and services, see Fig. 5. TOSCA enables:
- the interoperable description of application and infrastructure cloud services, here containers hosted on nodes,
- the relationships between parts of the service, here service compositions and links as illustrated in Fig. 4,
- the operational behaviour of these services (e.g., deploy, patch, shutdown) in an orchestration plan.
Fig. 5. Cluster Topology Orchestration [adapted from TOSCA].
This is independent of the supplier creating the service, and any particular cloud provider or hosting technology. TOSCA will also make it possible for higher-level operational behaviour to be associated with cloud infrastructure management. Using TOSCA templates for container clusters and abstract node and relationship types, an application stack template can be specified.
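As a rough illustration (not an excerpt from TOSCA or Cloudify), such a template can be thought of as a collection of typed nodes and relationships; the type and property names below are simplified stand-ins that only loosely follow TOSCA conventions.

```python
# Illustrative TOSCA-like topology: two containerised services hosted on
# cluster nodes, with a connects_to relationship between them.
topology_template = {
    "node_templates": {
        "web_container": {
            "type": "Container.Application",        # simplified type name
            "properties": {"image": "nginx:1.25", "port": 80},
            "requirements": [
                {"host": "cluster_node_1"},
                {"connects_to": "db_container"},
            ],
        },
        "db_container": {
            "type": "Container.Application",
            "properties": {"image": "postgres:16", "port": 5432},
            "requirements": [{"host": "cluster_node_2"}],
        },
        "cluster_node_1": {"type": "Compute"},
        "cluster_node_2": {"type": "Compute"},
    },
    # Operational behaviour (deploy, patch, shutdown) would be attached to the
    # nodes as lifecycle operations and sequenced by an orchestration plan.
}
```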
5.4 Observations
Some PaaS have started to address limitations in the context of programming (such as orchestration) and DevOps for clusters. The examples used above allow some observations. Firstly, containers are by now largely adopted for PaaS clouds [3]. Secondly, standardisation by adopting emerging de-facto standards like Docker or Kubernetes is also happening, though at a slower pace. Thirdly, development and operations are still at an early stage.
Cloud management platforms are still at an earlier stage than the container platforms that they build on. While clusters in general are about distribution, the question emerges to what extent this distribution reaches the edge of the cloud with small devices and embedded systems, and whether devices running small Linux distributions such as the Debian-based DSL (which requires around 50MB of storage) can support container hosting and cluster management.
In conclusion, container technology has a huge potential to substantially advance PaaS technology towards distributed heterogeneous clouds through its lightweight nature and interoperability, which has also been recognised by Bernstein and others [3]. However, significant improvements are still required to deal with data and network management aspects, as well as to provide an abstract development and architecture layer.
ACKNOWLEDGMENT
This work was supported in part by the Irish Centre for Cloud Computing and Commerce (IC4), an Irish national Technology Centre funded by Enterprise Ireland and the Irish Industrial Development Authority, and by Science Foundation Ireland grant 13/RC/2094 to Lero - the Irish Software Research Centre.
REFERENCES
BIOGRAPHY
Claus Pahl. Claus Pahl is the Lead Principal Investigator of the Irish Centre for Cloud Computing and Commerce IC4 and a Funded Investigator and an Executive Member of the Irish Software Research Centre Lero. His research interests include software engineering in service and cloud computing, specifically migration and scalability concerns. He holds a Ph.D. in computing from the University of Dortmund and an M.Sc. from the University of Technology in Braunschweig.
Crowdsourcing Database Systems: Overview and Challenges
Chengliang Chai†, Ju Fan‡, Guoliang Li†, Jiannan Wang§, Yudian Zheng¶
†Tsinghua University ‡Renmin University §Simon Fraser University ¶Twitter
chaic115@mails.tsinghua.edu.cn, fanj@ruc.edu.cn, liguoliang@tsinghua.edu.cn, jnwang@sfu.ca, yudianz@twitter.com
Abstract—Many data management and analytics tasks, such as entity resolution, cannot be solely addressed by automated processes. Crowdsourcing is an effective way to harness the human cognitive ability to process these computer-hard tasks. Thanks to public crowdsourcing platforms, e.g., Amazon Mechanical Turk and CrowdFlower, we can easily involve hundreds of thousands of ordinary workers (i.e., the crowd) to address these computer-hard tasks. However, it is rather inconvenient to interact with the crowdsourcing platforms, because the platforms require one to set parameters and even write code. Inspired by traditional DBMSs, crowdsourcing database systems have been proposed and widely studied to encapsulate the complexities of interacting with the crowd. In this tutorial, we will survey and summarize the fundamental techniques in designing crowdsourcing database systems. Firstly, traditional databases use a “closed-world” model, which processes queries based on the data inside the database only; while crowdsourcing databases use the “open-world” model, which can utilize the crowd to crowdsource data, i.e., collecting a tuple/table or filling an attribute. Secondly, crowdsourcing databases can utilize the crowd to support operations, e.g., comparing two objects, ranking multiple objects, and rating an object. These two main differences are attributed to involving the crowd to process database operations. In this paper, we review existing works on crowdsourcing database systems from the following aspects. Crowdsourcing Overview. Suppose a requester (e.g., Amazon) has a set of computer-hard tasks (e.g., entity resolution tasks that find the objects referring to the same entity). The requester first designs the tasks. Then the requester publishes her tasks on a crowdsourcing platform, e.g., AMT. Workers who are willing to perform such tasks accept the tasks, answer them and submit the answers back to the platform. The platform collects the answers and reports them to the requester. If a worker has accomplished a task, the requester who publishes the task can approve or disapprove the worker’s answers. The approved workers will get paid from the requester.
Crowdsourcing Database Design Techniques. The crowd has some different characteristics from machines. (1) Not Free. Workers need to be paid for answering a task, and it is important to control the cost. (2) Error Prone. Workers may return noisy results, and we need to tolerate the noise and improve the quality. (3) Diverse. Workers have various background knowledge, leading to different accuracies on different tasks. We should capture workers’ characteristics to achieve high quality. (4) Dynamic. Workers are not always online to answer tasks and we need to control the latency. Many techniques have recently been proposed to handle these features to redesign database operators and optimization techniques. (i) Task Design. We can design different task types to support a crowdsourced operator. For example, in crowdsourced sort, we can ask the crowd to either compare two objects or rank multiple objects. We can select different task design techniques to optimize crowdsourced operators. (ii) Truth Inference. To tolerate the noisy results, we can assign each task to multiple workers, model the workers’ quality, and infer the results by aggregating the answers. (iii) Task Assignment. We assign appropriate tasks to workers and make full use of workers’ unique talents. (iv) Answer Reasoning. We can model the tasks and deduce the answers of tasks based on those of other tasks. For example, in crowdsourced join, if we get the crowd’s answers that “US = United States” and “US = America”, we can deduce that “United States = America”.
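To illustrate the answer-reasoning deduction in the example above, the sketch below maintains entity equivalences with a union-find structure; it is a minimal illustration of the idea, not code from any of the surveyed systems.

```python
class EntityClusters:
    """Deduce join answers by transitivity using union-find."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def record_match(self, a, b):
        """Store a crowd answer saying that a and b refer to the same entity."""
        self.parent[self.find(a)] = self.find(b)

    def known_equal(self, a, b):
        """True if equality can already be deduced without asking the crowd."""
        return self.find(a) == self.find(b)

clusters = EntityClusters()
clusters.record_match("US", "United States")
clusters.record_match("US", "America")
assert clusters.known_equal("United States", "America")    # deduced, not crowdsourced
```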
1Guoliang Li is the corresponding author.
**Fig. 1: Architecture of Crowdsourcing DB Systems.** (The figure shows the crowd-powered operators CrowdSelect, CrowdJoin, CrowdSort, CrowdTopK, CrowdMax, CrowdMin, CrowdCount, CrowdCollect, and CrowdFill; the optimization objectives cost, latency, and quality; and the design techniques truth inference, task assignment, answer reasoning, task design, and latency reduction.)
(v) **Latency Reduction.** We can model workers’ behavior and design effective models to reduce the latency.
**Crowdsourcing Systems & Operators.** Using the aforementioned techniques, recent efforts have been made to develop crowdsourcing database systems, such as CDB [13], CrowdDB [10], Qurk [18], Deco [20], and CrowdOP [9]. For achieving high crowdsourcing query processing performance, the systems focus on optimizing cost (cheap), latency (fast) and quality (good). Moreover, there are also techniques that focus on designing individual crowdsourced operators, including selection [22], join [24], top-k/sort [7], aggregation [12], and collect [23], [21].
- **Tutorial Structure.** We can give either a 1.5-hour or a 3-hour tutorial, but prefer the 3-hour version. The 3-hour tutorial is split into two sections. In the first section (1.5 hours), we first give an overview of crowdsourcing (20 min), including the motivation of crowdsourcing, basic concepts (e.g., workers), crowdsourcing platforms, the crowdsourcing workflow, and crowdsourcing applications. Then we give an overview of crowdsourcing database systems (20 min, see Section II), and fundamental techniques in designing crowdsourced operators, including task design (10 min), truth inference (10 min), task assignment (10 min), answer reasoning (10 min), and latency reduction (10 min). In the second section (1.5 hours), we first discuss different crowdsourced operators (60 min), e.g., selection, join, top-k, sort, max/min, count, collect, fill. Finally, we discuss emerging challenges (15 min). We leave 15 min for Q&A to interact with the tutorial audience. If we have to give the 1.5-hour tutorial, we would remove the section about operators (60 min), reduce challenges and Q&A to 10 min in total, and remove latency reduction (10 min).
- **Tutorial Audience.** The intended audience includes all ICDE attendees from research and industry communities. We will not require any prior background knowledge; a basic understanding of databases (e.g., selection, join) will be helpful.
- **Differences from Existing Tutorials.** There are existing crowdsourcing tutorials (e.g., in KDD’18 [3], VLDB’16 [1], VLDB’15 [11], ICDE’15 [5], VLDB’12 [8], SIGMOD’17 [15]). VLDB’16 [1] investigates human factors involved in task assignment and completion. VLDB’15 [5] focuses on truth inference in quality control. ICDE’15 [11] reviews some crowdsourcing operators, crowdsourced data mining and social applications. VLDB’12 [8] introduces crowdsourcing platforms and discusses general design principles for crowdsourced data management. SIGMOD’17 [15] focuses on quality, cost, and latency control for crowdsourced data management. KDD’18 [8] focuses on different applications and operations in crowd-powered data mining. Compared with these tutorials, we focus on the fundamental techniques for building a practical crowdsourced database system. Moreover, we systematically review crowdsourcing operators and optimization techniques proposed in the last five years.
---
**Fig. 2: Comparison of Crowdsourcing DB Systems.**
### II. System Design Overview
Several crowdsourcing database systems [10], [20], [18], [9], [13] have recently been proposed to encapsulate the complexities of leveraging the crowd for query processing. In this part, we introduce an overview of the design of these systems.
- **Data model.** Existing crowdsourcing database systems are built on top of the traditional relational data model, where data is specified as a schema that consists of relations and each relation has a set of attributes. The difference is that crowdsourcing database systems employ an open-world assumption that either some attributes of a tuple or even an entire tuple can be crowdsourced based on queries from the requester.
- **Query language.** Most crowdsourcing query languages follow the standard SQL syntax and semantics, and extend SQL by adding features that support crowdsourced operations, e.g., asking the crowd to perform data processing operations.
- **Architecture.** The architecture of a typical crowdsourcing database system is illustrated in Figure 1. A SQL-like query is issued by a crowdsourcing requester and is first processed by a QUERY OPTIMIZER. Like traditional databases, the QUERY OPTIMIZER parses the query into a tree-structured query plan, and then applies optimization strategies to produce an optimized query plan. However, the key difference is that the tree nodes in a query plan are crowd-powered operators. Typically, a crowd-powered operator abstracts a specific type of operation that can be processed by the crowd. Figure 2 shows how operators are supported by the existing systems.
Crowd-powered operators are then executed by the CROWDSOURCING EXECUTOR to generate human intelligence tasks (HITs) and publish the HITs on crowdsourcing platforms (e.g., AMT). Next, after collecting answers from the crowd, the executor evaluates the query plan and returns the final result to the requester. To this end, the executor employs several crowdsourcing data processing techniques, e.g., truth inference, task assignment, answer reasoning, task design, and latency reduction. Figure 2 illustrates how the systems implement these techniques, with the details in Section III. A minimal sketch of this executor loop is shown after this list.
- **Optimization.** Query optimization is indispensable in crowdsourcing database systems, as the difference of various query plans may be several orders of magnitude. It is worth noting that crowdsourcing optimization is more challenging than that of traditional databases, because it needs to optimize multiple objectives, including quality control, cost control, and latency control. It is desirable for a system to support “multi-objective” optimization, as any single optimization may not satisfy requester’s needs. Figure 2 compares the existing systems regarding their capabilities of supporting optimization.
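The following sketch illustrates the executor loop described above under strong simplifying assumptions: publish_hits and collect_answers are hypothetical stand-ins for a real platform integration, each operator simply filters its input tuples, and majority voting stands in for truth inference.

```python
from collections import Counter

def run_crowd_plan(plan, publish_hits, collect_answers):
    """Evaluate a query plan whose nodes are crowd-powered (filtering) operators.

    plan = {"input": [...tuples...], "operators": [op, ...]} where each op has
    hypothetical callbacks make_task(tuple) -> {"id": ..., ...} and
    keep_if(tuple, verdict) -> bool. This illustrates the data flow, not the
    executor of any particular system discussed here.
    """
    tuples = plan["input"]
    for op in plan["operators"]:
        tasks = [op["make_task"](t) for t in tuples]
        publish_hits(tasks)                      # post HITs to the platform
        answers = collect_answers()              # task id -> list of worker labels
        kept = []
        for t, task in zip(tuples, tasks):
            verdict = Counter(answers[task["id"]]).most_common(1)[0][0]
            if op["keep_if"](t, verdict):
                kept.append(t)
        tuples = kept                            # qualified tuples flow onward
    return tuples
```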
III. CROWDSOURCING OPERATORS
A. **Design Techniques**
- **Truth Inference.** Crowdsourcing may yield relatively low-quality results or even noise, and Truth Inference aims to infer the correct answer (called the truth) of each task given multiple workers’ answers. Existing studies first build a worker model to estimate workers’ quality, and then infer the truth and workers’ quality based on the following intuitions: (1) a worker is of high quality if her answer is close to the truth; (2) a task’s answer is highly probable to be the truth if the answer is given by a high-quality worker [27]. A minimal sketch of this iteration is shown after this list.
- **Task Assignment.** Workers have diverse qualities on different tasks, and Task Assignment aims to wisely assign tasks to workers within a given cost budget. When a worker requests tasks, existing works estimate the gain of assigning each task to the worker (first estimating the worker’s answer to the task based on the collected answers and then computing the quality improvement), and assign the task with the highest expected improvement [27].
- **Answer Reasoning.** In many cases, the tasks generated by crowdsourced operators have inherent relationships; Answer Reasoning aims to deduce the answers of some tasks (without needing to crowdsource them) based on the answers of crowdsourced tasks. For example, suppose a crowdsourced join operator generates three tasks: (A, B), (B, C), and (A, C). If we already know that A is equal to B, and B is equal to C, then we can deduce that A is equal to C based on transitivity, thereby avoiding the cost of checking (A, C).
- **Task Design.** Task Design focuses on designing effective task types to optimize crowdsourced operators. For example, [24] proposes two task types to optimize crowdsourced join: (1) a pair-based task asks workers to identify whether two given objects refer to the same real-world entity; (2) a cluster-based task asks workers to group entities into different clusters.
- **Latency Reduction.** In many cases, requesters have latency requirements and Latency Reduction aims to reduce the latency. There are several ways to reduce latency: (1) setting each HIT with a higher price to reduce the latency in collecting answers; (2) leveraging the round/statistical model to capture the latency in workers’ answering tasks; and (3) devising strategies (e.g., dynamically maintaining a pool of fast workers) to improve the latency.
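The sketch below illustrates the truth-inference iteration referred to above: task truths are taken as quality-weighted majority votes, and a worker's quality is her agreement rate with the current truths. It is a minimal illustration of the intuition, not the algorithm of any specific system cited here.

```python
from collections import defaultdict

def infer_truth(answers, n_iters=10):
    """answers: list of (worker_id, task_id, label) tuples.

    Returns (truth, quality) where truth maps each task to its inferred label
    and quality maps each worker to an estimated accuracy.
    """
    quality = defaultdict(lambda: 0.8)          # optimistic prior on workers
    truth = {}
    for _ in range(n_iters):
        # Step 1: each task's truth is the label with the largest quality-weighted vote.
        votes = defaultdict(lambda: defaultdict(float))
        for w, t, label in answers:
            votes[t][label] += quality[w]
        truth = {t: max(labels, key=labels.get) for t, labels in votes.items()}
        # Step 2: a worker's quality is her agreement rate with the current truths.
        agree, total = defaultdict(int), defaultdict(int)
        for w, t, label in answers:
            total[w] += 1
            agree[w] += (label == truth[t])
        quality.update({w: agree[w] / total[w] for w in total})
    return truth, dict(quality)

answers = [("w1", "t1", "yes"), ("w2", "t1", "yes"), ("w3", "t1", "no"),
           ("w1", "t2", "no"), ("w3", "t2", "no")]
truth, quality = infer_truth(answers)
print(truth)   # {'t1': 'yes', 't2': 'no'}
```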
B. **Operator Design**
Existing works focus on using the above design techniques to implement crowd-powered operators to optimize cost, quality, and latency. Table I summarizes how the crowdsourced operators are implemented based on various techniques.
- **CrowdSelect.** Given a set of items, crowdsourced selection identifies items that satisfy a set of constraints, e.g., selecting images that have both mountains and humans. Existing works can be classified into three categories: (1) Crowd Filtering [19] (or All-Selection) returns all items that satisfy the given constraints; (2) Crowd Find [22] (or k-Selection) returns k items that satisfy the given constraints; (3) Crowd Search [26] (or 1-Selection) returns only one item that satisfies the given constraints. They focus on finding the answers within a cost or latency constraint.
- **CrowdJoin.** Existing crowdsourcing works mainly focus on Equi-Join. Given a table (or two tables), a crowdsourced Equi-Join is to find all record pairs in the table (or between two tables) that refer to the same entity. It is rather expensive to enumerate every pair to ask the crowd, and existing crowdsourcing works focus on designing user-friendly interfaces [24] or leveraging transitivity relations [25] to reduce the cost while keeping high quality.
- **CrowdSort/Topk.** Given a set of items which are comparable but hard for machines to compare, CrowdTopk (or Sort) aims to find the top-k items (or a ranking list) based on a certain criterion, e.g., the difficulty of understanding sentences or the clarity of images. The challenges include tolerating comparison errors and reducing the cost.
**TABLE I: Crowdsourced Operators**

| Operator | Method | Techniques |
|---|---|---|
| CrowdSelect | Filtering | |
| | Find | Truth Inference, Task Assignment, Latency Reduction |
| | Search | Truth Inference, Task Assignment, Latency Reduction |
| CrowdJoin | CrowdER | |
| | Transitivity | Truth Inference, Task Assignment, Answer Reasoning |
| CrowdSort/CrowdTopk | Heuristics | |
| | Hybrid | Truth Inference, Task Assignment, Answer Reasoning |
| CrowdMax/CrowdMin | [22] | |
| CrowdCollect | [24] | |
| CrowdFill | [21] | |
| CrowdCount | [16] | |
- **CrowdMax/CrowdMin.** Crowdsourced Max/Min aims to find the max (or min) item in a dataset, e.g., finding the most beautiful picture of the Great Wall.
- **CrowdCount.** Crowdsourced Count [16] is to count the number of items in a dataset that satisfy a given constraint, e.g., counting the number of birds in a picture. Existing works focus on designing effective task types and devising unbiased sampling estimators; a sketch of such an estimator is shown after this list.
- **CrowdCollect.** Different from the above query operators, which perform queries on a given set of known items, CrowdCollect [23] tries to collect the unknown items from the crowd, e.g., enumerating the top-100 universities in the US. It focuses on improving the coverage of the collected items.
- **CrowdFill.** CrowdFill [21] focuses on asking the crowd to fill the cells in a table. For example, given a table that shows the statistics of football players, it asks workers to fill missing cells (e.g., the position of Messi). It focuses on achieving high quality without incurring large cost and latency.
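As an illustration of the sampling idea behind CrowdCount, the sketch below estimates a count from a uniformly sampled, crowd-labelled subset; it is a simplified stand-in, not the estimator proposed in [16], and crowd_label is an assumed callback rather than a platform API.

```python
import random

def estimate_count(items, crowd_label, sample_size):
    """Estimate how many items satisfy a constraint from a crowd-labelled sample.

    crowd_label(item) stands in for asking the crowd whether the item satisfies
    the constraint (e.g., "does this picture contain a bird?"). With a uniform
    random sample, scaling the sample's positive fraction by the population size
    gives an unbiased estimate of the total count.
    """
    sample = random.sample(items, sample_size)
    positives = sum(1 for item in sample if crowd_label(item))
    return len(items) * positives / sample_size
```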
IV. CROWD SYSTEM CHALLENGES
Query Optimization. A SQL query often corresponds to multiple query plans and relies on a query optimizer to select the best plan. Existing optimizers estimate the computation cost of each query plan and choose the one with the minimum estimated cost. However, this process turns out to be quite challenging in a crowdsourcing environment because (1) there are three optimization objectives (result quality, monetary cost, and latency) that need to be considered and (2) humans are much more unpredictable than machines.
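One simple way to think about this multi-objective choice is to scalarise the objectives with requester-chosen weights, as in the sketch below; the weights, fields, and normalisation are illustrative assumptions, and real crowdsourcing optimizers are considerably more sophisticated.

```python
def pick_plan(plans, weights=(0.4, 0.3, 0.3)):
    """Choose among candidate crowdsourcing query plans.

    Each plan is a dict with estimated 'cost' (dollars), 'latency' (seconds) and
    'quality' (expected accuracy in [0, 1]). The score rewards quality and
    penalises normalised cost and latency according to the given weights.
    """
    w_cost, w_lat, w_qual = weights
    max_cost = max(p["cost"] for p in plans) or 1
    max_lat = max(p["latency"] for p in plans) or 1

    def score(p):
        return (w_qual * p["quality"]
                - w_cost * p["cost"] / max_cost
                - w_lat * p["latency"] / max_lat)

    return max(plans, key=score)

plans = [
    {"name": "enumerate_all_pairs", "cost": 90.0, "latency": 3600, "quality": 0.98},
    {"name": "transitivity_pruned", "cost": 20.0, "latency": 1200, "quality": 0.95},
]
print(pick_plan(plans)["name"])   # -> transitivity_pruned
```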
Benchmark. A large variety of TPC benchmarks standardize performance comparisons for database systems and promote the development of database research. Although there are some open datasets (http://dbgroup.cs.tsinghua.edu.cn/ligl/crowddata), there is still a lack of standardized benchmarks. In order to better explore this research topic, it is important to study how to develop evaluation methodologies and benchmarks for crowdsourcing database systems.
Crowdsourcing Indexing. Crowdsourcing indexing will be useful for many crowdsourced operators, such as join and sort. However, it is non-trivial to build such an index, because crowd-powered comparisons are error-prone and building the index also incurs cost. Therefore, it calls for techniques to reduce the building cost while preserving the index accuracy.
Acknowledgement. This work was supported by the 973 Program of China (2015CB358700), NSF of China (61632016, 61521002, 61661166012), Huawei, and TAL education.
REFERENCES
BIOGRAPHY
Chengliang Chai is a PhD student at the Department of Computer Science, Tsinghua University, China. His research interests include data integration and crowdsourcing (especially crowdsourced data integration).
Ju Fan is currently working as an associate professor at the Department of Computer Science in Renmin University, Beijing, China. His main research interests include knowledge base and crowdsourcing (especially crowdsourcing systems).
Guoliang Li is currently working as a professor at the Department of Computer Science, Tsinghua University, Beijing, China. His research interests mainly include data cleaning and integration, and crowdsourcing (especially crowdsourcing database system and optimization).
Jiannan Wang is currently working as an assistant professor at the School of Computing Science in Simon Fraser University, Canada. His main research interests include data cleaning and crowdsourcing (especially crowdsourcing optimization).
Yudian Zheng is now a Machine Learning scientist at Twitter. His research interests include crowdsourced data management (especially truth inference and task assignment).
CMMI Level 2 Within Six Months? No Way!
Global Analytic Information Technology Services, Inc. (GAITS) decided to pursue a Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) Level 2 rating within five months. The purpose of this article is to show that when an organization is already doing competent project management, the effort to benchmark that capability using CMMI is almost straightforward, and it is possible to achieve a Level 2 CMMI appraised rating within six months. To do so, there must be management support, the right CMMI project personnel, selection of the right effort(s) to be evaluated, and a CMMI appraiser who understands the company’s effort and provides positive feedback.
Even though there was no contractual requirement, the GAITS owners decided in November 2005 to initiate a project to achieve a SEI CMMI appraised Level 2 rating within five months for a GAITS program.
The first thing the owners did was designate a mature effort for CMMI evaluation, i.e., a five-year Federal Aviation Administration (FAA) Independent Verification and Validation (IV&V) program. The program was chosen due to its requirement to use an internationally accepted process, i.e., the Institute of Electrical and Electronics Engineers (IEEE) 1012, Software Verification and Validation; the fact that the program had already been active for almost two years; and the program’s outstanding ratings from the GAITS quarterly customer satisfaction surveys. As a result of IEEE 1012, there was a built-in requirement to have a project plan, i.e., our IV&V plan, which management believed would be the foundation to achieve its CMMI goal. Without performing an internal appraisal, the program had a GAITS-assumed level of maturity that would satisfy most, if not all, of the CMMI Level 2 requirements. Even though this proved to be true, they had a lot of work ahead.
The owners then designated a CMMI required sponsor from the senior managers to work with the CMMI project personnel as a channel of communications to other senior managers, and to ensure GAITS obtained the required CMMI project training, resources, and guidance. Next, they assigned a CMMI project leader who had experience as a process developer and who was an active member of the selected program. Finally, the owners assigned a CMMI project technical leader who was experienced with the FAA and who could provide the CMMI project with technical and administrative support.
With the assistance of the mentor, Electronic Data Systems Corporation (EDS), as part of the Department of Defense (DoD) Mentor Protégé Program, GAITS selected an SEI-approved appraisal company to perform the CMMI appraisal. GAITS then selected a lead appraiser.
The GAITS assumption that the IV&V program could quickly be appraised at CMMI Level 2 had to be tested. If this assumption were not true, then more time would be needed.
By attending an SEI CMMI course and by reading books, the GAITS CMMI team realized the IV&V program had many of the needed artifacts/evidence. The perceived main problems were to fill in the gaps, to verify the artifacts met the requirements, to map the artifacts to the requirements, and to accomplish all of this within five months.
Most of the gaps consisted of documenting how we already did business in terms of the CMMI Process Areas (PAs). For instance, the IV&V plan did not address the needed details for the CMMI-described Configuration Management (CM) process or the Measurement and Analysis (MA) process. In other situations, gaps were caused by the need to find the physical artifacts, e.g., meeting minutes and documents addressing more than one CMMI PA. This was accomplished over three months, after which the team was confident it had the needed information for a CMMI Level 2 rating. However, the work was just beginning.
To improve the chances for success, the IV&V Program Manager (PM) agreed to allocate time during his weekly staff meetings for the CMMI project personnel to introduce CMMI, the reason the GAITS owners were willing to spend the time and money to receive a Level 2 rating, and to train the staff on the CMMI process and what to expect from a CMMI appraisal.
Practice Implementation Indication Description (PIID)
One of the critical steps was to develop a CMMI PIID; see Table 1 (page 14) for an example. The PIID identified the CMMI Level 2 PAs (column 1) and related specific and generic goals and practices (column 2), direct and indirect artifacts (e.g., documents), direct artifact title and the indirect artifact title columns, action items (direct artifact recommendations and the indirect artifact recommendations columns), history of key CMMI project activities (direct artifact comments and the indirect artifact comments columns and the direct artifact weakness/artifact collection issue column), and who was responsible for each CMMI project activity (the last column).
In essence, a PIID is a traceability matrix between CMMI processes (the first two PIID columns) and the location of the related artifacts. The PIID was also used to track CMMI project progress.
The PIID direct artifact comments column also identifies the evidence within the identified artifact showing the specific CMMI requirement was satisfactorily met, e.g., what paragraph within a progress report addressed the communications of Project Monitoring and Control (PMC) progress to our senior managers or customer.
The PIID indirect artifact comments column is similar to the PIID direct artifact comments column but identifies the evidence within the identified artifact, showing artifacts are available to satisfy a CMMI indirect requirement.
Selected Program
The selected program involved the IV&V of an FAA critical, complex program involving aircraft flights throughout the United States. The IV&V program's staff size varies from year-to-year due to the annual FAA task order changes. Currently, there is a staff of 19 full-time personnel.
Even though the CMMI project personnel initially indicated that being CMMI Level 2 appraised within five months would be impossible, an internal evaluation of the selected program showed the program was further along toward a CMMI Level 2 rating than the CMMI project personnel initially thought. The ability to quickly develop, review, and correct the PA plans also helped, especially since the lead appraiser was one of the reviewers and provided very useful comments from a CMMI perspective that were helpful and encouraging. The purpose of the lead appraiser’s review was to identify areas not meeting the CMMI Level 2 requirements. After about three months of work, the CMMI project leader and the lead appraiser notified the sponsor that six months was needed to finish the CMMI project. The company’s owners agreed to a one-month extension.
Roles and Responsibilities
The following provides information about how the CMMI team (CMMI project personnel, sponsor, and lead appraiser) worked together on this CMMI project.
The GAITS sponsor, a required CMMI appraiser position, provided the leadership needed to keep the CMMI project focused on the objective and provided needed communications to CMMI project personnel, other senior managers, and the lead appraiser. He also scheduled training for the CMMI project personnel and assumed the role of the acting PM when the PM left the company. Based on CMMI, the sponsor made changes to how the PM reported to the senior managers.
The GAITS FAA IV&V PM ensured compliance with the program’s contract, vision, and objectives (without this coordination, the CMMI project would have failed due to conflicts between the IV&V program and the CMMI project). This included identifying appropriate program and company related artifacts, providing comments on how the CMMI FAA IV&V PA plans disagreed with the way the program operated, and providing recommended changes. He also obtained concurrence from our FAA customer and government stakeholders to utilize the FAA program for the CMMI appraisal. A key FAA IV&V PM activity was to provide CMMI training time during the program’s weekly staff meetings. To improve communications between the program personnel and the CMMI project, he appointed PA managers to review and implement the PA plans.
The CMMI project leader managed the CMMI project and developed each of the PA plans and related documents, e.g., procedures and forms. Based on our environment, this was the most efficient way to develop the plans and to ensure compatibility between the plans and the program. Based on the CMMI project leader’s experience with the PAs, process improvements, knowledge of CMMI and the program, and his past development and implementation of process plans, there was minimal rework and it was easier for the lead appraiser to deal with one person rather than a separate person for each PA plan. To improve the overall CMMI project, the CMMI project leader also created the initial Process and Product Quality Assurance (PPQA) plan, checklists, and forms. When it was time to perform PPQA audits, the CMMI project leader was excluded, per the lead appraiser, from auditing the PAs since a conflict of interest existed, i.e., the CMMI project leader might not provide objective evidence of what was found during the audit of plans the CMMI project leader developed.
The GAITS project technical leader provided backup to the CMMI project leader and kept the CMMI project leader informed of daily CMMI project activities. Whereas the CMMI project leader managed the CMMI project and developed the PA plans, the CMMI project technical leader’s main role was to ensure the plans were implemented as described and to identify non-conformances. To accomplish this role, the CMMI project technical leader was assigned to perform the PPQA audits and to find and store the required artifacts. (NOTE: Since there would be a conflict of interest for the CMMI project technical leader to audit the PPQA PA, the FAA IV&V PM appointed another person to audit the PPQA PA.) The CMMI project technical leader also documented discrepancies discovered during the PPQA audits and followed through to ensure the identified corrective actions corrected the discrepancies. Since the PAs were being implemented based on documented plans, the CMMI project technical leader worked with the PA managers prior to and during the PPQA audits to modify the initial PA plans and audit checklists to correct errors or to improve the processes. The CMMI project technical leader also maintained the PIID by working with the lead appraiser and program personnel to document the location of artifacts and to resolve issues. This was a critical task and required many hours of work, in coordination with others (e.g., the PM, PA managers, and the lead appraiser), to ensure timeliness, consistency, and completeness.

One task the lead appraiser performed was to identify items required and not required by CMMI Level 2. For example, some of the items in the process plans were for CMMI Level 5 and could not be supported by other process plans.
He also ensured the PA plans were developed for a service support program rather than a system or software development program. The difficulty here was that the CMMI model was oriented toward system or software development rather than service support programs, e.g., quality assurance, quality control, IV&V, and CM. As a result, some of the CMMI principles and artifact contents did not apply or had to be re-defined so we could implement the intent of the CMMI principles and artifacts from a service support perspective.
To provide continuity, the lead appraiser remained involved with the CMMI project from the beginning until the conclusion of the Standard CMMI Appraisal Method for Process Improvement (SCAMPI™) for final appraisal evaluation.
Issues
The FAA IV&V program was finishing its second year when the CMMI project started. As a result, an item the lead appraiser initially had an issue with was that GAITS did not have a CMMI project-planning plan. To resolve this, the CMMI project developed a CMMI IV&V project management plan (PMP) that used the existing, official deliverable (the FAA IV&V plan) and added the necessary CMMI items. To make maintenance easier (since the contract is renegotiated each year to identify annual tasks, resource needs, and funding), the existing plan was made an attachment to the CMMI IV&V PMP. As a result, the FAA IV&V CMMI PMP referenced the FAA IV&V plan as much as possible and specifically addressed items not addressed by the FAA IV&V plan. Thus, the CMMI portion of the FAA IV&V CMMI PMP should remain static throughout the contract while only modifying the official FAA IV&V plan attachment to list negotiated tasking, resourcing, and funding for the upcoming year. All of this was still compatible with the IEEE 1012 IV&V plan template.
For the CMMI project personnel, the hardest concept to understand was the difference between the following (NOTE: these are my definitions):
- **A direct artifact**: An output artifact used to show a process was performed and completed as described.
- **An indirect artifact**: An artifact supporting a process, e.g., a process input. This is used to show a process was initiated. Thus, a direct artifact of one process could be an indirect artifact for another process.
Another issue was that the FAA IV&V program's products do not require pre-delivery coordination with other groups; especially since the IV&V products are normally reports documenting IV&V evaluations of products from the FAA and their development contractor. Therefore the IV&V program does not require a Configuration/Change Control Board (CCB). Instead, from the start, the program established a peer-review process to ensure program products (excluding proprietary products, e.g., products with pricing information) satisfied contractual requirements. As a result, the stated internal review process would document the peer-review results, followed by a final PM review just prior to delivery. This system has worked well for the program and was acceptable to the lead appraiser, especially since the only customer comments occur during the annual IV&V plan update when the contract is re-negotiated and new tasks are identified. The main point is that they have a very successful review/approval process that does not use a normal development approval group (i.e., CCB). The lead appraiser had to keep reminding himself that for a service support program, this was not a violation of CMMI principles.
For those wondering about the issue of making sure the changes are lasting, CMMI requires a re-appraisal within three years of passing an appraisal. Thus, a group can lose its CMMI status if it does not continuously maintain the correct artifacts.
Lessons Learned
Before starting an official CMMI appraisal project, an organization needs to perform an honest self-evaluation (or hire an outside, honest broker). One of the key outputs is a PIID. Using the PIID format, the CMMI deficiencies can be clearly listed and addressed. In GAITS’ situation, they had most of the needed artifacts, but they were not organized to provide easy, documented, and logical access. For instance, some of the artifacts were on the hard drives of individual laptops. As a result, these artifacts were moved to a more central location. Some of the data and information was placed under restricted access since some of this data and information was proprietary (such as billable information, and they had subcontractors with access to the database). Another issue with the individual laptop storage was the inconsistency of the file names within an individual’s database folder. As part of the CMMI CM PA, the CM manager developed a CMMI-required standardized program repository and a standardized naming convention.
A major benefit of our CMMI appraisal effort was to clearly identify where information and data were to be stored. With the CM manager’s development of a repository infrastructure, finding and retrieving program information and data greatly improved. This was also a great help for the new PM to quickly come up-to-speed about the program. At the same time, our people are better able to share information and data.
**Conclusions**
With the cooperation of organizational personnel and the lead appraiser, a CMMI Level 2 rating can be accomplished in less than 18 months without compromising how an organization operates. This does not mean every attempt to be Level 2 can occur within 18 months. As described earlier, there are many things that must fall into place.
Having a program with well-established processes can only speed up the appraisal process, especially if the program processes are similar to what the CMMI is looking for. This also helps speed up the process to develop PA plans. A major effort was for the CMMI project leader to document what those processes were and to compare the results with their requirements.
Having a person who is knowledgeable with the program/organization(s) being evaluated and very experienced with writing plans, procedures, and checklists can not only minimize issues discovered by a lead appraiser, but can also ensure these documents are quickly developed or existing documentation is corrected.
Having an almost full-time person (i.e., our CMMI project technical leader) being the PIID point-of-contact, creating and maintaining the PIID, and performing the initial PPQA audits also speeds up the process. This person should work directly with the lead appraiser and others and should also provide the sponsor and lead appraiser with status reports – weekly at first, but daily as the date of the SCAMPI approaches.
Ensure that the lead appraiser will work with your organization to understand your environment and to provide help rather than just provide a list of needed corrective actions. If the lead appraiser has pre-conceived notions about how an organization must operate, the CMMI project sponsor and leader must ensure these notions are corrected or a compromise can be reached. With the cooperation of the lead appraiser, the sponsor and the CMMI project personnel can help ensure success.
Acquiring a CMMI Level 2 rating is not cheap and cannot occur haphazardly. The main costs are organization personnel (in our situation, two almost full-time people and several part-time people) and paying for the lead appraiser and CMMI training. However, GAITS estimated the results, especially when the organization follows through to maintain at least the Level 2 rating, should pay for the CMMI investment within two years. Being organized and having artifacts to show defined processes are being followed helps organizations enhance competitiveness and reduce cost. For example, portions of the PA plans can be used within proposals.
The lead appraiser informed us that based on SEI rules, since the CMMI evaluated program represented over 67 percent of the IV&V division’s work, the IV&V division was CMMI Level 2 rated. Thus, our rating was at a higher organizational level than we had planned.
As mentioned before, SEI requires that we will be re-evaluated at a later date to ensure we are maintaining at least a CMMI Level 2 rating. To help non-developmental system and software efforts, SEI has completed a CMMI supplement to address services rather than development efforts. This should greatly assist service organizations – like IV&V – that desire CMMI appraisal.
---
**About the Author**
George Jackelen works at GAITS, Inc. as a senior systems engineer on the referenced FAA IV&V program as CMMI project leader. He has been directly involved with IV&V for more than 10 years and quality assurance for more than 18 years. Jackelen spent 20 years in the United States Air Force on various information technology projects and more than 20 years with private industry doing work for federal and state agencies.
85 South Bragg ST 4th FL
Alexandria, VA 22312
Phone: (703) 866-2400
Fax: (703) 866-2423
E-mail: gjackelen@gaits.com
---
**More Online from CrossTalk**
CrossTalk is pleased to bring you this additional article with full text at <www.stsc.hill.af.mil/crosstalk/2007/02/index.html>.
**Connecting Software Industry Standards and Best Practices:**
**Lean Six Sigma and CMMI**
Gary A. Gack and Karl D. Williams
_Six Sigma Advantage, Inc._
Integration of Six Sigma and the Capability Maturity Model Integration (CMMI) is becoming fairly widespread, yet confusion remains about their relationship. Part One of this article includes several case studies that answer some of the more common questions. Part Two describes the relationship of Lean Six Sigma and Six Sigma’s approach to improvement of existing products and processes (Define, Measure, Analyze, Improve, Control [DMAIC]), and Part Three examines the relationship between Design for Lean Six Sigma (used to develop new products and processes or major enhancements) and the CMMI Engineering Process Areas.
Software professionals, especially those working in the Department of Defense environment, face a somewhat bewildering array of relevant standards and best practices. As awareness and penetration of Lean Six Sigma in this environment have increased significantly over the last several years, we find many organizations struggling to understand and leverage the relationships between Lean Six Sigma and several other approaches to software process improvement, including CMMI.
A Case for Enrichment in Data Management Systems
Dhrubajyoti Ghosh¹, Peeyush Gupta¹, Sharad Mehrotra¹, and Shantanu Sharma²
¹University of California, Irvine, USA. ²New Jersey Institute of Technology, USA.
ABSTRACT
We describe EnrichDB, a new DBMS technology designed for emerging domains (e.g., sensor-driven smart spaces and social media analytics) that require incoming data to be enriched using expensive functions prior to its usage. To support online processing, today, such enrichment is performed outside of DBMSs, as a static data processing workflow prior to its ingestion into a DBMS. Such a strategy could result in a significant delay from the time when data arrives to when it is enriched and ingested into the DBMS, especially when the enrichment complexity is high. Also, enriching at ingestion could result in wasted resources if applications do not use/require all data to be enriched. EnrichDB's design represents a significant departure from the above: we explore seamless integration of data enrichment all through the data processing pipeline, at ingestion, triggered by events in the background, and progressively during query processing. The cornerstone of EnrichDB is a powerful *enrichment data and query model* that encapsulates enrichment as an operator inside a DBMS, enabling it to co-optimize enrichment with query processing. This paper describes this data model and provides a summary of the system implementation.
1. INTRODUCTION
This paper envisions a new type of data management technology that seamlessly integrates *data enrichment* in the data analysis pipeline. The data analysis pipeline refers to the process of acquiring data from data sources, potentially enhancing the data, ingesting it into a database system, and running queries on the enhanced data. Today, organizations have access to potentially limitless data sources in the form of web data repositories, social media posts, and continuously generated sensory data. Such data is often low-level/raw and needs to be enriched to be useful for analysis. Functions used to enrich data (referred to as *enrichment functions* in the paper) could consist of (a combination of) custom-compiled code, declarative queries, and/or expensive machine learning techniques. Examples of enrichment functions include sensor interpretation and fusion over sensory inputs, mechanisms for sentiment analysis over social media posts, and named entity extraction in text.
Traditionally, data enrichment is performed offline as part of a periodic Extract-Transform-Load (ETL) process. This process is performed inside a separate system and the enriched data is stored in a data warehouse for analysis. This approach adds significant latency between the time data arrives (or is created) and when it is available for analysis.
[14] has highlighted the limitations of the traditional data warehouse approach in analyzing recent data (as it arrives) for online business applications. It has led to the emergence of Hybrid Transaction/Analytical Processing (HTAP) systems that support both transactional and analytical workloads. A warehouse strategy (of periodic enrichment as part of ETL) exhibits similar limitations in application contexts where enrichment is part of the data processing pipeline. One possibility to overcome this limitation is enriching the data as it arrives. Systems often used for scalable ingestion (e.g., Spark Streaming [20]) are capable of executing enrichment functions on newly arriving data prior to its storage in a DBMS. Recently, [17] has explored ways to optimize enrichment during ingestion by batching such operations.
Enriching data at arrival is only feasible when enrichment functions are simple. Complex functions (e.g., multi-layer perceptrons and random forests), often used to classify/interpret incoming data, may take several hundred milliseconds to execute on a single core of a modern server.¹ Applying such functions at ingestion would allow a system to ingest only tens of events per second per core, which is very low.
An alternate strategy is to restrict the ETL process to selectively enrich only a part of the data (based on expected usage) at ingestion. However, predicting usage is difficult, especially in an online setting where an analyst can pose ad-hoc queries. If the prediction underestimates the need for enrichment, certain queries cannot be supported; if it overestimates, it leads to wasted enrichment and resources.
**Motivating Example.** A quintessential example domain for which \textsc{EnrichDB} is designed is a sensor-driven smart space environment. Such an environment is often instrumented with a large number of sensors producing data, which is stored in databases. Such data consists of videos, images, data from motion sensors, as well as connectivity data of users' mobile devices with WiFi access points. Such data needs to be processed before it can be used by applications. E.g., [12] uses connectivity data of users' mobile devices with WiFi access points to localize users inside a building. Furthermore, one can use surveillance camera images to localize users more accurately. Localization based on WiFi connectivity data or images can be expensive, e.g., analyzing a single WiFi
-----
\textsuperscript{1}E.g., a server with a 64-core Intel Xeon E5-4640 CPU, 2.40GHz, and 128GB of memory.
connectivity event takes \(\approx 200\text{ms}\), and analyzing a single image takes \(\approx 1s\). If we consider a campus environment with hundreds of WiFi access points and cameras (where \(\approx 1,000\) WiFi events/sec and \(\approx 100\) images/sec are produced by the sensors), we will need \(\approx 5\) minutes of processing time on a single core to locate people using the data generated in one second, and such a processing time is not feasible.
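To make the arithmetic behind the five-minute figure explicit, here is a small back-of-the-envelope sketch in Python using the per-event costs and arrival rates quoted above (the variable names are ours, introduced only for illustration):

```python
# Back-of-the-envelope check of the enrichment backlog described above.
wifi_events_per_sec = 1000    # WiFi connectivity events produced per second
image_events_per_sec = 100    # camera images produced per second
wifi_cost_sec = 0.2           # ~200 ms to analyze one WiFi event
image_cost_sec = 1.0          # ~1 s to analyze one image

# CPU-seconds of enrichment work generated by one second of sensor data
work_per_sec = (wifi_events_per_sec * wifi_cost_sec
                + image_events_per_sec * image_cost_sec)

print(work_per_sec)        # 300.0 CPU-seconds of work per second of data
print(work_per_sec / 60)   # 5.0 -> roughly 5 minutes on a single core
```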
Instead, we need to process such data during query execution in an adaptive manner. Queries on such data can be ad-hoc: for example, a visitor planning to attend an event at a location may wish to know which attendees have already arrived (or their count) a priori, to avoid crowded regions. Another example is exploring suspicious activities, which may involve creating a timeline of events at different parts of a building using WiFi connectivity data and then performing detailed analysis using camera images. To answer such ad-hoc queries, a system that enriches the required data only at query time can still incur high latency, depending on the query selectivity.
Motivated by the above limitations, we design ENRICHDB — an adaptive data management technology that allows enrichment to be performed all through the data processing pipeline, i.e., during ingestion, triggered based on events, or during query processing. ENRICHDB is designed based on the following criteria:
**Semantic Abstraction and Transparency of Enrichment.** ENRICHDB supports a declarative interface to specify and to link enrichment functions with higher-level observations that the functions generate from raw data. Users may associate one or more such functions that differ in terms of quality (e.g., uncertainty in the enriched value) and cost (e.g., execution time of the function).
In ENRICHDB, developers do not have to deal with raw data directly — applications can be fully developed based on higher-level semantic observation. Furthermore, developers do not have to be concerned about what data has to be enriched, using which functions, and at what stage of data processing. ENRICHDB maintains the state of enrichment of objects and performs enrichment automatically based on the current state of objects.
**Optimization of Enrichment.** ENRICHDB allows enrichment all through the data processing pipeline and ensures that enrichment of objects is performed optimally. During query-time enrichment, ENRICHDB exploits the query optimizer to prune away enrichment of objects that do not influence the query results. Furthermore, ENRICHDB performs enrichment close to where the data resides, resulting in low data movement.
**Progressive Computation.** When ENRICHDB executes enrichment functions during query processing, it produces query answers progressively. A progressive query answering technique (motivated by Approximate Query Processing systems [10] that provided progressive query answering for aggregation queries) produces an initial set of answers that are improved over time as data is further enriched.
The cornerstone of ENRICHDB is Enrichment Data and Query Model (EDQM) that integrates enrichment as a first-class operator in the database system. This paper describes both data and query models in §2 and briefly describes the implementation of ENRICHDB in §3. The codebase and detailed discussion on design decisions are presented in [2].
### 2. DATA AND QUERY MODEL
In this section, we develop a new data and query model, called Enrichment Data and Query Model (EDQM).
### 2.1 Data Model
In EDQM, the data is modeled using relations, where a relation can have two types of attributes: (i) **derived** attributes that require enrichment and (ii) **fixed** attributes that do not require enrichment. Each derived attribute is optionally associated with a domain size. If the domain size is not specified, then that attribute is considered to have a value from a continuous range. The command for specifying a relation in ENRICHDB is shown below.
\[ \text{CREATE TABLE wifi(id int, user_id char(30), time timestamp, wifi_ap char(30), location int derived:304);} \]
The value of a derived attribute is determined using one or more **enrichment functions** associated with it.
**Enrichment functions.** EDQM supports a general class of enrichment functions (frequently used in real world). The input to an enrichment function is a tuple and the output is either a single value, multiple values, or a probability distribution, as described below.
We categorize enrichment functions based on the output cardinality: (i) **single-valued**: outputting a single value, e.g., a binary classifier [16], (ii) **multi-valued**: outputting a set of values, e.g., top-k classifiers [11], (iii) **probabilistic**: outputting a probability distribution over the possible values of a label, e.g., probabilistic classifiers [6]. Also, enrichment functions can be categorized based on the size of the output domain: (i) **categorical**: predicts outputs from a finite set of possible values, e.g., a sentiment of positive/negative, and (ii) **continuous**: outputs a real number, e.g., a temperature reading of 72.8°F.
An enrichment function is associated with two parameters: (i) **cost**: the average execution time/tuple, and (ii) **quality**: a metric of goodness (i.e., accuracy) of enrichment function in determining the correct value of the derived attribute.
**Training of enrichment functions.** EDQM supports training procedures for enrichment functions that internally use machine learning models to predict the values of derived attributes. Often such models use a supervised learning method [5] that learns a mapping function between a set of input and output pairs from a ground-truth data set (often referred to as training data). A user needs to specify the table that stores the training data for the model. (Note that derived attributes cannot be updated directly by the user.) Below, we show an example where a machine learning model, a Multi-Layer Perceptron (MLP), is learned using the training procedure model_train. The training data is stored in the wifi_train table and the name of the model is location_mlp. It uses the attribute values of feature as input to the model and outputs a prediction for the location attribute. The model-specific parameters are passed as a string in model_params.
\[ \text{SELECT } \text{db.model_train}('wifi_train', 'location_mlp', 'mlp', 'location', 'feature', model_params); \]
The cost and quality of enrichment functions can either be specified by the user or determined automatically using standard methods, e.g., a train/test split or k-fold cross-validation during the training phase.
In real scenarios, multiple enrichment functions are often used to perform a particular analysis. To localize a person, one can use multiple ML functions, e.g., decision tree, random forest, and multi-layer perceptron models. ENRICHDB supports specification of such functions using a function-family. Formally, the set of enrichment functions for a derived attribute \( A_i \) is called the function-family of \( A_i \). (We use calligraphic font for derived attributes.) Outputs of enrichment functions in a function-family are combined using a combiner function; one can use weighted-average, majority-voting, or stacking-based [19] combiner functions. Below, we show the creation of a function-family for the location attribute consisting of multiple functions, along with their cost (seconds/tuple) and quality (measured in AUC), using the assign_enrichment_functions command.
\[ \text{SELECT } \text{db.assign_enrichment_functions}('wifi', [['location',3,'location_dt',0.8,0.7], ['location',4,'location_for',0.6,0.8], ['location',1,'location_mlp',0.95, 0.9]]); \]
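To illustrate how the outputs of a function-family might be combined, the sketch below implements a weighted-average combiner over probabilistic outputs in Python. The function outputs, weights, and helper name are hypothetical and only meant to show the mechanics; they are not EnrichDB's actual combiner implementation.

```python
# Sketch: combining probabilistic outputs of a function-family for `location`
# with a weighted average, where weights reflect each function's quality.

def weighted_average_combiner(outputs, weights):
    """outputs: list of probability distributions (dicts value -> prob);
    weights: list of per-function weights (e.g., their quality scores)."""
    total = sum(weights)
    combined = {}
    for dist, w in zip(outputs, weights):
        for value, prob in dist.items():
            combined[value] = combined.get(value, 0.0) + w * prob / total
    return combined

# Hypothetical outputs of three location functions (decision tree, random
# forest, MLP) for a single tuple.
dt_out  = {"L1": 0.6, "L2": 0.3, "L3": 0.1}
rf_out  = {"L1": 0.5, "L2": 0.4, "L3": 0.1}
mlp_out = {"L1": 0.8, "L2": 0.1, "L3": 0.1}

print(weighted_average_combiner([dt_out, rf_out, mlp_out], [0.7, 0.8, 0.9]))
# -> combined distribution over L1/L2/L3, weighted toward higher-quality functions
```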
### Table 1: The wifi table (location is derived).

| id | user_id | time | wifi_ap | location |
|----|---------|-------|---------|----------|
| t1 | 24 | 09:14 | 56 | L1 |
| t2 | 22 | 10:26 | 110 | NULL |
| t3 | 108 | 14:10 | 116 | L4 |
### Table 2: State output for derived attributes.

| tid | location |
|-----|----------|
| t1 | L1:0.54, L2:0.35, L3:0.11 |
| t2 | L1:0.1, L2:0.1, ..., L10:0.1 |
| t3 | L4:0.8, L5:0.15, L6:0.05 |
**State of a Derived Attribute.** Enrichment state, or state, of a derived attribute \( A_i \) in tuple \( t_k \) (denoted by \( \text{state}(t_k, A_i) \)) is the information about the enrichment functions that have been executed on \( t_k \) to derive \( A_i \). The state has two components: a state-bitmap that stores the list of enrichment functions already executed on \( t_k.A_i \), and a state-output that stores the outputs of the executed enrichment functions on \( t_k.A_i \).
E.g., consider that there are three enrichment functions \( f_1, f_2, f_3 \), of which \( f_1 \) and \( f_3 \) have been executed on \( t_k.A_i \). Also, assume that the domain of \( A_i \) contains three possible values: \( d_1, d_2, \) and \( d_3 \). Thus, the state-bitmap for \( t_k.A_i \) is \( \{101\} \), i.e., only the first and third functions have been executed, and the state-output of \( t_k.A_i \) contains \( \{0.7, 0.3, 0\} \) and \( \{0.8, 0, 0.1\} \), i.e., the outputs of the first and third enrichment functions (the remaining array is left empty). The state-output stores a list of probability distributions when the enrichment functions are probabilistic. For single/multi-valued functions and continuous functions, the state-output attribute stores the actual output of the function instead of a probability distribution, e.g., \( \{19\} \).
**State of Tuples and Relations.** The notion of the state of derived attributes is generalized to the state of tuples and relations in a straightforward way. The state of a tuple \( t_k \) is the concatenation of the states of all derived attributes of \( t_k \); e.g., the state of a tuple \( t_k \) of a relation \( R \) with three derived attributes \( A_p, A_q, \) and \( A_r \) is denoted by \( \text{state}(t_k) = \text{state}(t_k, A_p) \,||\, \text{state}(t_k, A_q) \,||\, \text{state}(t_k, A_r) \).
**Relative Ordering of Enrichment Functions.** In EDQM, the user can specify (or ENRICHDB can learn, using a training dataset) the relative order in which enrichment functions need to be executed. This order is specified as a function of the state of tuples for each derived attribute. Such relative ordering is important for ensembling the different enrichment functions to be executed on a tuple. This ordering is stored in a table called DecisionTable (see Table 3).
This table, for each derived attribute of a relation, stores a map that — given the current state of a tuple with respect to the attribute — specifies the next function that should be executed to further enrich the attribute, as well as (optionally) the expected improvement in quality (denoted as benefit) that will result from enriching the attribute of the tuple. ENRICHDB uses benefit and the cost of enrichment functions to order the enrichment of tuples.
In Table 3, each row stores a map containing (state bitmap, entropy range) as keys and the corresponding (next best function, benefit) pair as values. Consider the tuple \( t_1 \) of the wifi table (see Table 1) and assume that the location state bitmap of \( t_1 \) is \([1,0,0]\) and the location state output of \( t_1 \) is \([0.54, 0.35, 0.11], [0,0,0], [0,0,0]\). The entropy of \( t_1 \) is \((-0.54 \log_3 0.54 - 0.35 \log_3 0.35 - 0.11 \log_3 0.11) \approx 0.85\). From the first row of Table 3, since the entropy of \( t_1 \) is in the range \((0.75, 1)\), the decision table specifies that the next best function to execute is \( f_2 \), with a benefit of 0.22.
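The following Python sketch mimics this lookup for tuple \( t_1 \): it computes the entropy of the current location distribution (base 3, matching the domain size) and consults a small hand-written map from (state bitmap, entropy range) to (next function, benefit). The table contents mirror the example above; everything else (names, the "stop" entry) is illustrative.

```python
import math

def entropy(dist, base):
    """Shannon entropy of a probability distribution, in the given base."""
    return -sum(p * math.log(p, base) for p in dist if p > 0)

# Current enrichment state of t1.location: only f1 has been executed.
state_bitmap = (1, 0, 0)
state_output = [0.54, 0.35, 0.11]          # f1's probability distribution
h = entropy(state_output, base=3)          # ~0.85 (domain size is 3)

# A toy decision table: (bitmap, entropy range) -> (next function, benefit).
decision_table = {
    ((1, 0, 0), (0.75, 1.00)): ("f2", 0.22),
    ((1, 0, 0), (0.00, 0.75)): (None, 0.0),   # already confident enough, stop
}

for (bitmap, (lo, hi)), action in decision_table.items():
    if bitmap == state_bitmap and lo < h <= hi:
        print(h, action)     # ~0.85, ('f2', 0.22)
```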
### 2.2 Query Model
This section describes the query language (§2.2.1), query semantics (§2.2.2), and the goal of enrichment (§2.2.3).
#### 2.2.1 Query Language
The query language of ENRICHDB is an extended version of SQL. Queries in ENRICHDB are associated with a query semantics (required to deal with the probabilistic values of derived attributes) and an optional quality parameter specifying the required quality of the query results.
Two types of query semantics for probabilistic data have been proposed in the past: (i) determination-based semantics [7] and (ii) possible world (PW) semantics [15]. The determination-based semantics converts the probabilistic representation to a single or a small set of deterministic worlds. The query is executed in each of these worlds and a single deterministic answer is produced. In contrast, in PW semantics, all possible worlds are generated (implicitly/explicitly) from the probabilistic representation and the query is executed in each world. The result consists of all possible tuples along with their probability of being part of the result in at least one world. The choice of one semantics over the other depends on the application scenario. In some scenarios, applications can make good decisions by using the most probable answers, whereas in others, they require analysis of all possible answers along with their probability distribution. For simplicity, we have implemented the determination-based query semantics in ENRICHDB (the implementation of PW semantics is under development).
An example query in ENRICHDB that requires a minimum quality of 0.9 is shown below:
```
SELECT wifi.location as p_location, wifi.time as p_time FROM wifi
WHERE p_location = 'L1'
AND p_time BETWEEN ('10:00', '12:00')
AND QUALITY 0.9;
```
#### 2.2.2 Query Semantics
In determination-based query semantics, tuples of all participating relations in a query are determined first before evaluating the query. The process of converting a probabilistic data representation, i.e., the output of probabilistic enrichment functions, to a deterministic representation is referred to as the **determination process**.
Consider a derived attribute \( A_i \) and a tuple \( t_k \). The value of tuple \( t_k \) in attribute \( A_i \) (i.e., \( t_k.A_i \)) is determined using a **determination function** (**DET**) based on the tuple's state. \( DET(state(t_k, A_i)) \) returns a single value or multiple values for \( t_k.A_i \), or a NULL value, representing a situation where the state of the attribute does not provide enough evidence to assign any value to \( t_k.A_i \). The determination concept naturally extends to a tuple and a relation. The determined representation of a relation \( R \) is denoted by:
\[
DET(R) = DET(state(t, A_j)),\ \forall t \in R,\ \forall A_j \text{ of } R.
\]
**Simple Predicates.**
Consider an expression \( A_i \ op\ a_m \), where \( A_i \) is a derived attribute, \( op \) is an operator, and \( a_m \) is a possible value of \( A_i \). The operator \( op \) is one of \( (=, \neq, >, \geq, <, \leq) \). If the output of \( DET(state(t_k, A_i)) \) is NULL, then the expression evaluates to \( U \). If \( DET(state(t_k, A_i)) \) is a singleton set \( S \) and the element \( x \in S \) satisfies \( x\ op\ a_m \), then the expression evaluates to \( T \); otherwise, \( F \). If \( DET(state(t_k, A_i)) \) is a multi-valued set \( S \) and \( \exists x \in S \) such that \( x\ op\ a_m \) holds, then it is possible that \( t_k \) satisfies the expression, and hence it evaluates to \( P \). However, if no \( x \in S \) satisfies \( x\ op\ a_m \), then the expression evaluates to \( F \).
Consider an expression \( A_i \ op \ A_j \), where \( A_i \) and \( A_j \) are two derived attributes of (possibly different) relations and \( op \) is a comparison operator. If \( DET(state(t_k, A_i)) \) or \( DET(state(t_l, A_j)) \) is NULL, then the condition evaluates to \( U \). If both \( DET(state(t_k, A_i)) \) and \( DET(state(t_l, A_j)) \) are singleton sets and for elements \( x \in DET(state(t_k, A_i)) \) and \( y \in DET(state(t_l, A_j)) \), \( x \ op \ y \) holds, then the condition evaluates to \( T \); otherwise, \( F \). In case one or both of \( DET(state(t_k, A_i)) \) and \( DET(state(t_l, A_j)) \) are multi-valued sets and \( \exists x \in DET(state(t_k, A_i)) \) and \( \exists y \in DET(state(t_l, A_j)) \), such that \( x \ op \ y \) holds, then the condition evaluates to \( P \); otherwise, \( F \).
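For concreteness, the Python sketch below shows one way a determination function and the four-valued evaluation of a simple predicate \( A_i\ op\ a_m \) could be realized. The probability-threshold rule inside `det` is an assumption made for illustration; it is not EnrichDB's actual determination function.

```python
# Four-valued evaluation of a simple predicate (A_i op a_m), as a sketch.

def det(distribution, threshold=0.5):
    """Return the set of determined values, or None (NULL) if no value is
    supported strongly enough by the current enrichment state."""
    if distribution is None:
        return None
    values = {v for v, p in distribution.items() if p >= threshold}
    return values or None

def eval_simple_predicate(distribution, op, constant):
    """Evaluate A_i op constant to 'T', 'F', 'P' (possible) or 'U' (unknown)."""
    s = det(distribution)
    if s is None:
        return "U"
    matches = {v for v in s if op(v, constant)}
    if len(s) == 1:
        return "T" if matches else "F"
    return "P" if matches else "F"

eq = lambda x, y: x == y
print(eval_simple_predicate({"L1": 0.8, "L2": 0.2}, eq, "L1"))   # 'T'
print(eval_simple_predicate(None, eq, "L1"))                     # 'U'
```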
**Complex Predicates.**
Complex predicates are formed using multiple comparison conditions connected by Boolean operators (AND \( \land \), OR \( \lor \), and NOT \( \neg \)). Table 4 shows the truth table for such logical operators.
to $P$. When both expressions evaluate to either $T$, $F$, or $U$, we follow the same evaluation logic as in standard SQL.
**Aggregation.** Aggregation functions on fixed attributes are evaluated as in SQL, while on a derived attribute they return a range of values $[l, u]$, denoting the lower and upper bounds of the aggregated value. An aggregation function (e.g., $\text{count}$, $\text{sum}$, $\text{min}$) applied to all $T$ tuples of a set produces the lower bound $l$, while applied to all $T$ and $P$ tuples it produces the upper bound $u$. E.g., consider a query on Table 1 that counts the occupancy of location $L_1$, and assume that the table has 250 tuples of which 100 tuples evaluate to $T$, while 20 of the remaining 150 tuples evaluate to $P$. Hence, the condition evaluation logic returns a range of $[100, 120]$. Likewise, group-by aggregation results in such a range per group.
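The count-range logic of this example can be written down directly; the short Python sketch below computes the $[l, u]$ bounds from per-tuple evaluation outcomes (the outcome list is constructed to match the numbers above):

```python
# Lower/upper bounds for COUNT over four-valued predicate outcomes.
# 100 tuples evaluate to 'T', 20 to 'P', the remaining 130 to 'F' or 'U'.
outcomes = ["T"] * 100 + ["P"] * 20 + ["F"] * 130

lower = sum(1 for o in outcomes if o == "T")            # only certain matches
upper = sum(1 for o in outcomes if o in ("T", "P"))     # certain + possible

print([lower, upper])   # [100, 120]
```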
**Top-k Aggregation.** ENRICHDB first evaluates aggregation functions for each group-by key (as described above), and then ranks their outputs by a ranking function. The query result consists of a set of group-by keys with the top-k ranks. The purpose of the ranking function is to return a minimal answer set $A$, such that the real top-k groups are guaranteed to be part of $A$. ENRICHDB sorts the group-by keys based on the lower bounds in a descending order and selects the first $n$ (where $n \geq k$) group-by keys as the minimal answer set $A$ such that the upper bound of $(n+1)$-th key is lower than the lower bound of the $n$-th key. This ensures that the $(n+1)$-th group-by key cannot be part of the top-k answer set.
Consider a query that returns the top-2 locations with the highest occupancy from Table 1. Suppose that, after applying $\text{count}(\cdot)$, the locations had the following bounds for occupancy: $L_1: [100, 150], L_2: [110, 120], L_3: [100, 115]$, and $L_4: [80, 95]$. The result returned is the set of locations $\{L_1, L_2, L_3\}$, which guarantees that the actual top-2 locations (i.e., $L_1, L_2$) are part of the result. $L_4$ is excluded as its upper bound on occupancy (i.e., 95) is lower than the lower bounds of the locations in the answer.
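The construction of the minimal answer set can be sketched as follows in Python, reusing the occupancy bounds from the example; for k = 2 it returns the three locations above. The function name and representation are ours, for illustration only.

```python
# Sketch of the minimal answer set A for a top-k query over [lower, upper]
# aggregation bounds, following the rule described above.

def minimal_topk_answer(bounds, k):
    """bounds: dict key -> (lower, upper). Returns the keys that may still be
    in the real top-k, sorted by lower bound (descending)."""
    keys = sorted(bounds, key=lambda g: bounds[g][0], reverse=True)
    n = k
    # grow the answer set until the next key's upper bound cannot beat
    # the lower bound of the last key kept
    while n < len(keys) and bounds[keys[n]][1] >= bounds[keys[n - 1]][0]:
        n += 1
    return keys[:n]

occupancy = {"L1": (100, 150), "L2": (110, 120),
             "L3": (100, 115), "L4": (80, 95)}
print(minimal_topk_answer(occupancy, 2))   # ['L2', 'L1', 'L3']
```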
Based on the definition of the determination function and the predicate evaluation logic described above, we define the query semantics as follows: $q(R_1, R_2, \ldots, R_n) = q'(\text{DET}(R_1), \text{DET}(R_2), \ldots, \text{DET}(R_n))$, where $q(R_1, R_2, \ldots, R_n)$ is a query on relations $R_1, \ldots, R_n$ and $\text{DET}(R_i)$ is the determined representation of the $i^{th}$ relation. Query $q$ is rewritten as $q'$ to be executed on the determined representations of the relations using the four-valued logic described above.
#### 2.2.3 Quality Measure of Query Results
In ENRICHDB, we measure the quality of answers to (i) set based queries using Jaccard’s similarity or expected $F_\alpha$-measure, (ii) aggregation queries using the root-mean-square error, mean absolute error, or the half-interval length of query answer, and (iii) group-by and top-k queries using the summation of half-interval lengths of all group by keys.
**Progressive Score.** Since ENRICHDB allows users to stop query evaluation at any instant of time (even before the quality requirement is met), the enrichments that most improve answer quality need to be performed as early as possible. ENRICHDB's effectiveness is measured using the following progressive score (similar to [13, 4]):
$$\mathcal{PS}(\text{Ans}(q, E)) = \sum_{i=1}^{E} W(e_i) \cdot (Q(\text{Ans}(q, e_i)) - Q(\text{Ans}(q, e_{i-1})))$$
The query execution time is discretized into sub-intervals, called epochs $\{e_1, e_2, \ldots, e_E\}$, where $W(e_i) \in [0, 1]$ is the weight allotted to epoch $e_i$ with $W(e_i) \geq W(e_{i+1})$, $Q$ is the quality of answers, and $Q(\text{Ans}(q, e_i)) - Q(\text{Ans}(q, e_{i-1}))$ is the improvement in the quality of answers that occurred in epoch $e_i$. The quality $Q$ is measured according to the type and semantics of the query as discussed above. Given a query, a quality metric, and a set of weights assigned to each epoch, ENRICHDB's goal is to achieve the maximum progressive score for the query if query execution is stopped early.
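A direct transcription of the progressive score into Python is shown below; the epoch weights and per-epoch quality values are invented solely to demonstrate the computation.

```python
# Progressive score: sum over epochs of weight * quality improvement.
# Weights are non-increasing so that early improvements count more.

def progressive_score(qualities, weights):
    """qualities[i]: answer quality at the end of epoch i (qualities[0] is
    the quality before the first epoch); weights[i]: weight of epoch i+1."""
    return sum(w * (qualities[i + 1] - qualities[i])
               for i, w in enumerate(weights))

qualities = [0.0, 0.6, 0.8, 0.9, 0.95]   # quality after epochs e1..e4
weights   = [1.0, 0.8, 0.6, 0.4]         # W(e1) >= W(e2) >= ...

print(progressive_score(qualities, weights))   # 0.6 + 0.16 + 0.06 + 0.02 = 0.84
```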
### 3. ENRICHDB IMPLEMENTATION
There are two possible ways of implementing the above data model as shown in Figure 1: (i) a loosely coupled (LC) approach, wherein an enrichment module is implemented separately from the DBMS, and (ii) a tightly coupled (TC) approach, wherein an enrichment module is tightly integrated with the query processing module of the DBMS. ENRICHDB follows TC approach on top of PostgreSQL as it uses the query context to eliminate redundant enrichment.
Consider a query with two selection conditions on derived attributes $\mathcal{A}_1$ and $\mathcal{A}_2$, connected using $\text{AND}$. The LC approach will enrich the tuples for both $\mathcal{A}_1$ and $\mathcal{A}_2$. In contrast, in TC, after enriching $\mathcal{A}_1$ of a tuple, if the tuple does not satisfy the condition on $\mathcal{A}_1$, then attribute $\mathcal{A}_2$ is not enriched. Such a pruning strategy can be very effective when queries are complex and selective. Furthermore, the TC approach executes the enrichment functions closer to the data, in the database engine.
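The benefit of the tightly coupled approach can be illustrated with a short Python sketch: for a conjunctive query, the second derived attribute of a tuple is enriched only if the predicate on the first attribute did not already rule the tuple out. The two enrichment functions are cheap placeholders standing in for expensive classifiers.

```python
# Sketch of tightly coupled (TC) evaluation of "pred1(A1) AND pred2(A2)":
# A2 is enriched only for tuples that survive the predicate on A1.

def enrich_a1(tuple_):  # placeholder for a cheap classifier for attribute A1
    return tuple_["raw"] % 2

def enrich_a2(tuple_):  # placeholder for an expensive classifier for attribute A2
    return tuple_["raw"] % 3

def tc_evaluate(tuples, pred1, pred2):
    enrichments_saved, result = 0, []
    for t in tuples:
        if not pred1(enrich_a1(t)):
            enrichments_saved += 1        # A2 is never enriched for this tuple
            continue
        if pred2(enrich_a2(t)):
            result.append(t)
    return result, enrichments_saved

tuples = [{"raw": i} for i in range(10)]
res, saved = tc_evaluate(tuples, lambda v: v == 1, lambda v: v == 0)
print(len(res), saved)   # 2 matching tuples; 5 enrichments of A2 avoided
```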
An ENRICHDB query is wrapped in a stored procedure that internally executes appropriate SQL queries on top of PostgreSQL tables during multiple epochs. The query results are maintained using Incremental Materialized Views (IMV) [3] to reduce the overhead of executing queries multiple times. Enrichment functions are implemented as user-defined functions (UDFs), and their execution is orchestrated by a special UDF that takes the enrichment functions as arguments and executes them. For implementation details, please see [2].
4. USE CASE OF ENRICHDB
This section describes how ENRICHDB can be used to develop the application described in §1, which finds the locations of attendees who have already arrived for an event. It requires fine-grained localization of people inside a building using WiFi connectivity data and multiple predictive models with different cost and quality [12]. The application poses queries to find the attendees at a location between two time points.

**Ease of Application Development.** To develop this application, the steps to take in ENRICHDB are presented below. The ENRICHDB-based implementation is much simpler (∼26 lines of code) compared to a loosely coupled implementation, where enrichment is performed outside of the DBMS and requires many more lines of code (∼130 lines [2]).
```sql
-- Creating a new table
CREATE TABLE wifi(id int, user_id char(30),
time timestamp, wifi_ap char(30),
location int derived:304)
-- Training ML Models
SELECT db.model_train('wifi_train',
'location_dt', 'decision_tree',
'location', 'feature', model_params);
-- Associating functions with 'location'
SELECT db.assign_enrichment_functions(
'wifi', [['location','3', 'loc_dt', 0.8, 0.7],
['location','4', 'loc_for', 0.9, 0.8]]);
-- Setting up decision table
SELECT db.learn_decision_table('wifi',
'location', 'WifiValidation');
-- Adding data
SELECT db.enriched_insert('INSERT INTO wifi
VALUES (1,1051,"10:02",12, NULL)');
-- Executing Queries
CALL db.exec_driver('SELECT location, time
FROM wifi WHERE id<100 AND location = "L1"
AND time BETWEEN ("10:00","12:00")', 20,5);
```
**Performance Evaluation.** Figure 2 shows the quality of results achieved by ENRICHDB with respect to time for the query described above (Line 20). The results are produced at the end of each epoch, where the epoch duration is set to 5 seconds. The quality is measured using the normalized $F_1$ measure, i.e., $F_1 / F_1^{\max}$, where $F_1 = \frac{2 \times P \times R}{P + R}$ and $F_1^{\max}$ is the maximum $F_1$ measure achieved during query execution.
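For concreteness, the following Python sketch computes per-epoch normalized $F_1$ under the definition above; the precision/recall values per epoch are made up for illustration.

```python
# Normalized F1 per epoch: F1 of the current answer divided by the best
# F1 reached during the whole query execution. P/R values are invented.

def f1(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

epoch_pr = [(0.9, 0.4), (0.9, 0.7), (0.95, 0.85), (0.95, 0.9)]
f1s = [f1(p, r) for p, r in epoch_pr]
f1_max = max(f1s)

print([round(v / f1_max, 2) for v in f1s])   # [0.6, 0.85, 0.97, 1.0]
```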
Figure 2 highlights that ENRICHDB provides high-quality query results within the first few epochs of a query execution as compared to the strategy of eager enrichment that enriches the tuples completely and then executes the query.
5. RELATED SYSTEMS
ENRICHDB can be viewed as a system similar to Extract-Load-Transform (ELT) based systems [1], where the data is extracted and loaded into a data warehouse/lake and enrichment is performed at analysis time. In contrast, ENRICHDB provides a powerful data model to developers that makes application programming very easy. Query-driven approaches to data cleaning have been studied extensively [18, 8]. However, such works were restricted to data cleaning algorithms for duplicate detection, duplicate elimination, and entity resolution, whereas ENRICHDB supports a general class of enrichment functions such as classification, clustering, and regression functions. Systems for supporting ML using databases (e.g., Apache MADlib [9], RIOT [21]) are designed to learn ML models inside or on top of database systems; however, such systems do not support the semantic abstraction of specifying enrichment functions and linking them to the higher-level observations they generate, as supported by ENRICHDB.
6. CONCLUSION
In this paper, we proposed ENRICHDB — a new system for supporting data enrichment inside a single data management system. The cornerstone of ENRICHDB is a powerful enrichment data model that encapsulates enrichment as an operator inside a DBMS enabling it to co-optimize enrichment with query processing. Furthermore, ENRICHDB provides semantic abstraction, transparency of enrichment, and progressive computation of queries to make application programming very simple for the developers.
Acknowledgements. This material was partially funded by the research sponsored by DARPA under agreement number FA8750-16-2-0021 and NSF Grants No. 1527536, 1545071, 2032525, and 2008993.
7. REFERENCES
[9] J. M. Hellerstein et al. The MADlib analytics library or MAD skills, the SQL. In VLDB, 2012.
DYNAMIC LARGE-SCALE GESTURAL CONTROL IN SUPERCOLLIDER SERVER
Joshua Parmenter
Center for Digital Arts and Experimental Media
University of Washington, Seattle
ABSTRACT
This paper discusses a number of classes in the JoshLib extension library for the SuperCollider real-time synthesis language. ProcMod gives the user the ability to control groups of events with global controls and gestural shaping, as well as real-time performance flexibility. ProcEvents and ProcSink give the composer and performer control over instances and the overall structure of ProcMod events.
1. INTRODUCTION
The SuperCollider classes described here have been developed from my composition of a number of pieces for instrument and live electronics, as well as my recent research for the composers Richard Karpen and Juan Pampin at the Center for Digital Arts and Experimental Media at the University of Washington in Seattle. In the late 1990s, Karpen began a series of pieces for soloist and real-time processing with the SuperCollider 2 programming language for Mac OS 9. In the first of these, Sotto-Sopra for amplified violin and real-time electronics, a system was created to allow the computer part of the piece to be controlled during a performance through input from a computer operator. One of the great advantages of the structure developed by Karpen was the ability to start processes dynamically and free the resources of the process once they were no longer needed (rather than the more common approach in many live-electronics pieces, where all processes are started before a piece begins and are then controlled through live or automated mixing). The structure of Sotto-Sopra required a pre-set ordering of the signal processing functions to be composed and then triggered in real-time to coincide with events in the violin part. This required a very precise performance by both violinist and computer operator, while opening up many opportunities in the composition of the electronic part, since Karpen was able to use the computer's full power for any part of the piece, rather than having to base his processing decisions on the limitations of the total processing power needed for continuous computation of all signal processing techniques. Karpen's Solo-Tutti for viola and live electronics, Juan Pampin's Oid for piano, live electronics and video, and my own Music for Bassoon and Live Electronics and Organon Sustenuto for flute, cello, bassoon, double bass and live electronics challenged and refined this structure even further to allow for more flexible performance situations, the triggering of events in response to the performer's input in addition to the computer operator, as well as the ability to control processes involving multiple instruments.

With the appearance of Mac OS X and the development of SuperCollider Server, I have been able to begin work on a library of functions that encapsulates many aspects of the control structures that have been developed in this style of interactive piece. The library also greatly expands on the earlier possibilities to allow for control of the computer part solely by the performer, visual feedback on the state of a performance, as well as the ability to connect a single SuperCollider language client (sclang) to multiple sound synthesis engines (scsynth) across multiple processors on a single machine or to multiple computers sharing a high speed network. While the language itself provides a number of tools to control small parts of this compositional approach (envelope generators, control buses, synthesis grouping), until now it has been up to the programmer to find ways to synchronize these tools. For example, functions that could coordinate the release of a global envelope with the halting of a loop controlling the creation of a stream of notes had to be created on a piece-by-piece basis. With ProcMod, there is now a tool available in SuperCollider that allows for easy control and syncing of a number of these parameters, providing a structure through which the control of complex algorithmic processes can be treated as a single unit.
ProcEvents and ProcSink provide a structure that allows the combination, dynamic creation and control of the gestures created with ProcMod.
2. PROCMOD
ProcMod (short for Process Module) is the most basic object in the system for the management of large scale events (example 1). The creation of an instance of the ProcMod class automatically communicates with the language to set up a number of basic controls. Each ProcMod will create its own group in the scsynth node tree, as well as register a control bus for sending values from a global amplitude envelope that can be utilized by each event a ProcMod generates. Amplitude can be changed dynamically, and these changes can be smoothed with ProcMod's lag parameter.
Each instance of ProcMod can have a function to be evaluated when the ProcMod is played, as well as additional functions that can be executed when the ProcMod is released and finished. An OSCresponder (for receiving OpenSound Control messages from scsynth or other OSC-enabled programs) can also be registered to an instance of ProcMod; it is added at play time and removed when the ProcMod is released. Once a function is started, ProcMod also records its start time according to the global system time, and makes the time elapsed since the start of a ProcMod available for use by the functions both within
**ProcMod**
**Class Methods**
*new(env, amp, id, group, addAction, target, function, releaseFunc, onReleaseFunc, responder, ...)*
env - an overall amplitude envelope that synths created in a ProcMod function can access. The envelope is written to the ProcMod's envbus and can be read by other synths in the same process through the creation of a procmodenv synth. There is a max of 20 breakpoints to the env. If the Env has a releaseNode, ProcMod will continue to process events until release is called.
amp - an overall amplitude control for an instance of ProcMod.
id - a symbol or "string" to be used later to identify an instance of ProcMod.
group - a group for an instance of ProcMod to run in. Defaults to nil and creates a new group.
addAction - an addAction for this instance of ProcMod. Defaults to 0.
target - a target for this instance of ProcMod. Defaults to the ProcMod.
function - a Function, Task or Routine to be evaluated on the playing of this instance of ProcMod.
releaseFunc - a Function, Task or Routine to be evaluated after the ProcMod has finished its release.
onReleaseFunc - an instance of Function, Task or Routine to be evaluated at release time.
responder - an instance of OSCResponder or OSCResponderNode for use by this instance of ProcMod. It is automatically added when the ProcMod starts, and released after the ProcMod finishes its release.
timeScale - applies a time scale to the ProcMod envelope. Defaults to 1. lag - applies a lag to amplitude values passed into this instance of ProcMod.
clock - an instance of Clock to run this instance of ProcMod. Defaults to SystemClock.
server - an instance of Server to run this ProcMod on. Useful for remote servers. Defaults to Server.default.
**Instance methods**
play - evaluates this instance of ProcMod. ProcMod.function is evaluated, and ProcMod.responder is set up if they are declared.
release - same as play.
release - releases an instance of ProcMod. If ProcMod.env has a releaseNode, functions and OSC responders will wait until the release has executed before the ProcMod's functionality is freed.
kill - immediately free the ProcMod, regardless of ProcMod.env.
envbus - returns the control bus id that the global envelope is written to for this instance of ProcMod.
env - (anEnv) - an instance of Env to be sent to the synthdef controlling this instance of ProcMod's overall amplitude and event control. If a number is passed in, it represents a release time for the ProcMod.
function - (func) - an instance of Function, Task or Routine to be evaluated when the ProcMod is played.
releaseFunc(func) - an instance of Function, Task or Routine to be evaluated after a ProcMod has released.
onReleaseFunc(func) - an instance of Function, Task or Routine to be evaluated at release time.
responder - (OSCResponder) - an instance of OSCResponder or OSCResponderNode for use by this instance of ProcMod.
amp - (val) - If there is an envelope controlling the overall amplitude of events, set the amplitude to val.
lag - (val) - If there is an envelope controlling the overall amplitude of events, set the lag time for changes of amplitude (made with the amp instance method) to take effect.
data - (Association) - places the Association into a Dictionary for later access.
**Example 1: The ProcMod Class Description**
a ProcMod as well as in the SuperCollider language itself. If the global envelope is of a fixed duration, the ProcMod, when started, will execute that envelope over the specified duration and then free all of the resources associated with that instance of ProcMod. If the envelope has a releaseNode, ProcMod will continue to execute its functions until the release message is sent to the object. The object also contains a generic data slot that can hold information that may need to be accessed outside of the individual ProcMod instance by any other part of the SuperCollider client (similar to creating and referencing a structure in C).
The global envelope control and function handling are the two strongest features of the ProcMod class. When the ProcMod is created, it checks the env slot for a number of possibilities. If an envelope of indefinite duration is used, the release duration of the envelope is calculated and will be used to free resources after the envelope is released. If a fixed duration envelope is passed in as the argument, the information about its duration is used to halt the creation of new events in the ProcMod function, as well as freeing the resources that are being used in the group that is created by the ProcMod. The control bus where envelope data is written to can be passed into events created by the ProcMod by querying ProcMod's envbus instance variable, or through an argument to the ProcMod's function. If nothing is passed into the envelope argument, then the process will run indefinitely, and on release all processes will be immediately shut-off. If a number is passed in, it is used as a duration in seconds that indicates the amount of time after a release message is received by the ProcMod before all processes will cease. In both of these instances, no envelope or control bus is created.
The function argument accepts a number of different kinds of objects. Functions can be used most effectively for events that simply need to be started when the ProcMod is played. Tasks and Routines (or a Function that returns a Task or Routine) may be used for events where future scheduling is necessary. Infinite loops and other events where it is impossible to foresee the number of events that will need to be created are best handled in this way. The Task or Routine will create its own processing thread, and once started, the process will continue to execute until the instance of ProcMod is released and the thread is freed. The group, control bus and the instance of server that are used by ProcMod are passed in as arguments to the Function when the ProcMod executes.
Since the functionality of ProcMod is all contained within its own thread and synthesis within its own node group, it is easy to create and run multiple ProcMods at the same time, each with its own gestural shaping. The entire ProcMod process can also be placed into a specific place in scsynth's order of execution, to allow sound to be routed from one ProcMod into another. Effects routing and processing is easily handled, and multiple processes can be routed into a single effects process. Finally, ProcMod contains a simple GUI interface (example 2) that allows the starting and stopping of the ProcMod's function, as well as amplitude control.
**Example 2: The ProcMod Graphic User Interface (GUI)**
The server slot allows a single sclang process to control multiple scsynth server processes. On multiprocessor systems, separate server instances can be run on different processor cores. In addition, the communication between the SuperCollider language and multiple servers can be broadcast over a TCP or UDP network, so it is possible for a single sclang process to control an almost limitless number of synthesis servers. At this point, the limitations are placed on the calculations necessary to run the single client process, rather than the limitations of the DSP engine.
3. PROCEVENTS
The ProcEvents class can be used to organize a predetermined pattern of ProcMod or function instances. Its main functionality comes through the ability to initiate ordered events and releases. The predominant usage of ProcEvents would be through its gui method, which displays current event information, gives the computer operator control over overall amplitude, and allows all computation to be released or immediately halted. The
GUI also allows the operator to skip to a specific event if needed. The events array that is passed into an instance of ProcEvents is also very flexible. Each event in the event array is itself an array of two optional objects, the first a ProcMod or function to execute when the event index is referenced, the second a ProcMod’s ‘id’ that should be released. Both of these slots may be yet another array of multiple ProcMods to start or release, encouraging the user to build ProcMods as modularly as possible, then creating complexity through varying combinations of simpler processes.
In addition to simply stepping through events, there are slots in the class to store special instances of ProcMod that need to be executed with the first event or when the program is killed. These 'init' and 'kill' ProcMods are especially useful for starting synthesis processes that are global (e.g. Ambisonic decoding or dynamics processors) or for allocating and freeing memory.
As with ProcMod, multiple instances of ProcEvents may be run, and again, multiple synthesis servers may be used to distribute heavy processor loads. ProcEvents also contains a number of timers that the processes under the control of an instance of ProcEvents can call upon. The starttime method of ProcEvents will return the time stamp of the initial event. If the 'id' of a ProcMod contained in a ProcEvents's event array is passed in to the starttime function, ProcEvents will return the starttime of that ProcMod instance in relation to the initial ProcEvents starttime. The now method of ProcEvents will return the current time in relation to the starttime of the instance of ProcEvents, while passing this function an 'id' will return how long a specific ProcMod has been running. Finally, it is possible to attach an audio trigger to an instance of ProcEvents to tell the class to move on to the next event. This allows the triggering of events to be completely controlled by the performer. In performances where score following is of great importance, this allows the performer, who is following the score as closely as anyone can, to step through the necessary processes. The GUI also has the option to display a large number display in these instances to provide feedback to the performer (example 3).
Including code for an entire piece is not possible in the space provided, but example 4 shows a short sample of code. initproc reads live input, limits it, and sends it out to a virtual bus for use by any other process. pmod is a function that creates an instance of ProcMod that is programmed to create a granular gesture from filtered versions of the limited input. The pmod function allows each event to have its own global envelope, controls over the window size and number of overlapping grains, as well as lower and upper bounds for a random number generator that will be polled with each new grain to control the center frequency of each window's filter.
```
(
var pevents, pmod, initproc, routebus;

// filters, envelopes and pans the input
SynthDef(\filtgrain, {arg procenv, inbus, dur, filtfreq, pan;
	var filt, grainenv;
	grainenv = EnvGen.ar(Env([0, 1, 0], [0.5, 0.5], \welch),
		timeScale: dur, doneAction: 2) * In.kr(procenv);
	filt = BPF.ar(In.ar(inbus), filtfreq, 0.01, grainenv);
	OffsetOut.ar(0, Pan2.ar(filt, pan))}).load(s);

// limits the live input and writes it to a routing bus
SynthDef(\limit, {arg procenv, outbus, limit = 1;
	var src = AudioIn.ar(1) * In.kr(procenv);
	Out.ar(outbus, Limiter.ar(src, limit))}).load(s);

// an audio bus to route sound from limit to all other synths
routebus = s.audioBusAllocator.alloc(1);

// a function to create new ProcMods with varying parameters
pmod = {arg globenv, id, amp = 1, grainsize, overlaps,
		hifreq = 1760, lowfreq = 480;
	ProcMod(globenv, amp, id)
		.function_({arg group, envbus, server;
			Task({
				var waittime;
				waittime = grainsize / overlaps;
				loop({
					server.sendMsg(\s_new, \filtgrain,
						server.nextNodeID, 0, group,
						\procenv, envbus, \dur, grainsize,
						\filtfreq, hifreq.rrand(lowfreq),
						\pan, 1.0.rand2, \inbus, routebus);
					waittime.wait;
				})})})};

initproc = ProcMod(Env([0, 1, 0], [1, 1], \sin, 1),
		addAction: 0, target: 0)
	.function_({arg group, envbus, server;
		server.sendMsg(\s_new, \limit, server.nextNodeID, 0,
			group, \procenv, envbus, \outbus, routebus);
		"ProcEvents started".postln});

/* Event Numbers */
pevents = ProcEvents([
	/*0*/ [pmod.value(Env([0, 1, 0], [1, 1], \sin, 1), \ev1,
			20.dbamp, 0.02, 1, 1760, 880),
		nil],
	/*1*/ [pmod.value(Env([0, 1, 0], [1, 1], \lin, 1), \ev2,
			6.dbamp, 0.02, 1, 1760, 880),
		nil],
	/*2*/ [pmod.value(Env([0, 1, 0], [1, 1], \lin, 1), \ev3,
			6.dbamp, 0.02, 1, 1760, 880),
		nil],
	/*3*/ [nil, \ev3],
	/*4*/ [nil, initproc]
	], initproc, nil, "Sample Piece");

pevents.perfGUI;
)
```
Example 4: A short piece using the ProcMod and ProcEvents structure
The ProcEvents structure has proven to be very flexible and robust. Pieces for soloist with electronics are easily managed with the structure, and pieces for ensemble with electronics work well too. Each ProcEvents instance stores all of the ProcMods it is responsible for in the instance variable 'eventDict', allowing other processes or instances of ProcEvents to access the event structure. As a result, data between processes can be shared through ProcMod's 'data' structure, and events in one performer's ProcEvents instance can release events in another. Finally, all of a ProcEvents instance's events are set up when the code is initially interpreted, avoiding computational overhead during a performance.
4. PROCSINK AND FUTURE PLANS
ProcSink is a recent effort to expand the performance possibilities of the ProcMod object. Where ProcEvents works well for pre-designed electronic parts, there is little flexibility for situations where a non-linear approach to an electronic part is wanted. ProcSink is designed to have ProcMods added to its structure on the fly, and allows each instance of ProcMod to have its own on/off toggle and amplitude control. Once again, a flexible GUI is available and will automatically update itself as new processes are created (example 5). In addition to ProcSink's ability to expand its performance possibilities in real-time, the structure also lends itself very well to bringing the creation and altering of an electronic part's processes into a rehearsal situation. The state of ProcSink can be saved at any time to a file, and that state can then be recreated later without having to reload any code. While the ProcSink class can be ideal for improvisational or rehearsal work, I also see it as a tool that can be developed to eventually produce fixed event structures. One goal I have for ProcSink is the ability to record performance information so that it can be recreated later, perhaps through a function that captures a ProcSink performance and saves it as a ProcEvents type of structure. Both ProcEvents and ProcSink would also be enhanced if some typical features of digital audio workstation automation controls (as well as the visual manipulation of envelopes) could be created. It would also be ideal if there were a way for ProcSink to generate multiple instances of a single ProcMod that could then be played simultaneously with any number of layers.

Example 5: An algorithmically created instance of ProcSink with GUI
```
// the function 'a' returns a new ProcMod. Takes the upper
// and lower bounds of possible frequencies as arguments
a = {arg high, low;
	var proc;
	// create a new ProcMod
	proc = ProcMod.new(Env([0, 1, 0], [1, 1], \sin, 1), -12.dbamp);
	proc.function_({arg group, envbus, server;
		Task({
			var minsize, overlaps;
			minsize = 0.1;    // grain duration in seconds (assumed value)
			overlaps = 8;
			loop({
				s.sendMsg(\s_new, \singrain, server.nextNodeID, 0, group,
					\freq, high.rrand(low), \amp, 1, \dur, minsize,
					\envbus, envbus);
				(minsize / overlaps).wait})})});
	proc; // the new ProcMod is returned from the function
};

// create new instances of ProcMod... store them to the
// variables 'b', 'c' and 'd'
b = a.value(5000, 10000);
c = a.value(880, 440);
d = a.value(8000, 12000);
// play all three
[b, c, d].do({arg proc; proc.play});
b.release; // stop them one at a time
c.release;
d.release;
```
Example 6: Functional creation of ProcMod instances from a basic prototype function.
This kind of prototyping functionality still needs to be implemented in ProcMod. Currently, SuperCollider's Function object can be used to algorithmically generate multiple instances of a basic ProcMod, where the instance that is created takes the arguments to the function into account. A simple example of this can be seen in example 6, where the basic ProcMod functionality is the creation of a random granular cloud texture with note frequencies randomly chosen between upper and lower bounds. By evaluating the Function 'a', a new instance of ProcMod is created that reflects the arguments to the function. This functionality should be set up either as a method of the ProcMod object itself, or as an additional class whose sole function is the generation of ProcMod instances with flexible parameter control. If this can be achieved, I see no reason why the GUI for ProcSink shouldn't be controlled through the prototyping object, with control arguments also appearing in the interface for the user. Finally, it would be ideal if there were methods for all of these classes that would allow these structures to be easily saved as stand-alone applications.
5. CONCLUSION
The separation of the synthesis server and language client in SuperCollider 3 provided the overall environment with greater stability and more flexibility, yet cost the user the tighter integration between large scale signal-based controls that existed in SuperCollider 2. ProcMod and its companion classes provide a language-based solution to this loss of larger scale control through their management of a combination of smaller tools available in the standard SuperCollider library of classes and objects, allowing the user to create complex gestures while controlling those gestures as a single entity.
Trace Matrix Analyzer (TMA)
Wenbin Li, Jane Huffman Hayes, Fan Yang, Ken Imai, Jesse Yannelli, Chase Carnes, Maureen Doyle
Computer Science
University of Kentucky
Lexington, Kentucky, USA
wenbin.li@uky.edu, hayes@cs.uky.edu, fan_yang_1@brown.edu, jesse.yannelli@uky.edu, chase.carnes@uky.edu,
Abstract—A Trace Matrix (TM) represents the relationship between software engineering artifacts and is foundational for many software assurance techniques such as criticality analysis. In a large project, a TM might represent the relationships between thousands of elements of dozens of artifacts (for example, between design elements and code elements, between requirements and test cases). In mission- and safety-critical systems, a third party agent may be given the job to assess a TM prepared by the developer. Due to the size and complexity of the task, automated techniques are needed. We have developed a technique for analyzing a TM, called Trace Matrix Analyzer (TMA), so that third party agents can perform their work faster and more effectively. To validate, we applied TMA to two TMs with known problems and golden answer sets: MoonLander and MODIS. We also asked an experienced software engineer to manually review the TM. We found that TMA properly identified TM issues and was much faster than manual review, but also falsely identified issues for one dataset. This work addresses the Trusted Grand Challenge, research projects 3, 5, and 6.
I. INTRODUCTION
"Requirements assurance aims to increase confidence in the quality of requirements through independent audit and review" [1]. A Trace Matrix (TM) represents the relationship between software engineering artifacts and is foundational for many assurance techniques such as criticality analysis, change impact analysis, and regression testing. In a large project, a TM might represent the relationships (trace links) between thousands of elements of dozens of artifacts (for example, between design elements and code elements, between requirements and test cases). In mission- and safety-critical systems, a third party agent may need to assess a TM prepared by the developer. There are currently no automated techniques to assist such an agent.
To support assurance activities, trace links and TMs must possess a number of characteristics (shared with requirements and requirement sets, as a matter of course [2]). Trace links must be: correct, unambiguous, and verifiable; the TM must be complete, consistent, and modifiable [2]. Our work focuses on ensuring that TMs and their trace links are complete and correct. Informally, a complete trace matrix is one where all the parent level elements trace to all appropriate children elements. A correct trace matrix is one that does not contain inappropriate or spurious trace links [1]. Theoretically, it is not possible to determine if a trace matrix is complete, just as it is not possible to determine that a set of requirements are complete. We therefore move to a surrogate line of inquiry: can we develop automated techniques to evaluate completeness (and correctness) as well as human analysts.
Toward that end, we developed a tool in C++ to analyze a given trace matrix and look for six types of potentially incorrect links (listed from hardest to easiest to detect): possible “bad” links, possible missing links, parents without children (e.g., high level requirements without links), children without parents, children with too many parents, and parents with too many children (e.g., a high level requirement may have more than ten children while most of the other high level requirements have less than five). The results were promising and the tool, Trace Matrix Analyzer or TMA, was rewritten in C# and converted to a TraceLab component.
The research question addressed by this paper is: Can a technique be developed to analyze provided trace matrices at least as well as humans? The contribution of the paper is several-fold:
- Introduces a technique for analyzing a provided TM,
- Undertakes an empirical study to evaluate the technique using a publicly available dataset,
- Undertakes an anecdotal study of manual review of a TM, and
- Provides a composite TraceLab component for use by others.
We applied TMA to two trace matrices for which issues had already been identified manually by independent experts. We used two common information retrieval (IR) measures to assess the composite component (using a gold standard or answer set against which to compare): recall, a coverage measure that indicates the percentage of true trace matrix issues that were retrieved; and precision, a noise measure that indicates the percentage of retrieved issues that were correct. We also looked at execution time. TMA exhibited 95% recall and 78% precision and ran very quickly (less than 2 seconds in wall clock time) for one dataset.
We also undertook an anecdotal study to compare the tool’s performance to that of a human analyst. We found little difference in the effectiveness of the two methods (TMA, manual), but found that TMA is much more efficient than manual evaluation.
The paper is organized as follows. Section 2 addresses trace matrix analysis. Section 3 presents related work. Sections 4 and 5 discuss validation and results, respectively. Section 6 provides conclusions and a look at future work.
II. TRACE MATRIX ANALYSIS
The Trace Matrix Analysis tool builds on prior work by Port et al. [1]. We explain possible issues that a TM may possess and the approach that is implemented in TMA.
A. Possible Issues
Trace Matrices must be examined by independent agents to ensure that the trace generation process was undertaken properly (whether a developer generated the TM as the lifecycle proceeded or an automated tool was used to generate the TM after the fact) as well as to check for common trace matrix issues. Specifically, our work focuses on assuring that trace matrices are complete and that individual links are correct.
In order to assure that a given TM is complete, we seek to address three questions: 1) do all parent elements have children elements?, 2) do all children elements have parents?, and 3) are there any missing links? The first question can be answered in a trivial way by examining the trace matrix for parent element identifiers (IDs) followed by no links. After inverting the trace matrix (examining it from the children element perspective), the second question can also be answered trivially by examining the trace matrix for children element identifiers (IDs) followed by no links.
The third question requires an evaluation of the trace matrix to apply heuristics or methods for identifying missing links. One way to accomplish this is to examine sibling relationships. For example, if children elements 1 and 2 share the parents A and B and child element 2 also has parent C, it could be inferred that child element 1 should also have a link to parent element C (see Figure 2). We have implemented such "sibling" checks for possible missing links. Further, we have looked at the notion of investigation sets as implemented by Port et al. to study the relationships between non-functional and functional requirements in a generated trace matrix [1].
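The sibling check can be made concrete with a short sketch. The snippet below is only an illustration, not the TMA implementation (which was written in C++ and later C#): it assumes the trace matrix is given as a mapping from child IDs to the set of parent IDs they trace to, and it flags a child as possibly missing a link when a sibling that shares at least two parents traces to an additional parent.

```python
# Minimal sketch of the sibling heuristic for possible missing links.
# Assumes the trace matrix is a dict mapping child IDs to sets of parent IDs;
# all names here are illustrative, not part of the TMA tool's actual API.

def possible_missing_links(child_to_parents):
    """Suggest (child, parent) pairs that siblings imply but the TM lacks."""
    suggestions = set()
    children = list(child_to_parents)
    for i, c1 in enumerate(children):
        for c2 in children[i + 1:]:
            shared = child_to_parents[c1] & child_to_parents[c2]
            if len(shared) >= 2:            # c1 and c2 count as "siblings"
                for p in child_to_parents[c2] - child_to_parents[c1]:
                    suggestions.add((c1, p))
                for p in child_to_parents[c1] - child_to_parents[c2]:
                    suggestions.add((c2, p))
    return suggestions

# Example from the text: children 1 and 2 share parents A and B,
# and child 2 also traces to C, so (1, C) is flagged as possibly missing.
tm = {"1": {"A", "B"}, "2": {"A", "B", "C"}}
print(possible_missing_links(tm))   # {('1', 'C')}
```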
Port et al. [1] automatically generate a trace matrix between all functional and non-functional requirements in a given system. They then use heuristics to identify investigation sets for the trace matrix. This approach is shown in Figure 1. The generated trace matrix is shown in the upper left of the figure and labeled T. Its inverse is generated, called AT or anti-trace. Links found to possess the characteristic of "high similarity" (which Port et al. [1] do not define, but which we define as a high relevance weight from the vector space model with tf-idf weighting associated with a link in the TM) form the high similarity or HT set. Based on these sets, we can find the low risk investigation set or L (lower left corner of Figure 1), possible missing links or M, and possible bad traces or F. Our TMA component uses these sets. M augments the set of possible missing links (completeness). F becomes the set of possible bad traces, thus addressing the other TM issue of interest: correctness of links. Next, we discuss how the approach has been implemented as a TraceLab component.
B. Analysis Approach
We implemented our approach using TraceLab, as shown in Figure 3. We created eight TraceLab components: High Similarity Matrix, Low Risk Trace, Possible Missing Link, Possible Bad Trace, Anti-Trace, Generate Issue List, Read Golden Answerset, and Compare Issue Lists. High Similarity Matrix uses the Vector Space Model to generate candidate links between the source and target artifacts. It then applies a threshold to the relevance weight and accepts all links above that threshold as elements of the High Similarity set HT. Low Risk Trace takes HT as input as well as the trace matrix T (from the answer set importer component) and determines the intersection of the two sets, L.
Possible Missing Link takes HT and L as input and returns their difference (M in the Port et al. paper [1]). Possible Bad Trace takes T and L as input and returns their difference (F, per Port et al. [1]). Anti-Trace returns AT which is the inverse of the trace matrix, T. Generate Issue List outputs all the issues identified (possible missing link, parent without children, etc.) along with the associated parent and/or child identifiers.
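A compact sketch of these set computations is shown below. It is illustrative only: the `similarity` function stands in for the vector space model relevance weight computed by the TraceLab components, the threshold value is arbitrary, and trace matrices are modeled simply as sets of (parent, child) links.

```python
# Illustrative sketch of the set computations behind the components above.
# A trace matrix is modeled as a set of (parent, child) links; `similarity`
# stands in for the VSM tf-idf relevance weight and is an assumption.

def analyze(T, parents, children, similarity, threshold=0.2):
    HT = {(p, c) for p in parents for c in children
          if similarity(p, c) >= threshold}        # high similarity set
    L = T & HT                                     # low-risk traced links
    M = HT - L                                     # possible missing links
    F = T - L                                      # possible bad traces
    AT = {(c, p) for (p, c) in T}                  # anti-trace (inverted TM)
    childless = {p for p in parents if all(link[0] != p for link in T)}
    orphans = {c for c in children if all(link[1] != c for link in T)}
    return L, M, F, AT, childless, orphans

parents, children = ["R1", "R2", "R3"], ["D1", "D2"]
T = {("R1", "D1"), ("R2", "D2")}
sim = lambda p, c: 0.9 if (p, c) in {("R1", "D1"), ("R1", "D2")} else 0.05
L, M, F, AT, childless, orphans = analyze(T, parents, children, sim)
print(M)          # {('R1', 'D2')}  possible missing link
print(F)          # {('R2', 'D2')}  possible bad trace
print(childless)  # {'R3'}          parent without children
```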
III. RELATED WORK
A number of researchers have described the creation [4, 10] or maintenance [8] of trace matrices, however validation and verification of the matrices has, up to now, primarily been performed by humans.
The use of automated methods for TM assessment was first explored and reported by A. Dekhtyar, Hayes, Sundaram, Holbrook, and O. Dekhtyar [4]. They present a method using Information Retrieval (IR), multiple trace recovery tools, and voting among the tools to demonstrate the detection and rejection of false positives introduced by automatic trace tools. They found that humans were better at finding false positives; however, automation did find false positives that were not detected by humans. The findings from Dekhtyar et al. [4] influenced the requirements for a low false positive rate for the tool discussed within this paper. Also, our results have led us to consider using a variety of trace techniques for generating and combining multiple HT sets. This remains future work.
Port, Hayes, Huang, and Nikora [1] examined the problem of missing requirements between non-functional requirements and functional requirements. This paper applies a similar methodology using text-mining and statistical analysis to analyze TMs and identify sets of potential missing links for a larger set of artifacts.
Ghabi et al. [9] describe an interesting tool they developed for validation of requirements-to-code traces. They demonstrate their tool's effectiveness using four gold-standard case studies. Ghabi's work is similar to the work described here; however, it was developed to address maintenance of the trace matrix and not overall verification and validation. In addition, differences include: TMA is a static tool, TMA does not require a caller/callee relationship, and TMA does not require a code graph for evaluation. In addition, though Ghabi's work may be extended to apply to a general graph, TMA was designed to support any and all types of artifact pair(s).
IV. VALIDATION
In order to evaluate the TMA approach, we undertook a small scale study. The research question, variables, hypotheses, study design, and threats to validity are presented below.
**A. Research Question**
The research question for the study is: Can a technique be developed to analyze provided trace matrices at least as well as humans in terms of effectiveness (recall, precision) and efficiency (time)?
B. Dependent and Independent Variables
The Dependent Variables (DV) are recall, precision, and time. Recall (R) measures the percentage of the true issues that a technique is able to retrieve. Precision (P) measures the percentage of retrieved issues that are correct. Time (S) measures, in wall clock seconds, how long it takes to generate an issue list.
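As a rough illustration (not the actual evaluation scripts), recall and precision can be computed by comparing a reported issue list against the golden answer set, with wall-clock time measured around the analysis call; the issue tuples below are invented for the example and are not from the MODIS answer set.

```python
# Illustrative computation of the dependent variables. Issues are encoded as
# hashable tuples, e.g. ("missing_link", parent_id, child_id).
import time

def recall_precision(reported, golden):
    reported, golden = set(reported), set(golden)
    hits = reported & golden
    recall = len(hits) / len(golden) if golden else 1.0
    precision = len(hits) / len(reported) if reported else 1.0
    return recall, precision

golden = {("parent_no_child", "R7"), ("bad_link", "R2", "DC5")}
reported = {("parent_no_child", "R7"), ("missing_link", "R1", "DC3")}

start = time.perf_counter()
r, p = recall_precision(reported, golden)
seconds = time.perf_counter() - start          # wall-clock time, the S variable
print(f"recall={r:.2f} precision={p:.2f} time={seconds:.6f}s")
```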
The Independent Variable (IV) is technique. The IV has two levels: TMA (T) or manual (M).
C. Hypotheses
There are three sets of hypotheses:
Null hypothesis 1 ($H_{01}$): $R(TM) = R(M)$
Alternative hypothesis 1 ($H_{A1}$): $R(TM) > R(M)$
Null hypothesis 2 ($H_{02}$): $P(TM) = P(M)$
Alternative hypothesis 2 ($H_{A2}$): $P(TM) > P(M)$
Null hypothesis 3 ($H_{03}$): $S(TM) = S(M)$
Alternative hypothesis 3 ($H_{A3}$): $S(TM) < S(M)$
D. Study Design
The study was designed to evaluate the performance of TMA on publicly available datasets compared to a human analyzing the trace matrix for the same datasets.
We ran the study on the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset and the MoonLander dataset. The MODIS dataset [5, 6] is an open source NASA scientific instrument; it consists of 19 high level and 49 low-level requirements. The MoonLander dataset [7] is a text-based game written by undergraduates at CalPoly San Luis Obispo; it consists of 10 high level requirements and 5 test cases.
The consenting participant (per the University's institutional review board process) was given a pre-study questionnaire to gauge any prior experience with tracing and trace matrices. The participant possessed strong software engineering experience (17 years), which biased the comparison in favor of the manual method. The study was then explained to the participant. The participant was given the MoonLander dataset in hardcopy. The dataset contained the trace matrix, requirements, and test cases. The participant was asked to record the time expended on the task, to record issues, and then to answer several short post-study questions (such as describing the process applied).
After finishing, the participant was given the MODIS dataset and was also shown the RETRO.NET tool [8] that could be used to interactively examine the datasets and matrix. Some hardcopy artifacts were also provided: trace matrix, high-level requirements, low-level requirements. We manually calculated recall and precision using golden answer sets for each dataset. In addition, we ran the TMA TraceLab component on the same datasets and captured execution time. The choice of threshold significantly affects the performance of TMA. We used three thresholds (0.1, 0.2, and 0.4) to filter the similarity scores generated by the vector space model when building the high similarity matrix, which is used to generate the list of possible bad links and possible missing links. We checked the high similarity matrix generated by the Vector Space Model component manually and found that most of the links have weights between 0.1 and 0.2. Thus, the three thresholds we used were representative.
E. Threats to Validity
There were four possible types of threats to validity for the study. A threat to internal validity was the possibility that something other than our independent variable was impacting recall, precision, and time. The main threat was possible distractions or confounding factors regarding the human analyst. We did not monitor the analyst during the study, and it is possible that they were distracted by other events and/or made use of other sources to assist them (though we do not think this was the case). In addition, we provided an automated tool to assist with review of the second trace matrix. To mitigate this internal validity threat, we asked the participant to document the process they used as well as to time their work.
A possible threat to construct validity was "hypothesis guessing." The participant may have tried to guess what the study was about and based his/her behavior accordingly. A threat to external validity is the use of two datasets that were rather small, although the MODIS dataset is a real dataset. Because our datasets covered only two domains, we cannot generalize the results to all domains or all project trace matrices.
A possible threat to conclusion validity is that proper statistical analysis is not performed or that data violate the assumptions of the statistical tests. We cannot claim that our results are statistically significant (sample is too small). We discuss the results next.
V. RESULTS
Below we present the results of the study as well as some observations.
A. Study Results
The MODIS dataset and its trace matrix were inspected manually and independently: the trace matrix has 19 issues that comprise the trace matrix analysis golden answerset. Of these 19, there are: 11 parent artifacts without any children artifacts, six bad links, and two missing links. Table I shows the issues found by the participant and TMA. The participant identified 11 parent artifacts without children, four missing links, and three incorrect links. With threshold 0.1, TMA found 11 parent artifacts without children, 85 missing links, and seven bad links. TMA found all the parents with no children regardless of the threshold (this is a trivial check). With threshold 0.2, the number of missing links greatly reduced to four, while the number of bad links increased to eight. With threshold 0.4, TMA did not find any missing links and bad links increased to nine.

Table II shows the precision, recall, and time for the manual and TMA techniques applied to the MODIS dataset. The participant (the manual method) performed better than TMA with respect to finding missing links because both missing links were found; of note is that the participant also misidentified two links as missing (false positives). TMA at threshold of 0.1 also found both of the missing links, but returned many incorrect missing links. The reason for this is that there are too many links with weight higher than 0.1, and all these links are considered “possibly missing.” For bad links, the two methods performed similarly. The participant only found half of the bad links, but all that were found were correct; TMA threshold 0.2 and TMA threshold 0.4 found all six of the bad links, but identified false positives as well. The threshold of 0.4 also prevented the approach from finding any missing links, because the weights of most links in this dataset are below 0.4.
We added the number of issues (regardless of their types) for each method and arrived at the recall and precision shown in Table III. As can be seen, the manual method has the highest precision (89% versus 85%), and TMA at threshold of 0.2 has recall as high as 95% (versus 84% for manual).
While the effectiveness of the manual method and TMA 0.2 are not so different, there is a major difference in their efficiency. The participant spent 2,280 seconds (38 minutes) to evaluate the matrix (after training was performed) while the TMA method took only one second to generate a similar result. It should be noted that any false positives generated by TMA may require review (and thus time) on the part of an analyst.
Neither the manual review nor TMA performed well on the "toy" dataset (MoonLander). All of the non-trivial issues identified by TMA (there were five in total) were false positives. In addition, TMA missed three possible missing links. There was little consolation that TMA correctly identified the two parents with no children (making for two of eight issues correctly identified (25% recall), with five false positives of seven issues identified (28.7% precision)). The participant also performed poorly on this dataset, incorrectly identifying one bad link and 19 missing links, for recall of 100% and precision of 20% (this took him 25 minutes, or 1,500 seconds).
We are currently investigating the dataset to understand this result. Our current thinking is that the elements of MoonLander are very simple (sentences) and there is a tremendous amount of repeated text in each element of both artifacts.
Based on the results, we are not able to reject null hypotheses 1 and 2. It does appear that TMA saves significant time. If we had a larger sample size, we might be able to reject null hypothesis 3.
B. Observations
It is obvious that the threshold greatly affects the performance of TMA. This is not surprising, because threshold directly affects the HT. As expected, with an increase of threshold comes a decrease in the number of links in HT. Consequently, TMA finds fewer possible missing links and more possible bad links. The recall will increase with the threshold, but the precision will reach a maximal value at a certain threshold and then decrease. In this study, the threshold value for maximal precision appears to be close to 0.2.
According to Table I, this effect is more obvious in finding missing links. While a suitable threshold (in this case, 0.2) can generate reasonable results, a higher threshold (in this case, 0.4) prevents TMA from finding any possible missing links. In contrast, TMA 0.1 found many incorrect missing links. The low precision makes the result less useful since a human will be required to weed out false positives. To investigate this, we checked the similarity matrix generated by the Tracer component of TraceLab. We found that the weights of a large portion of the links are between 0.1 and 0.2. With a threshold of 0.1, these links were all included in the high similarity matrix and were considered as possible missing links. However, a threshold of 0.4 resulted in only one link being in the high similarity matrix, causing TMA to not find any possible missing links.
To analyze how to improve the performance of TMA, we also examined the possible missing links that were not found by TMA 0.2 and the false positive bad links that were only found by the TMA method. We found that the children artifacts of the two missing links share very few keywords with their parent artifacts; this causes the weight of these two links to be lower than the threshold of the high similarity matrix. The participant found these missing links based on the element semantics (they read the parent and child text and realized that the text meant the same thing though common terms were not used). However, the threshold prevents these links from being included in HT, which also makes it impossible to find these links in M, the possible missing links. Additionally, high thresholds also explain the two false positive bad links. The similarity weight of these two links was so low that even TMA 0.1 did not include them in HT.
The results show that the quality of HT significantly affects the performance of TMA. There are two factors that affect the quality of HT: the threshold and the tracing technique. The “best” threshold that maximizes TMA effectiveness for finding bad links and other issues may depend on the dataset. The value of this threshold should not be too low in order to prevent low precision in finding possible missing links; also, it should not be too high to prevent too many false positive bad links.
In this study, we used the vector space model (VSM) with term frequency-inverse document frequency (TF-IDF) weighting as the tracing technique. One possible way to improve HT is to use other tracing techniques (such as LSI or Probabilistic\(^1\)) or to use Okapi or LTU as the weighting option for VSM. Using different techniques to generate multiple HT sets, it is possible to build a "combined HT" that may have higher quality.
VI. CONCLUSIONS AND FUTURE WORK
We believe that TMA can be useful in validating a given trace matrix and be used to assist in finding issues in it. With a well-designed TMA program, analysts need to merely set the proper threshold and then check the results. A suitable threshold can be estimated by checking the weights of all possible links. Alternatively, when the TMA returns too many possible missing links or possible bad links, it is clear that the threshold should be increased or decreased accordingly. Comparing the effects of various thresholds is an easy task because of the efficiency of TMA. Once the analyst gets a result, he/she should focus on the possible bad links with low weights because these links may be false positives. Similarly, he/she should focus on the possible missing links with high
\(^1\) The reader is referred to the chapter on Information Retrieval techniques in the “Software and Systems Traceability” book (Springer, 2012) for definitions of LSI, LTU, and Okapi.
weights because these will always be included in the HT (regardless of correctness).
Future work includes examining the threshold issue to see if more specific, dataset-specific guidance can be given to human analysts. Another possible improvement is expanding the current approach by comparing the parents that have similar children. For example, if two parents share most of their children, it is possible that they also share other non-linking children. This may illustrate missing links. In addition, it should be noted that several of the analyses performed require only the TM and a list of the identifiers of the source and target artifacts (we assume that a TM presents the trace links from the source to the target [3]). This is useful for third party agents who are not permitted to share the text of the two artifacts being traced and further are not permitted to install external tools on their computer systems. At this time, we require the full text of both artifacts. Implementing checks using just the identifiers remains future work.
ACKNOWLEDGMENTS
This work is funded in part by the National Science Foundation under NSF grants CCF-0811140 (research) and ARRA-MRI-R2 500733SG067 (benchmark development). We thank our anonymous participant.
Abstract
This task proposes a challenge to support the interaction between users and applications, micro-services and software APIs using natural language. It aims to support the evaluation and evolution of the discussions surrounding the application of natural language processing techniques within the context of end-user natural language programming, under scenarios of high lexical and semantic heterogeneity.
1 Introduction
The specific syntax of traditional programming languages and the user effort associated with finding, understanding and integrating multiple interfaces within a software development task define the intrinsic complexity of programming. Despite the widespread demand for automating actions within a digital environment, even basic software development tasks require prior (usually extensive) software development expertise. Domain experts processing data, analysts automating recurrent tasks, or a businessman testing an idea on the web depend on the mediation of programmers to materialise their demands, independently of the simplicity of the task to be addressed and of the availability of existing services and libraries.
Recent advances in natural language processing bring the opportunity of improving the interaction between users and software artefacts, supporting users to program tasks using natural language-based communication. This ability to match users' action intents and information needs to formal actions within an application programming interface (API), using the semantics of natural language as the mediation layer between both, can drastically impact the accessibility of software development. Despite the fact that some software development tasks with stricter requirements will always depend on the precise semantic definition of programming languages, there is a vast spectrum of applications with softer formalisation requirements. This subset of applications can be defined and built with the help of natural language descriptions.
This SemEval task aims to advance the state-of-the-art discussions and techniques concerning the semantic interpretation of natural language commands and user action intents, bridging the semantic gap between users and software artefacts. The practical relevance of the challenge lies in the fact that addressing this task supports improving the accessibility of programming (meaning a systematic specification of computational operations) to a large spectrum of users who demand increased automation for specific tasks. Moreover, with the growing availability of software artefacts, such as APIs and services, there is a higher demand to support the discoverability of these resources, i.e. devising principled semantic interpretation approaches to semantically match interface descriptions with user intents.
The proposed task also intersects with demands from the field of robotics, as part of the human-robot interaction area, which depends on a systematic ability to address user commands that lie beyond navigational tasks.
From the point-of-view of computational linguistics, this challenge aims to catalyse the discussions in the following dimensions:
- Semantic parsing of natural language commands;
- Semantic representation of software interfaces;
- Statistical and ontology-based semantic matching techniques;
- Compositional models for natural language command interpretation (NLCI);
- Machine learning models for NLCI;
- API/Service composition and associated planning techniques;
- Linguistic aspects of user action intents.
2 Commands & Programming in Natural Language
The use of natural language to instruct robots and computational systems in general has been an active research area since the 1970s and 1980s (Maas and Suppes, 1985; Guida and Tasso, 1982) (and references therein). Initiatives vary over a large spectrum of application domains including operating system functions (Manaris and Dominick, 1993), web services choreography (Englmeier et al., 2006), mobile programming by voice (Amos Azaria, 2016), domain-specific natural programming languages (Pane and Myers, 2006), industrial robots (Stenmark and Nugues, 2013) and home care assistants.
The variability of domains translates into a wide number of research communities comprising different foci and being expressed by distinct terms such as natural language interfaces, end-user development, natural programming, programming by example and trigger-action development. Some of these terms embrace wide domains, also including non-verbal (visual) approaches.
2.1 Semantic Parsing & Matching
The interpretation of natural language commands is typically associated with the task of parsing the natural language input to an internal representation of the target system. This internal representation is usually associated with an n-ary predicate-argument structure which represents the interface for an action within the system. The identification of which action the command refers to, and of its potential parameters, is at the centre of this task.
Taking as an example the natural language command:
*Please convert US$ 475 to the Japanese currency and send this value to John Smith by SMS.*
We can conceptualise the challenges involved in the command interpretation process in three dimensions: command chunking, term type identification and semantic matching. The chunking dimension comprises the identification of terms and segments in the original sentence that can potentially map to the system actions and parameters. The example command embodies two actions: converting currency and sending SMS. For the first action, the command interpreter needs to identify the currencies involved in the transaction and the financial amount (term type identification).
Other semantic interpretation processes might be involved. In the case of the second action, besides identifying John Smith as the message’s receiver, the interpreter also needs to resolve the co-reference of this value to the currency conversion result and instantiate it as a parameter in the content of the message. This first level of interpretation of the command would generate an output such as:
```
SEQUENCE {
  ACTION: [ convert currency ]
  PARAMS: [US$ 475] - [ (to) Japanese currency ]

  ACTION: [ send sms ]
  PARAMS: [this value] - [ (to) John Smith ]
}
```
The matching process corresponds to the mapping between terms from the user vocabulary to the terms used in the internal representation of the system (the API). In the given example, the system should find an action that can convert currencies and another that can send SMS messages.
In the example, depending on the parameterisation of the command interface, the value [US$ 475] needs to be split into two parameters, and these parts mapped to the internal vocabulary of the system (US$ needs to be interpreted as USD, while Japanese currency needs to be translated to JPY). For the second action, similarly, John Smith will be used to retrieve a phone number from a user personal data source.
The final execution command is the result of the matching process, as shown below:
```
ACTION ENDPOINT: [action id]
PARAMS:
```
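As a rough illustration of this vocabulary-matching step, the sketch below maps user-facing currency terms to the internal codes expected by the API; the alias table is invented for the example and is not part of the task's released resources.

```python
# Illustrative sketch of normalising user vocabulary to the API's internal
# vocabulary. The alias table is an assumption made for this example.
CURRENCY_ALIASES = {
    "us$": "USD",
    "dollars": "USD",
    "japanese currency": "JPY",
    "yen": "JPY",
}

def normalize_currency(term: str) -> str:
    """Map a user-facing currency expression to an ISO-style code."""
    return CURRENCY_ALIASES.get(term.strip().lower(), term)

print(normalize_currency("US$"))                # USD
print(normalize_currency("Japanese currency"))  # JPY
```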
The task can be addressed using different semantic interpretation abstractions: shallow parsing, lambda-calculus-based semantic parsing (Artzi et al., 2014), compositional-distributional models (Freitas and Curry, 2014; Freitas, 2015), information retrieval approaches (Sales et al., 2016). Additionally, pre-processing techniques such as clausal disembedding (Niklaus et al., 2016) and co-reference resolution are central components within the task.
While approaches and test collections emphasising the shallow parsing aspect of the problem are more prevalent in the literature (Section 3), others focusing on a semantic matching process involving a broader vocabulary gap (Furnas et al., 1987) are less prevalent. Part of this can be explained by the domain-specific nature of previous works (e.g. focus on spatial commands (Dukes, 2014)).
In contrast, this task emphasises the creation of a test collection targeting an open domain scenario, with a large-scale set of target actions, assessing the ability of command interpretation approaches to address a larger vocabulary gap. This scenario aims to instantiate a real use case for end-user natural language programming, since the action knowledge base used in the test collection maps to real-world APIs and so a semantic interpreter developed over this test collection can become a concrete end-user programming environment.
3 Similar Initiatives
Most of the applications related to the parsing of natural language commands are within the context of human-robot interaction. The Human Robot Interaction Corpus (HuRIC) describes a list of spoken commands between humans and robots. It is composed of three datasets which were developed under the context of three different events. They are annotated using Frame Semantics together with Holistic Spatial Semantics (Bastianelli et al., 2014).
Artzi et al. (2014) and Tellex et al. (2014) give a more focused contribution in the interpretation of spatial elements. In both cases, the vocabulary variability is more constrained. Similar vocabulary variability assumptions are present in Thomason et al. (2015) and Azaria et al. (2016).
In 2014, SemEval hosted a task related to the parsing of natural language spatial commands (Dukes, 2014), also targeting a robotics scenario. More specifically, the task proposed the parsing of commands to move a robot arm that moved objects within a spatial region.
The proposed task can be contrasted with these previous initiatives in the following dimensions: (i) more comprehensive knowledge base of actions, (ii) generic (open domain) user programming scenarios and (iii) exploration of the interaction between actions and user personal information (Section 4).
The work most similar to this test collection is the problem defined by Quirk et al. (2015) under the ifttt.com platform, which targets the creation of an if-then recipe from a natural language description provided by the user. The first difference between the two tasks is the fact that, while the program structure is limited to if-then recipes in Quirk et al., other more complex structures are supported in this task. Secondly, in the case of Quirk et al., the task requires only the mapping of the actions that comprise the recipe, leaving aside the instantiation of the parameter values, while our proposed task emphasises both. Finally, the presence of these two characteristics introduces the challenge of mapping co-references and metonymy within the task.
4 Task Definition
The task comprises 210 scenarios which consist of a total of 438 natural language commands. Figures 1 and 2 depict an excerpt of the task. A scenario is a set of sentences that defines a program in natural language. The excerpt below shows an example of a scenario:
“When a message from Enrico Hernandez arrives, get the necklace price; Convert it from Chilean Pesos to Euro; If it costs less than 100 EUR, send to him a message asking him to buy it; If not, write saying I am not interested.”
Associated with each scenario, there is a program which is composed of actions from the Action Knowledge Base (Action KB). In addition to the actions, the program also uses If and Foreach constructors, having the same semantics commonly expressed in programming languages to define the execution flow.
Like a programming language function, an action can have input parameters and return values. Table 1 shows examples of natural language commands describing scenarios.
<table>
<thead>
<tr>
<th>Natural language scenario commands</th>
</tr>
</thead>
<tbody>
<tr>
<td>If I receive a deposit from John Sanders in my bank account, send this message to him: “Hello John, thanks for your gift, I received your deposit of some money to me, thanks a lot, buddy.”</td>
</tr>
<tr>
<td>Send an email to Mark asking him for the picture we took in Munich. When I receive the answer, get the attached image and publish it on my Flickr account with the tags #munich, #germany, #ny-love</td>
</tr>
<tr>
<td>Find “Bachianas N.5 of Villa-Lobos” on Youtube. Get the link and send to my mum.</td>
</tr>
<tr>
<td>Find a picture of Darth Vader on Flickr. Post this text to my friends on Facebook with the picture of Darth Vader: May The Force Be With Us Next Friday!!!</td>
</tr>
<tr>
<td>Search on eBay for the iPhone 7 with the maximum price of 700 Euro and send the result list by e-mail to my wife.</td>
</tr>
<tr>
<td>Message Dr Brown by email, asking a suitable day for a meeting: When I receive the information, send to my wife by email:</td>
</tr>
<tr>
<td>Search for a picture of Yoda. Attach that image in a Facebook post and write this: Friends, let’s go to the cinema to see Star Wars on Friday.</td>
</tr>
<tr>
<td>When I receive an email from Helena, get the attachment. Print it and write to Mr Sanders by Skype: Hi Mr Sanders, the document is at the printer.</td>
</tr>
<tr>
<td>If someone reports a problem on GitHub, send the problem title by Skype to John, if the project name is FinanceSystem. For all other systems, send a message to the Tech Manager.</td>
</tr>
<tr>
<td>If Manchester United wins, put Thriller of Michael Jackson in Spotify “celebrations” playlist and call me to say “we are the champions, my friends.”</td>
</tr>
<tr>
<td>Open the door always when reaching Central Park.</td>
</tr>
<tr>
<td>Get a quote about science. Get a photo of Paris. Attach that image in an email, write the quote and send to <a href="mailto:maria@hotmail.com">maria@hotmail.com</a>.</td>
</tr>
<tr>
<td>Get the translation of the hashtag #sqn. Convert it to a QR code and send to my Skype account.</td>
</tr>
</tbody>
</table>
Table 1: Examples of natural language commands describing scenarios.
The values of the parameters map to constants (e.g. integer numbers, string values) or to tags, which represent returning data from previously executed actions. There are two types of tags:
- **<returnX>** The return tag represents the content returned by the action X, where X is a sequential identifier.
- **<item>** The item tag is used only in the context of Foreach constructors. It represents an iterated item.
Both types of tags have some additional naming assumptions in order to simplify the syntax of the generated program. Examples of valid tags are:
- **<return1>** - meaning the data returned by the first action in the scenario.
- `<item>.url` - represents the attribute `url` of the item.
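To make the tag semantics concrete, the sketch below shows one possible way a generated program could be executed, substituting `<returnN>` tags (optionally with an attribute) with the stored output of earlier actions; the toy action registry is hypothetical and is not part of the Action KB.

```python
# Sketch of how return tags could be resolved when a generated program is
# executed: each action's output is stored under its sequential id, and later
# parameters referencing <returnN> (optionally with an attribute) are
# substituted before the call. The action registry here is hypothetical.
import re

TAG = re.compile(r"<return(\d+)>(?:\.(\w+))?")

def resolve(value, returns):
    m = TAG.fullmatch(value)
    if not m:
        return value                       # plain constant, e.g. "100 EUR"
    result = returns[int(m.group(1))]      # output of action N
    return result[m.group(2)] if m.group(2) else result

def run(program, actions):
    returns = {}
    for n, (action, params) in enumerate(program, start=1):
        args = {k: resolve(v, returns) for k, v in params.items()}
        returns[n] = actions[action](**args)
    return returns

# Toy actions standing in for Action KB entries (illustrative only).
actions = {
    "get_price": lambda item: {"amount": 90, "currency": "EUR"},
    "send_sms":  lambda to, text: f"sent '{text}' to {to}",
}
program = [
    ("get_price", {"item": "necklace"}),
    ("send_sms",  {"to": "Enrico", "text": "<return1>.amount"}),
]
print(run(program, actions)[2])   # sent '90' to Enrico
```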
In addition to the scenarios, the test collection consists of:
- **Action KB**: The set of available API functions along with their respective documentation. The information describing the API functions does not follow a strict pattern. While some documentation has rich natural language descriptions or shows usage examples, others are succinct and just contain the frame and parameter names. The same occurs concerning data format, data type and return data. This test collection reflects the variability and heterogeneity that we find in real-world APIs.
- **User KB**: A personal user information dataset, which is necessary to make commands more natural by supporting co-reference resolution. It allows commands like "Call John", once the system can identify the proper phone number from the User KB.
An example excerpt of the User KB is described below:
```json
[
{
"name": "Maria Alice",
"address": "Rua Central, 35, Rio de Janeiro, Brasil",
"facebook": "malice",
"group": "classmates",
"mail": "maria@alice.com.br",
"phone": "555 111 222",
"skype": "maria.alice",
"tags": "my wife",
"twitter": "malice"
},
{
"name": "John Sanders",
"address": "7 North Avenue, New York, USA",
"facebook": "jsanders",
"group": null,
"mail": "john@fam.com",
"phone": "111 555 777",
"skype": "johnjohn",
"tags": null,
"twitter": "jsanders"
}
]
```
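As an illustration of how the User KB supports such commands, the sketch below indexes contacts both by name and by the free-text tags field, so that "my wife" or "John Sanders" can be resolved to a concrete channel identifier; a real interpreter would of course need fuzzier matching than exact string lookup.

```python
# Sketch of grounding contact references against the User KB excerpt above:
# contacts are indexed by name and by the "tags" field, then the channel-
# specific identifier (mail, phone, skype, ...) is returned.
import json

user_kb_json = """
[
  {"name": "Maria Alice", "mail": "maria@alice.com.br",
   "phone": "555 111 222", "tags": "my wife"},
  {"name": "John Sanders", "mail": "john@fam.com",
   "phone": "111 555 777", "tags": null}
]
"""

def build_index(kb):
    index = {}
    for contact in kb:
        index[contact["name"].lower()] = contact
        if contact.get("tags"):
            index[contact["tags"].lower()] = contact
    return index

kb = json.loads(user_kb_json)
index = build_index(kb)
print(index["my wife"]["mail"])        # maria@alice.com.br
print(index["john sanders"]["phone"])  # 111 555 777
```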
Table 2 shows examples of action frames used in the scenarios.
<table>
<thead>
<tr>
<th>id</th>
<th>action_name(params*)</th>
</tr>
</thead>
<tbody>
<tr>
<td>700000</td>
<td>make_a_payment(invoice)</td>
</tr>
<tr>
<td>600603</td>
<td>send_an_email(attachment_url)</td>
</tr>
<tr>
<td>700002</td>
<td>read_file_content(file)</td>
</tr>
<tr>
<td>700003</td>
<td>extract_content(info)</td>
</tr>
<tr>
<td>503679</td>
<td>convert(to)</td>
</tr>
<tr>
<td>700005</td>
<td>get_contacts(group)</td>
</tr>
<tr>
<td>600490</td>
<td>upload_public_photo_from_url(tags)</td>
</tr>
<tr>
<td>700006</td>
<td>search_image_on_Flickr(query)</td>
</tr>
<tr>
<td>700007</td>
<td>search_video_on_YouTube(query)</td>
</tr>
<tr>
<td>600413</td>
<td>create_a_link_post(link_url)</td>
</tr>
<tr>
<td>601735</td>
<td>post_a_tweet_with_image(image_url)</td>
</tr>
<tr>
<td>700008</td>
<td>tweets_from_search_term(query)</td>
</tr>
<tr>
<td>502328</td>
<td>directions( starting )</td>
</tr>
<tr>
<td>600352</td>
<td>new_item_from_search(search_term)</td>
</tr>
<tr>
<td>600979</td>
<td>share_a_link(image_url)</td>
</tr>
<tr>
<td>700009</td>
<td>create_calendar_item(which_day?)</td>
</tr>
<tr>
<td>600761</td>
<td>print_document(document_url)</td>
</tr>
<tr>
<td>601535</td>
<td>post_message(user_name)</td>
</tr>
<tr>
<td>500397</td>
<td>convert_file(file)</td>
</tr>
<tr>
<td>700011</td>
<td>any_new_post_by_someone(user)</td>
</tr>
<tr>
<td>600591</td>
<td>any_new_issue(user)</td>
</tr>
<tr>
<td>601206</td>
<td>new_article_in_section(section)</td>
</tr>
<tr>
<td>600187</td>
<td>add_a_bitlink(url)</td>
</tr>
<tr>
<td>601732</td>
<td>post_a_tweet(tweet_text)</td>
</tr>
<tr>
<td>601888</td>
<td>picture_of_the_day(section)</td>
</tr>
<tr>
<td>600840</td>
<td>add_photo_to_album(album_name)</td>
</tr>
<tr>
<td>600408</td>
<td>new_final_score(team)</td>
</tr>
<tr>
<td>601684</td>
<td>new_story_from_section(which_section?)</td>
</tr>
<tr>
<td>600596</td>
<td>create_an_issue(body)</td>
</tr>
<tr>
<td>601791</td>
<td>air_quality_changed(device)</td>
</tr>
<tr>
<td>503062</td>
<td>search(depart-date)</td>
</tr>
<tr>
<td>302335</td>
<td>check(text)</td>
</tr>
<tr>
<td>600326</td>
<td>take_snapshots(which_camera?)</td>
</tr>
<tr>
<td>503335</td>
<td>get-top-definition(hashtag)</td>
</tr>
</tbody>
</table>
The natural language scenarios, Action KB and User KB are all described using JSON as a serialisation format. The Action KB is composed of about 3800 micro-services from Mashape (mashape.com) and 1900 actions and triggers from the ifttt.com platform. APIs from Mashape and ifttt.com are public, and their instantiation for the challenge was approved by the platform owners.
Table 2 shows examples of action frames used in the dataset and Table 3 shows metrics about the scenarios, actions and the associated natural language commands, showing the natural language signature of the test collection.
4.1 Annotation
The scenarios containing the natural language commands were created using high-level task descriptions. These high-level task descriptions were sent to a crowdsourcing platform (CrowdFlower), in which workers were requested to express in natural language the commands which entail the scenario descriptions. Motivated by those scenario descriptions, the users proposed a set of commands which addresses the specification.
The excerpt below shows an example of a scenario description:
You are arranging a meeting with some people in Andre’s office. Adamantios is coming for that meeting, but he does not know how to drive in Passau. Additionally, you do not know where the office is.
One possible output for that description is:
• Ask Andre for the address of his office;
• Make a map from the university to it;
• Send the map to Adamantios including driving directions.
For each scenario description, on average ten workers were invited to suggest the natural language commands. The crowdsourcing process was followed by a data curation process which discarded 70% of the commands due to low-quality issues. The remaining commands were reviewed to correct misspellings and adjusted to comply with the task requirements while preserving the original syntactic structure and vocabulary.
5 Analysis of The Task Complexity
The task aims to explore vocabulary and syntactic structure variation within the natural language commands. It also targets the orchestration of different natural language processing techniques, including syntactic parsing, semantic role labelling, fine-grained semantic approximation and co-reference resolution.
5.1 Semantic approximation
Different actions and parameters can be expressed using distinct lexicalizations (synonymy) and abstraction levels. For example:
“If someone reports a problem in GitHub, send the problem’s headline by Skype to John.”
In the example, the action in the knowledge base is expressed as "any new issue", while the intended "headline" corresponds to the returned value "Issue Title". Given the context, the system is expected to be able to identify the equivalence between the term pairs (problem, issue) and (title, headline).
5.2 Syntactic variation
Additionally, interpreters are expected to cope with syntactic variation.
“If Manchester United wins, call me.”
“Get ready to call me in the case of victory of Manchester United.”
5.3 Co-reference and metonymy resolution
The first type of resolution needed is pronominal co-reference, where a pronoun refers to a constant previously mentioned within the context of the same scenario. Metonymy resolution consists of using a reference to an attribute or type to refer to a constant or to a different attribute of a constant. For example:
“If an issue is created, send its content to the Tech Manager.”
This excerpt shows both cases. The pronoun *its* refers to the issue, while Tech Manager is a metonymy for the Tech Manager's email address (sandra@andrade.com.br according to the User KB).
6 Evaluation
The final dataset contains commands and their associated mappings to the Action KB. Given a command in natural language, it is expected that the participating systems provide:
- The correct action;
- The correct mapping of text chunks in the natural commands to parameters;
The participating systems were evaluated considering four criteria:
1. Resolved individual actions ignoring parameter values;
2. Resolved individual actions considering parameter values;
3. Resolved scenarios ignoring parameter values;
4. Resolved scenarios considering parameter values.
Criteria 1 and 2 are quantified by using precision and recall, while 3 and 4 are quantified by the percentage of the total number of scenarios which were addressed.
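A minimal sketch of the action-level scoring (criteria 1 and 2) is given below; it assumes each mapping is a (scenario id, action id, parameter set) triple and uses invented data, so it only illustrates how ignoring or considering parameter values changes the scores, not the official evaluation script.

```python
# Sketch of action-level evaluation (criteria 1 and 2). Each mapping is a
# (scenario_id, action_id, frozenset_of_params) triple; data is illustrative.

def score(predicted, gold, with_params=True):
    strip = (lambda m: m) if with_params else (lambda m: m[:2])
    pred = {strip(m) for m in predicted}
    gld = {strip(m) for m in gold}
    hits = pred & gld
    precision = len(hits) / len(pred) if pred else 0.0
    recall = len(hits) / len(gld) if gld else 0.0
    return precision, recall

gold = [(1, "send_an_email", frozenset({("to", "maria@alice.com.br")}))]
pred = [(1, "send_an_email", frozenset({("to", "my wife")}))]

print(score(pred, gold, with_params=False))  # (1.0, 1.0) - action matched
print(score(pred, gold, with_params=True))   # (0.0, 0.0) - wrong parameter
```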
Participating teams were allowed to use external linguistic resources and external tools such as taggers and parsers.
7 Participants and Results
Initially, nine teams demonstrated interest in the tasks, but only one participated in the challenge.
Kubis et al. (2017) proposed the EUDAMU system, which implements an action ranking model based on TF-IDF and a type matching system.
The EUDAMU system is composed of a pipeline divided into six steps. It starts by preprocessing the dataset using three tools (NLTK, CoreNLP and SyntaxNet). In the pre-processing step, natural language commands are tokenized and each token is enriched with its lemma, part-of-speech and named entity labels. Additionally, it also adds the constituent and dependency structures associated with the commands. The final pre-processing step annotates the commands with types, which support the system in resolving co-references between the actions and references from the User KB. The same procedure (with the exception of the last step) is applied to the Action KB.
The preprocessing phase is followed by the Discourse Tagger, which is responsible for individualising the commands from the paragraph description of the scenario. The team implemented this component using a rule-based approach.
<table>
<thead>
<tr>
<th>Criterion</th>
<th>Metric</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Individual actions solved ignoring parameter values</td>
<td>precision</td>
<td>0.5490</td>
</tr>
<tr>
<td></td>
<td>recall</td>
<td>0.7066</td>
</tr>
<tr>
<td>Individual actions solved considering parameter values</td>
<td>precision</td>
<td>0.0533</td>
</tr>
<tr>
<td></td>
<td>recall</td>
<td>0.0533</td>
</tr>
<tr>
<td>Scenarios solved ignoring parameter values</td>
<td>accuracy</td>
<td>41.93%</td>
</tr>
<tr>
<td>Scenarios solved considering parameter values</td>
<td>accuracy</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 4: Results from Kubis et al.
The next step is the Action Ranker, which applies a TF-IDF model to rank the actions. The model was indexed using all textual content present in the Action KB, plus the actions mapped in the training mappings file. The next step is the Reference Matcher, which is designed to identify which outputs of a given action act as the parameters of a subsequent action. The next step is the Parameter Matcher, which infers parameter and value types that support the action matching process. Finally, based on the knowledge generated and stored in the previous steps, the rule-based Statement Mapper provides a list of up to 10 possible matching action instances. Additional details of the proposed method can be found in the original paper (Kubis et al., 2017).
While the proposed solution has a high recall for the number of resolved actions, it fails mainly in providing the correct values for all the required parameters. Two types of linguistic settings proved to be more challenging:
- Description of commands split into two sentences. For example:
“Get the price of the book The Intelligent Investor. If it costs less than 25 Euros, buy it.”
where “25 Euros” is the parameter value of the action defined in the first sentence.
- Capturing actions with more specific/fine-grained semantics. For example:
“Once I have bet my running distance target of the week, set my current weight as 100 Kg in Fitbit.”
where the system ignored the temporal expression “of the week” and suggested the “Daily step goal achieved” instead of “Weekly distance goal reached” action. A second example of the same case is expressed in the command:
“Suspend the execution of my Samsung washer.”
where the term “Samsung” was ignored when selecting actions.
8 Summary
In the SemEval 2017 Task 11 we developed a test collection to support the creation of semantic interpretation methods for end-user programming environments. The test collection focuses on the following features in comparison with existing approaches: (i) open domain, (ii) large syntactic and vocabulary variability, (iii) dependence on co-reference and metonymy resolution. Moreover, as the test collection uses APIs available on the open web, it can be used to build real end-user programming environments. While there is room for improving the precision and recall of the identification of the command actions, the main challenge remains the matching of the parameters between natural language commands and the API.
References
Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, Luca Iocchi, Roberto Basili, and Daniele Nardi. 2014. Huric: a human robot interaction corpus. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Hrafn
Terry:
Hey everybody, and welcome to our final project presentation for SP Studios for our ParkInLot app. So first things first, we're going to go over some introductions: each of us is going to introduce ourselves and explain what we did for the project and what experience we brought into the project. My name is Terry Watson. I'm currently a CSUB student. The experience I brought to the project was in SQL, HTML and PHP, although the latter two I mostly learned during the project. In terms of what I did for the project's features, I worked a lot on the database implementation, specifically with triggers for various features within the site. And then I also implemented reCAPTCHA on the front end for security on various different pages.
David:
I'm David Montes De Oca. Coming into this project, I had some experience with Transact SQL, MySQL, basic web development, and some PHP. With this project, I mainly worked on the back end and the database, as well as the algorithm we use for the parking spot trading.
Abraham:
My name is Abraham, I have experience with HTML, CSS, JavaScript, some UI and UX design as well. And I was responsible for the front end and the design of the app.
Tony:
Hi, everyone. I'm Tony Cervantes. For this project, I'm the person that was mostly in charge of general project overview, so making sure everyone was working on what they were supposed to, and essentially pushing their updates to our GitHub repository. Thankfully, I'd already had experience with most of the things we were going to be using, thanks to our web development two course. That included things like PHP, MariaDB (or MySQL), as well as general HTML and JavaScript coding. As for the idea of this project, I came up with it essentially about three to four semesters ago. One of my classes would begin around 11 am, so I would get to campus by 10:40. One of those days that I was looking for a parking spot, I found a yellow vehicle, kind of like a dune buggy or a little Jeep, and essentially I was able to take that person's spot for that day. Thanks to our schedules usually being Monday/Wednesday or Tuesday/Thursday, it seemed that every time my class started at 11, his class had ended around 10:30, or he just decided to leave around the same time every single time. I would just wait right behind his vehicle until the vehicle's owner came and drove off, and I just started taking his spot every single time my class was gonna start. This worked out throughout essentially the whole semester. So this started making me think: is there already an application that facilitates this kind of trade, where it's peer to peer or anything similar? It didn't seem like there was, so this is where our project began.
Terry:
Hey, it's me again. We're gonna go over the problem that we're trying to address now. So here we have a map of CSUB. All these gray boxes are the various different parking lots that are available to students and faculty on campus. As you can see, there are quite a few different parking lots, so really there should be no issue with people having enough space. But the majority of the reason why there's an issue with parking is that whenever somebody is leaving, it's always difficult for somebody who's trying to arrive to be in the right place at the right time to get that spot. We as CSUB students and faculty all have a pretty bad time trying to park at the university. This is especially true during peak times, and also peak times of the year, like finals week and stuff like that. Whenever there's a lot of people on campus trying to park and leave simultaneously, it always creates a lot of chaos, with a lot of people looping around through the parking lots, creating lots of traffic and lots of stress.
David:
To solve the problem that Terry described earlier, we created the application ParkInLot. So ParkInLot is essentially a tool that allows users to trade parking spots with one another. This uses a token based system just so we can ensure that people do not abuse the app. And ultimately, it just helps save stress and time.
So users will start off by registering for an account. And once they log in, they will have the option to either find a spot or give out a spot. When a user offers up a spot, their spot will be entered into our database and it will be ready to be paired with a requester. And once a requester is found, they will be shown the details, as you can see there, of the requester's car, just to make the transaction a bit easier. Users who request a spot will be entered into our spot queuing algorithm, and once a match is found, they will also be given details for the trade. The users will also be given a complete button just to indicate when the spot trade is complete. And once this trade is complete, tokens will be exchanged, so the offeror will gain a token and the requester will spend one of their tokens. Ultimately these tokens will help keep the system fair and ensure that users do not abuse the app, and they will need to offer up spots in order to be able to request spots in the future. Lastly, we implemented a map that automatically zooms in as you drive closer to the spot. Generally, this will be more useful once you're near campus and can actually see the parking spots clearly. This makes it easier for the requester to actually find the parking spot and will help us provide users with a better experience.
Terry:
But now I'm going to talk about our competition. In terms of direct competition, we don't have any, but there are some similar applications out there. These are Pavemint, ParkMobile, and ParkStash. All three of these apps are much more about renting a parking space from its direct owner, things like private driveways and private parking lots. The problem with this is that it can't be adapted to just any parking lot unless monitoring systems are in place, so things like cameras or meters, so that the people who own the parking spaces are actually able to judge whether or not somebody is there. Another important distinction is that our application is free to use, and it's much more geared toward trading and peer-to-peer use.
Now I'm gonna go over the management style that we used. We used a combination of Extreme Programming and Scrum, both of which are agile methods. The reason why we switched over to Scrum halfway through our project is because this has all been online, and the Scrum format is much more usable online than Extreme Programming, which requires things like pair programming. By using Scrum, we were able to just meet up whenever times called for it, and we could get a lot of work done that way individually.
Tony:
Okay, so now let's talk about our overall architecture. Essentially, our whole web application's foundation is coded in HTML and JavaScript, with CSS handling all of the styling. We used Odin's MariaDB to host all of our content's tables, from users to spots, as well as using Odin itself to host our site. In order to access content from our database and display it onto the appropriate pages, PHP handles most of the static content, that which appears upon accessing a specific page and isn't going to be changed in real time. For our algorithm, which is where we do need content to be updated almost immediately, and without having the page as a whole refresh, we used Ajax to handle the real-time content acquisition. In terms of our external resources, we had Imgur's API, which allowed us to save our users' vehicle pictures onto Imgur's servers, so that the only thing being saved within our server is the link to those external images, as well as two different Google APIs, which would be for their maps as well as reCAPTCHA.
Terry:
Now I'm going to go over the work plan that we had for semester one. In semester one, we focused mainly on the foundation and the groundwork for setting up the project and getting it operational. For about three to four weeks, we worked on creating the database for the user's profile and all of its attributes; this is all located in the users table in the database. For about three to five weeks, we moved on to creating a basic web view for signing in and signing up. For the last three weeks, we focused on implementing more options for the front-end view. This includes adding more admin views, so being able to access database information as an admin.
Now I'm gonna move on to the work plan for semester two. Semester two was definitely a lot higher workload than semester one. For the first three to four weeks, we focused on getting the spot queue database tables operational. This includes things like the actual spots table itself, and all of the necessary attributes added to that table to facilitate spot trades. For four weeks after that, we focused on getting the front-end view for spot queuing and trading actually working; that was handled almost exclusively by David. For two to three weeks, we focused on getting the token system properly working and other various triggers working on the database; that was all handled by me, Terry. For two weeks after that, it was primarily focused on getting alternative views for the different roles, which includes employees, users and administrators; that was all handled by Tony. For the last three to four weeks, we focused really heavily on the front-end view for Google Maps, which allows users to see the location of their trade partners. And then we also focused really heavily on various chase goals, primarily styling and improving the readability of the website.
David:
So now we're moving on to the features that we all completed individually. I first focused on implementing photo upload so that users can add a picture of their vehicle to their profiles, and I used the Imgur API to store and fetch these uploads pretty easily. I also implemented a password hashing and sanitization feature to securely store users' passwords and other input data, using the password_hash function in PHP as well as the bind_param function. These two features are really important just to help protect our database from potential SQL injection as well as protecting the users' information. The next feature I completed was the parking spot requests. Users are able to request parking spots, be paired with an offeror, and then they'll be provided with details on the parking spot in order to complete the trade. I implemented this feature, along with a few others, using Ajax, PHP and JavaScript. Similar to the request feature, the offer feature gives users the ability to offer their spots, be paired with a requester, and be provided with the details in order to complete the spot trade. And lastly, I implemented a spot queuing algorithm just to ensure that requesters and offerors are matched based on availability, location and time. To do this, I used Ajax calls to continuously get updates from our database. This then allows us to update and display the parking spot's availability and status accordingly. Both users are shown updates once the status of the parking spot changes, just to help make the transaction simple and straightforward.
Terry:
Now moving on to the features I completed. Throughout the project, I mainly worked on three things: the reCAPTCHA for the site, and also a lot of the database implementation. When I chose the reCAPTCHA, it was out of two potential features we were considering implementing for added security: two-factor authentication and reCAPTCHA. I eventually decided on reCAPTCHA because, while our site does have user profiles and we want them to be secure, it doesn't necessarily have any super important user information. The most personal information our website houses for users is their car info, down to the last four digits of their license plates, and their names. Rather than implement two-factor authentication to improve account security, I decided on reCAPTCHA to improve database slash site resource security. I implemented the captchas on three different pages: register, forgot username, and forgot password. This means that malicious users can't create bots that spam our database with fake accounts, and also can't use our email service to spam other people's emails under our name.
Moving on to what I implemented for the database: I created the table used to contain spots that are currently being swapped, and also the table called spots history that contains all the spots that have been deleted by the spot trading implemented by David. Beyond that, I implemented several cascading triggers in the database that handle all the various bookkeeping. As soon as a spot is deleted, depending on the conditions it was deleted in, all the token handling, rating updates, and history keeping is done using this set of triggers. I did this in order to remove the dependency on users' clients issuing SQL queries to the database, to make it less likely for database errors to occur.
Essentially, all we rely on is the users hitting complete slash cancel and then the delete query going through from their client. This is opposed to issuing every update query through the client using PHP, which would mean that if the client lost connection halfway through completing a trade, a small piece of the completion might not occur, such as the token gain or loss or the rating update. Overall, this just prevents spot trading from being manipulated by users to keep tokens or otherwise tamper with the trade by intentionally disconnecting. There are some images here on the slide. On the left side, we have the captcha. This is for our account creation; it's essentially just reCAPTCHA using the Google API. On the right we have a little diagram showing how a user completes a spot and how the different triggers initiate. So essentially, as soon as a user completes or cancels the spot, the update token trigger fires. This distributes tokens to the people who offered a spot and takes away tokens from the people who requested them, but only if the spot was completed correctly.
Tony:
Moving on to the features that I completed. First things first was user registration. Of course, this would allow any user to create and register an account with our application, as well as make their user profile. Right after that I started working on profile update. This would allow users to update their profile: maybe requesting a username change, changing their first and last name if they got married or something as such, as well as updating their vehicle's information if they happened to buy a new car or just started using a different vehicle. Thanks to the help of David, we were also able to implement uploading a new picture and having that refresh automatically, showing the new picture in real time upon submission. The next thing was the administrator database table views. Essentially, without having to sign into our database through the server side every single time, you can view everything that is contained within our database within the application itself, so long as you weren't a normal user; you would have to be either an administrator or just a normal employee. For forgot username, a user can sign into our web application either using their email address or their username, but in the event that they don't want to use their email address and insist on trying to find their username, they can always request that their username be sent to them. Just as well with our password reset page: when a user enters their email address into the text field, if that matches an email within our database, it's going to send that specific email a unique token that gets generated. With that token, the user will have five minutes to be able to reset their password. Lastly would be our geolocation map. When a user gets paired with a parking spot, a map will show them where their lot is, and it will begin zooming in the closer you get to that spot, which is going to help the user find the parking spot that was assigned to them.
Abraham:
So for the design of the app, I started off by first making some rough sketches on paper. After that, I used Adobe XD to create a few different prototypes of how the app would look. During this time, I also designed the logo for the app, and I tried a ton of different versions before finally settling on the one we have. After this, I used CSS, and I used Bootstrap to create the grid layout for the app. I decided to go with a mobile-first approach, since this is primarily meant to be a mobile app, and because it would also make development easier, since we could just use the desktop version for any testing or whatever we have to do, and not really have to worry too much about making sure it looks good on mobile. For the design of the app, I decided to go with a dark theme, because it's easier on the eyes. For the other elements, like the buttons and the order of layouts, my main priority with the design was to make sure that it was as straightforward as possible and easy to use. I made sure that there was a clear hierarchy of elements; for example, in the card that shows up when you're paired with someone, there's a clear separation between the account info and the two different buttons. I wanted to do that so there wouldn't be too much confusion about how to use the app, because I just wanted to make sure that it was as clean and intuitive as possible. That way it's easier for people to use, and more likely for people to use, and it minimizes frustration as much as possible. So yeah, that's what I did for the design of the app.
David:
So now I'm going to go over a few challenges that we faced while implementing these features. First one: when I was working on the photo upload feature, I started off with a PHP script. I was trying to write my own script to upload these photos into a folder in our directory on Odin, but ultimately that didn't work because of some permission errors, and I ended up using the Imgur API instead. That ended up being pretty easy to implement and a lot quicker for uploading and retrieving photos. Next, I had a couple of issues with the spot queue algorithm at first. Just learning how to use Ajax to send and retrieve data from the server was a challenge in itself, and also just getting both requester and offeror spots working together was quite a big challenge.
Tony:
The next set of challenges we faced was with geolocation mapping. Essentially, the issue was how to transfer our database content into our JavaScript so that the Google Maps API can automatically adjust properly. Once we figured that out, the next set of issues was how to show the markers for both the request that you were matched with as well as your current position. When we got it all working, it seemed to be perfectly fine until you started driving towards the location; it would keep adding markers, so essentially it would start leaving a trail of markers behind you as you moved. This was then fixed with the help of David by making a global variable that would just clear the marker upon setting your current position again. The other one was password reset. This was challenging because most of the guides, or just any article I found online, had at minimum three pages. Luckily, I was eventually able to condense this into a single page to generate the user's unique token, as well as use Odin's email server to be able to send the user an email with a link containing that token in a GET request. And in terms of security, the token expires within five minutes so that someone doesn't try to access the token later on or try to spoof it and essentially change a user's password. The last one would be SQL triggers. Terry was having issues implementing these, or getting them to work perfectly, just because of how many different times we access our database when trying to match a user with their spot as well as removing the tokens. But eventually, Terry was able to figure out how to get the triggers properly set up, so that when a user's spot request gets deleted, it removes or adds tokens appropriately to the users' accounts.
David:
As we were implementing some features, we noticed we had to make a few changes, the first of which was scrapping an old timer idea that we had. This idea essentially didn't work because it required the poster to post their spot before they were even at their vehicle, and this ultimately would have caused issues with the geolocation feature that we implemented. The next feature that we had to change was the geolocation map itself. We made it so that users would need to request a spot when they were within a certain radius of campus, just so we can ensure that these transactions are quicker. This also meant we no longer needed GPS navigation for the requester. And lastly, we wanted to add a rating system, which required the use of triggers in our SQL database, just so users can know who is reliable.
Abraham:
Now, to recap: how is this going to help people? Well, it's going to help people by saving them a lot of stress, like not having to worry about how early they need to leave for school because they may not be able to find parking, and it's gonna save them a lot of time, so they don't have to drive all over campus in circles just trying to find a spot somewhere. I think it's gonna help a lot of people, especially during the busiest hours at school. So what did we learn with this app? We learned a lot of web design, like PHP, HTML and CSS; we learned algorithm analysis; and we learned how to implement databases with PHP to use on the web and to store and collect data.
Tony:
And now for our future plans, the first one being adding timers for both the requester and the poster. This way, let's say that someone posts their parking spot during non-peak times, and no one accepts their spot or no one gets paired up. In this case, the poster is out of luck, because they're not going to be able to get their token back. So in this case, we would add a timer of about five minutes. That way, if five minutes pass, the poster no longer has to remain at their parking spot and can leave with no worries of not getting a token; the spot gets deleted, and their token still gets allocated. As well, this will protect the requesters: after those five minutes, as soon as the poster has left, that parking spot is no longer available, so someone doesn't end up getting paired with it much later in the day, now that the spot's no longer guaranteed. The same goes for the requester: in this case, the requester would have about five minutes to reach their parking spot. That gives ample time for them to arrive, and that way the poster isn't waiting too long for the person to arrive at their spot. If that timer were to expire, the requester would then be notified that that parking spot is no longer guaranteed, so they can choose to find a new one or just cancel the request or the matching in general. Our additional future plan would be expandability. At the moment, our application is geared more specifically toward CSUB, more as a test case. So in the future, we can end up making this application adaptable to every single type of CSU or UC campus where parking might be an issue, as well as potential stadiums or other types of concerts and events where parking might not be guaranteed or as easily available for users. This is where we'll have to figure out how to let users choose locations based on where they're at or where they're planning to park, and, in general, just expanding our app to multiple different venues.
And now on to our demo, starting with Abraham showing us the registration page.
Abraham:
Then we are going to put in our information. So I'm gonna put in my name, last name, create a username, and enter your email (aldana, just gonna put this). Password: I'm just going to put p a s s. Then you put in your car information. So I have a Honda Civic, 2014. I'm just going to put in some random numbers here. And color, I have a red car. Then you just click "I'm not a robot" here to complete the captcha, and then create account. Now to upload an image of the car, you're just gonna click Choose File and click your image. And there we go, upload. And it's up. Now to log in, you just put in your email or username, and then the password, and log in.
Terry:
Hey, everybody, this is Terry. I'm just going to be demoing our forgot username and forgot password for you. When you navigate to the forgot username section, you're asked for an email and also presented with a reCAPTCHA. If you simply put in an email and don't complete the reCAPTCHA, when you hit submit it won't actually send any kind of email; you won't receive any kind of notification or anything like that. That's to prevent malicious users from using our email services for, well, bad things, obviously. I'm just gonna go ahead and complete the reCAPTCHA, and we're gonna see if we're able to get an email. You're prompted with a little notification just letting you know that an email has been sent to your email address. We should see something pop up any time now. There it is. So, on the email side, you'll just get a little message chain: hey, if you forgot your username, this is your username. Now that you have it, you're able to log in. On our website, you're also able to log in with just your email, but this is just in case people want their username to be able to log in later on.
Now I'm going to move on to the forgot password section. It works much the same; it's essentially the same concept. Once again, if you don't fill out the reCAPTCHA, it's not going to send you any kind of email or any kind of reset link or anything like that. But if you do fill out the reCAPTCHA, it's gonna give you another prompt just letting you know, hey, we're gonna send you an email for that. Upon refresh, this is the email that you get for the password reset. It also prompts you that if you didn't request this password reset, you should go and change your password on your account, because that means somebody is trying to get into your account. To be able to reset it, you just go and hit reset, and this is going to give you the reset page. So we'll just reset it to something like one of my used passwords. Password has been successfully changed; go ahead and try to log in. So we'll try to just go ahead and log in really quickly, and just like that, we're able to access the website once again.
David:
Okay, so now we're going to demo the parking spot trade feature that we have implemented. So when I click find a spot as the requester, it will take me to another page that'll enter me into a parking spot request queue. And it'll be continuously searching for available spots in the database. And once a spot becomes available, it'll pair me. And I believe Tony is on the other end, about to offer up a spot.
Tony:
So let's say in this case, I'm about to leave campus and I no longer need my parking spot, so I can give it up and get a token back; that is, if I ended up using one that day.
So in this case, I'll select the button that says give a spot, and as you can see, it says that I am not yet offering a parking spot. So I just have to choose whichever lot I'm currently parked in; let's choose Lot F. Then I'm going to select offer my spot, and as you can see, it says my spot has been posted.
David:
So now, both of our pages should have updated as they're both continuously checking for your parking spot matches. I'm shown with a few details of Tony's car, such as the color, model, make, and year, as well as the license plate digits at the end. So when I click on the View Details button, it will take me to another page where it'll allow me to share my location. It'll give me Tony's details, as well. And if I scroll down, it'll also show me their parking spot location. Generally, this is more useful once we're actually near the parking lot that we're trying to reach. But for right now, it zooms in once we get closer to the destination.
Tony:
And then on my side, let's say that for some reason I end up going to the previous page, or the page just closes out, times out, or anything like that. If I were to go ahead and select give a spot again, it would already show that I'm currently matched, and in this case it would take me back to the page where it shows the person that I was matched with. At this point it's essentially waiting for the requester to confirm whether or not they got the spot.
David:
And so once I go ahead and click Complete the trade, it'll basically exchange the tokens between me and Tony. I will lose a token since I've just requested a spot and Tony should gain a token onto his account. As you can see here, I started off with two tokens when I logged in, and now I currently have one.
Tony:
And on my screen you can see that the trade has been completed. So if I go back to the homepage: I started off with 10, and now I have 11. And with all the demos of our application done, that concludes our video. Thanks for watching.
1 Introduction
The standard currently uses the notion of threads of execution to allow for concurrent or parallel execution. It provides thread objects as a way to create and join threads of execution. This is a portable abstraction for threads as offered by many operating systems (e.g., POSIX Threads), which I will refer to as OS threads from now on.
OS threads are a specific mechanism, and they come with a set of quasi-standard features such as support for thread-local storage (i.e., objects with thread storage duration, §3.7.2). They also offer relatively strong forward progress guarantees, in particular that an unblocked thread should eventually make forward progress (§1.10). Users of threads rely on these properties when, for example, building concurrent applications: If two threads are in a producer–consumer relationship, the producer needs to make forward progress or the consumer will be blocked as well.
However, we do not need full-featured OS threads for all use cases. For example, when a program needs to crunch 1000 disjoint groups of numbers, it can do so in parallel but it does not need to run 1000 OS threads for this; if the groups are in fact independent, then working on one group does not need the results from work on another group, so we do not need concurrent execution. Instead, for the implementation, the number of OS threads to be used in this example is a performance decision.
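As a rough illustration of this point (my own sketch, not part of the paper), the 1000 independent groups can be crunched with only a handful of OS threads; the group sizes, the striding scheme, and the use of hardware_concurrency() are arbitrary choices for the example.

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // 1000 disjoint groups of numbers; no group depends on another,
    // so no concurrent execution is required, only parallel execution.
    std::vector<std::vector<int>> groups(1000, std::vector<int>(100, 1));
    std::vector<long> sums(groups.size());

    // How many OS threads to use is purely a performance decision.
    unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t)
        workers.emplace_back([&, t] {
            // Each worker handles every nthreads-th group; groups never interact.
            for (std::size_t g = t; g < groups.size(); g += nthreads)
                sums[g] = std::accumulate(groups[g].begin(), groups[g].end(), 0L);
        });
    for (auto& w : workers) w.join();
}
```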
While full-featured OS threads can easily exploit thread-level parallelism offered by the hardware, they also can be costly:
- Space overhead of a thread’s stack and other thread-local data.
- Construction/destruction of thread-local storage.
- Ensuring concurrent execution between all threads, at least eventually, which results in context switch costs and having to schedule all threads.
The implementation/OS has some leeway and can try to avoid some of these overheads, but this can be difficult in practice. For example, an OS scheduler typically cannot detect whether one OS thread spin-waits for another OS thread, so the scheduler faces a trade-off between, say, trying to avoid frequent context switches and letting one thread wait more than necessary; even if the threads use OS mechanisms to wait, it might—depending on the mechanism and the threads’ synchronization—not be visible to the scheduler which other thread is being waited for. Similarly, I suspect that many programmers expect the default scheduler to run OS threads in a more or less round-robin fashion.
Thus, if a program does not actually need concurrent execution (as in the parallel number-crunching example above) or other features of full OS threads, then using OS threads will lead to unnecessary runtime and space overheads.
In turn, if we want to use other lighter-weight implementations of concurrent/parallel execution, then those cannot provide full-featured OS threads. For example, a bounded thread pool is not a valid implementation of full OS threads (and thus the threads abstraction in the standard) because it does not guarantee concurrent execution (e.g., the pool’s threads might all be taken by consumers waiting for a producer that hasn’t been started because the pool’s threads are all being used).
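To make the thread-pool caveat concrete, here is a deliberately broken sketch (again mine, not from the paper): a toy "pool" with a single worker runs queued tasks in order, so a consumer queued ahead of its producer occupies the only worker and the program hangs. The single-worker queue stands in for any bounded pool whose workers are all taken by waiting consumers.

```cpp
#include <functional>
#include <future>
#include <queue>

int main() {
    // A toy bounded "pool": one worker executing queued tasks in order.
    std::queue<std::function<void()>> tasks;

    std::promise<int> produced;
    std::future<int> consumed = produced.get_future();

    // The consumer is queued first; once it runs, it blocks on the future.
    tasks.push([&] { (void)consumed.get(); });
    // The producer is queued second and therefore never gets to run.
    tasks.push([&] { produced.set_value(42); });

    // Running the queue on a single worker deadlocks: the consumer holds the
    // only execution resource while waiting for the producer queued behind it.
    while (!tasks.empty()) {
        tasks.front()();
        tasks.pop();
    }
}
```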
This paper is an initial attempt at defining light-weight execution agents (EAs), which can be used to run more than one thread of execution but provide weaker forward progress guarantees and/or fewer features than OS threads (see Section 2). From this perspective, threads would be a rather heavy-weight variant among several kinds of EAs. This work is based on or has been motivated by prior discussions in SG1. It also is at an early stage, so I will conclude by discussing some of the open questions in Section 3.
2 Light-weight execution agents
In the standard, EAs are defined in §30.2.5.1: “An execution agent is an entity such as a thread that may perform work in parallel with other execution agents”. They are used to specify lock ownership, although the term “threads” is used in this context as well.
Several existing proposals to SG1 incorporate execution agents (even though this term is not used) that are seemingly weaker than threads:
- **N3724**, “A Parallel Algorithms Library”, provides execution modes that are essentially sequential, parallel, or allow both parallel and SIMD.
- **N3722**, “Resumable functions”, proposes language constructs that allow programmers to write code as if they had EAs that execute concurrently but do not provide all features that threads provide.
- **N3734**, “Vector Programming: A proposal for WG21”, presents language constructs for SIMD loops and functions. Independent iterations of such loops can be considered to be execution agents that execute in lockstep.
- **N3731**, “Executors and schedulers, revision 2”, proposes library interfaces to create EAs with different safety and liveness properties (e.g., where tasks created by one executor run one after the other).
Even though the programming abstractions presented in these proposals are different, the conceptual EAs provided by them are often similar, and differ from threads in (1) the forward progress guarantees they provide and (2) how they handle other thread features, notably thread-local data. While different programming abstractions or interfaces can be useful, I believe that it would be beneficial to at least unify the execution concepts being used across these proposals, where possible. This would make the parallelism and concurrency support in the standard easier to grasp for programmers, and it would probably also ease implementing these proposals because they can then be put on top of one common base for shared usage of computing resources.
2.1 Forward progress
Next, I will discuss four classes of forward progress guarantees that EAs can provide.
**Concurrent execution** This class provides the same guarantees as threads: EAs should eventually make progress if they are not blocked. Threads are blocked when they use features of the implementation that make their progress depend on the progress and execution of other EAs (e.g., by blocking on a mutex). If this is not the case, the implementation’s scheduler should eventually let them make forward progress, for each execution step they attempt, independently of what other EAs are doing.
This definition uses “should” instead of “will” (as in the wording of §1.10) because there might be implementations based on OS schedulers that cannot give these properties (e.g., in a hard-real-time environment). Nonetheless, for general-purpose implementations, this should be a strong guarantee, I believe (i.e., “will”).
Note that sometimes, this progress guarantee is summarized as being able to synchronize. However, this is not an accurate description because it really is about forward progress and not synchronization in general. While nonblocking synchronization is allowed for EAs providing weaker guarantees (e.g., see parallel execution below), only concurrent execution allows for some kinds of blocking synchronization.
**Parallel execution** Parallel execution is weaker than concurrent execution in terms of forward progress. Specifically, one possible definition would be that such EAs cannot expect other parallel EAs to make progress concurrently.
This definition captures the notion that one would like to let programs define lots of parallel tasks, yet use a bounded set of resources (e.g., CPU cores) to execute those tasks. To give the implementation full flexibility regarding resource usage, this does not reveal how many resources are used (i.e., like in the case of a bounded thread pool that is a black box and never exposes its specific bound to users). This is easy to implement because we just need to execute all such EAs eventually, in some order and interleaving.

---

1 This should cover spin-waiting. When treating, for example, a spin lock as a black box, blocking using the spin lock is well-defined. When looking at the internals of the spin lock, the guarantee will make sure that the EA keeps spin-waiting.
However, this definition does not allow typical uses of critical sections inside of parallel EAs (e.g., to synchronize access to shared state), because then an EA might wait for another EA that is not guaranteed to make progress concurrently.2
A stronger variant of the former definition can be obtained by additionally guaranteeing that a parallel EA will make progress eventually once it has started to execute; in other words, once it starts, it is similar to a concurrent EA. This does allow typical uses of critical sections because as soon as EAs start and might acquire a mutex, they are guaranteed to finish execution and will not block other EAs indefinitely.3 This is easy to implement with a typical thread pool—however, it cannot be implemented with, for example, certain kinds of work-stealing schedulers if work-stealing is allowed to happen during critical sections.4
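For concreteness, here is a small sketch using the C++17 descendants of N3724 (the standard parallel algorithms, which did not exist yet when this paper was written): std::execution::par corresponds to the parallel class, where a short critical section is acceptable under the stronger "once started, it eventually finishes" reading, while std::execution::par_unseq corresponds to parallel+SIMD, where taking a lock is not allowed.

```cpp
#include <algorithm>
#include <execution>
#include <mutex>
#include <vector>

int main() {
    std::vector<int> v(1000, 1);
    std::mutex m;
    long sum = 0;

    // Parallel EAs: a short critical section per element is fine here,
    // relying on the guarantee that a started EA eventually finishes.
    std::for_each(std::execution::par, v.begin(), v.end(), [&](int x) {
        std::lock_guard<std::mutex> lock(m);
        sum += x;
    });

    // With std::execution::par_unseq (parallel+SIMD), the same lambda would
    // not be allowed to acquire a mutex, because the EAs may be interleaved
    // on one thread or run in lockstep.
    return sum == 1000 ? 0 : 1;
}
```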
**SIMD execution** This class attempts to model the guarantees of code that uses SIMD instructions to execute several EAs running the same code (e.g., independent iterations of a SIMD loop as in N3734). To allow such an implementation, SIMD EAs must not expect to make forward progress independently of other EAs in the same context (e.g., in the same SIMD loop). In other words, they execute in lockstep with each other, and the granularity of this is implementation defined.
This guarantee disallows the use of typical forms of critical sections because we cannot expect to execute the critical sections in EAs one after the other.5
Unlike concurrent and parallel execution, SIMD execution is also special in that it not just gives a progress guarantee but can also give a safety guarantee: At least as specified by N3734, iterations of the same SIMD loop will be virtually executed in sequential order. However, it could be argued that this is a property of the mechanism using SIMD EAs (e.g., N3734), rather than a property of SIMD EAs.
While the progress guarantee is satisfied by an implementation that uses several concurrent threads to execute each EA, the safety guarantee is not. Also note that lock-free synchronization is allowed in SIMD EAs, whereas obstruction-free synchronization is unlikely to succeed due to EAs executing in lockstep being likely to interfere with each other.
---
2Specifically, cases like when parallel EAs use the same mutex to protect critical sections. Other cases would still be allowed, such as when an EA uses a critical section that it will never block on (e.g., because nobody else uses the same mutex).
3This still does not allow other uses of mutexes, for example an EA using a mutex to wait for the finished execution of another EA that already owns the mutex before it got started.
4Consider a scheduler that immediately executes a spawned parallel task instead of finishing the spawning task (and keeps using a single OS thread): If the former blocks on a mutex acquired by the latter, then the OS thread used for the two EAs will get deadlocked; if the scheduler isn’t aware of all blocking relationships nor promotes parallel EAs to concurrent EAs after a while, a deadlock will arise.
5However, as for parallel executions, some uses of mutexes might still be allowed; this indicates that specifying the progress guarantees is a more precise way to specify EAs than by trying to disallow the use of certain features (e.g., mutexes) altogether.
**Parallel+SIMD execution** This last class tries to allow for both parallel and SIMD execution, at the choice of the implementation (as in N3724). This means that its forward progress guarantees will not be stronger than parallel or SIMD in isolation (in other words, it’s the weakest guarantee).
Whether this class actually needs to exist is debatable and depends on how parallel and SIMD execution are defined in detail. Both the stronger variant of parallel execution as well as the possible safety guarantee of SIMD execution aren’t possible anymore because they are not provided by the respective other class. However, it may be that without these stronger parts, both classes are actually indistinguishable from each other because they disallow the same code from being executed.\(^6\)
2.2 Thread-specific state and features
Besides forward progress guarantees, we also need to consider how light-weight EAs relate to threads and features of threads, in particular state associated with particular thread instances. While programmers often will not need these particular features, we need to at least define the level of compatibility with existing thread-based code.
**Thread-local storage** In N3556, “Thread-Local Storage in X-Parallel Computation”, Pablo Halpern presents a classification of how different parallel execution models treat thread-local storage (TLS). The discussion in this paper applies in a very similar way to EAs; however, it could be argued that some of the concerns are tied to programming abstractions that spawn nested parallelism, whereas EAs could also be created in different ways.\(^7\)
N3556 also mentions “x-local” storage, which would be distinct from TLS and scoped to instances of parallel tasks, for example. From the EA perspective, this seems to be the right approach. Nonetheless, I think that providing EA-local storage might not be ideal because, like TLS, it requires programmers to link the semantics of this state to specific execution mechanisms like EAs or threads. Instead, it might be beneficial to let programmers request a local storage mechanism by describing the intent behind using local storage. For example, programmers currently use TLS as both (1) storage that will not be accessed by multiple threads unless explicitly shared and (2) storage that will likely have good data locality when accessed from this thread. In other words, the use of TLS can be motivated by both wanting certain semantics (e.g., no concurrent accesses to it) and performance considerations (e.g., relying on concurrent access being unlikely in order to avoid cache misses). TLS is an implementation mechanism that can be used for that, but other implementations are possible as well (e.g., per-workgroup storage on GPUs).
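A small sketch of the two motivations just mentioned, using plain thread_local as the implementation mechanism (the counter and thread count are arbitrary; this is an illustration, not wording from N3556):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Semantics: not shared with other threads unless explicitly aggregated.
// Performance: uncontended, likely cache-friendly accesses.
thread_local unsigned long local_hits = 0;
std::atomic<unsigned long> global_hits{0};

void work() {
    for (int i = 0; i < 100000; ++i)
        ++local_hits;               // no synchronization needed
    global_hits += local_hits;      // explicit, one-time sharing
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(work);
    for (auto& t : threads) t.join();
}
```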
---
\(^6\)Specifically, it might be that they disallow the same kinds of blocking code, but for different reasons:
If we cannot assume that other EAs make progress, then EAs cannot block on each other. Likewise, if EAs cannot make progress independently, then they must not make other EAs block.
\(^7\)For example, when exposing parallel execution opportunities via a parallel loop, what the loop does is often related to the code that started the loop and thus spawned parallel EAs; in contrast, when spawning concurrent EAs (e.g., in an actor model), these often may not have an immediate relation to the EA that spawned them (i.e., similar to how threads are used today).
**Emulating `this_thread::get_id()`** This function returns an ID for the current thread of execution. However, a light-weight EA may not be a thread, so either we need to return some distinct value for an imaginary thread or we need to handle this similarly to TLS, as discussed in N3556.
**Lock ownership** The standard already specifies lock acquisition semantics in terms of lock ownership of EAs, and notes that other EAs than threads may exist (see §30.2.5). Thus, we do not need to define any association to any existing threads as N3556 does for TLS.
However, implementations might have to be changed if they rely on having threads as the only possible EA (e.g., if a mutex stores an OS thread ID to designate the lock holder).
3 Open questions
I believe that it is important to provide light-weight EAs or to at least thoroughly define the semantics if no direct access to them is provided. The number of existing proposals to SG1 that relax execution guarantees compared to OS threads indicates that there is a real need for light-weight EAs.
Nonetheless, this paper is just a first step towards that, so there are many open questions, of course. Some of them are outlined in what follows. Others are not discussed in this paper at all, but are very important: For example, how to make the use of EAs efficient in terms of resource usage, and how to do so with a portable and abstract interface that does not rely on the programmer tightly controlling the specific computing resources that are being used (e.g., the number of OS threads).
**Programming abstraction or conceptual entity?** As presented so far, EAs are purely a concept used to specify execution properties. Beyond that, one could add ways to directly create instances of all or the most important kinds of EAs, similar to how threads can be created.
One advantage of doing so would be to give programmers full access to the shared resource usage facilities that an implementation would likely have internally anyway (e.g., balancing out the number of OS threads used across the program, independently of which parallel or concurrent abstraction was used to spawn EAs). However, finding a portable, stable, yet powerful interface for that might be difficult. The Executors proposal is headed in a similar direction, and currently proposes a few specific factories for EAs instead of covering all useful combinations of EA semantics (e.g., forward progress, TLS handling, ...) and performance properties (e.g., how many and which resources to use for execution).
Some of the EAs might be better exposed through specialized interfaces. SIMD execution can be such a case when the implementation requires custom code generation for such EAs, or to convey the context of the SIMD EAs (e.g., other iterations in a SIMD loop).

---

8 The standard does not provide a way to look up a thread object based on the ID, so we do not need to create this imaginary thread.
Even proposals that do not require custom-generated code might be better served with a specialized interface. For example, cilk_spawn uses a language construct to make spawning parallel EAs look like a function call, yet the return value of this call might not be available until an implicitly associated or explicit cilk_sync. Resumable functions also use a function-call–like language construct to provide virtual concurrent EAs without using OS threads, but the mechanism used for returning values is based on futures.
**Do EAs inherit properties from the spawning EA?** Regarding thread-specific state, I have already discussed this in Section 2.2. However, this also affects forward progress guarantees. For example, consider two concurrent EAs, each spawning a group of parallel EAs: Can a parallel EA from one group rely on at least one parallel EA from the other group to make progress concurrently? The definition for parallel execution given above does not provide this guarantee.
Nonetheless, it might be natural to assume that it is provided, especially in cases where the concurrent EA becomes part of the parallel EAs or blocks for the parallel EAs to finish their tasks. Thus, do EAs always need to inherit some forward progress properties, or is this rather controlled by the specific mechanism used to spawn EAs (e.g., if parallel EAs are spawned and yet the concurrent EA continues to execute)?
A similar question is whether weak forward progress guarantees should be restricted to a specific context, for example, a single parallel loop. This should answer the above question, but it could also constrain the implementation regarding how many resources to use for execution.
**Legacy code: Support std::thread or OS threads?** To provide support for legacy code that assumes threads and not lighter-weight EAs, implementations at least need to define guarantees for uses of std::thread. In most implementations, std::thread is probably implemented by just using OS threads, but this may be difficult when implementing some of the ideas in N3556. Thus, should implementations strive for compatibility features for OS threads? While this may remain an implementation-defined choice, I think this needs to be considered.
**How do we expose a thread compatibility mode?** One way to do it is by giving guarantees such as the stronger ones in N3556, which would make compatibility with threads a part of the EA semantics. Another way would be to provide an interface to bind an EA to a thread at runtime, thus effectively transforming it into a stronger EA for a while (e.g., via bind/unbind calls).
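As a purely hypothetical sketch of the second option (the names and the RAII shape are mine, not proposed wording), such bind/unbind calls could be wrapped in a scope guard so that a region of code temporarily gets full thread semantics:

```cpp
// Hypothetical interface sketch only: none of these names exist in the standard.
struct thread_binding {
    // bind: pin the current EA to an OS thread, enabling TLS, get_id(), etc.
    thread_binding()  { /* implementation-provided bind call would go here */ }
    // unbind: return to the weaker EA semantics when the scope ends.
    ~thread_binding() { /* implementation-provided unbind call would go here */ }
};

void legacy_callback() {
    thread_binding scope;   // thread compatibility mode for this region
    // ... call code that assumes std::thread semantics ...
}

int main() { legacy_callback(); }
```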
Cache-Timing Attacks and Shared Contexts
Billy Bob Brumley and Nicola Tuveri
Aalto University School of Science and Technology, Finland
{bbrumley,ntuveri}@tcs.hut.fi
Abstract. Cache-timing attacks recover algorithm state by exploiting the fact that the latency of retrieving data from memory is essentially governed by the availability of said data in the processor’s cache. Efficient and effective countermeasures to these attacks are needed. A shared memory context is a mechanism for reusing dynamically allocated memory. Focusing on public key cryptography within OpenSSL and its implementation of shared contexts, this paper examines the ability of a shared context to aid in mitigation of cache-timing attacks. The results are pessimistic towards this approach.
Keywords: cache-timing attacks, side-channel attacks, countermeasures, memory allocation
1 Introduction
Caches are used in modern computer architectures to improve memory hierarchy performance by exploiting the principle of locality, both in the data and in the instruction flows. The registers used to store data onboard the CPU are very fast but limited in number and capacity, while on the other hand main memory has a huge capacity but is several orders of magnitude slower. One or more levels of cache are hence placed between the CPU and the main memory, reducing the load on the memory hierarchy and the average latency of memory references. We refer to [2, Ch. 5] for an extensive reference on cache architectures and related terminology.
Different virtual address spaces reside simultaneously inside the cache and, although protection of the cache contents among different processes is supported by the hardware logic and the operating system, the cache remains a shared resource that can be used as a side-channel to leak information through the timing of events. As foreshadowed by Kocher [5, Sect. 11] and Kelsey et al. [4, Sect. 5], cache-timing attacks against cryptosystem implementations are able to recover key material using this side-channel. We refer to [3, Ch. 18] for a good summary of work in this field. The scope of this paper includes trace-based attacks [7, Sect. 2], where the attacker is able to obtain a cache trace detailing hits and misses for memory accesses during execution.
For the purposes of this paper, two specific results from the literature are particularly relevant.
* Supported in part by the European Commission’s Seventh Framework Programme (FP7) under contract number ICT-2007-216499 (CACE).
1. Percival describes an attack on the RSA implementation in OpenSSL 0.9.7c [8]. The cache trace is obtained through a “spy process” which simply loads its own contents from memory, filling the data cache, and then reads them back, measuring, for each set, the time required to read all its lines. The thus obtained trace is then analyzed to identify patterns caused by data dependent memory accesses: i.e. in the case of RSA the key dependent lookups into the precomputation table used by the sliding window exponentiation algorithm.
2. Brumley and Hakala present a framework for processing cache-timing data and use it to attack the ECDSA implementation in OpenSSL 0.9.8k [1]. The attack results suggest that the most significant time variances present in the trace are not due to data dependent memory accesses, but to differences in memory-access footprints among different operations: i.e. in the case of ECDSA the difference in usage of temporary variables between a point doubling and point addition step during a scalar multiplication algorithm.
Dynamic memory allocation is a costly operation for software. To reduce this cost, many software libraries (including [9], [10], [11]) implement mechanisms that allow reusing of dynamically allocated memory across different function calls: this improves performance. We focus exclusively on shared contexts, the solution adopted by OpenSSL [12]. In light of cache-timing attacks, one countermeasure in [1] proposes (but does not implement or evaluate) that the context randomize allocation of its resources.
To this end, this paper examines the ability of shared contexts to act as a countermeasure against cache-timing attacks. This involves the implementation and evaluation of a simple data alignment countermeasure. The results suggest that, in our experiment environment, the allocation policy enforced by the shared context cannot in isolation render the side-channel useless. That is, surprisingly in this case the shared context can do little to mitigate the known attacks.
We structure this paper as follows. Sect. 2 contains an outline of dynamic memory allocation in OpenSSL and relevant data structures. Sect. 3 describes our implementation of a cache-timing attack countermeasure enforced by the shared context. We discuss the evaluation of this countermeasure in Sect. 4 as well as provide some sample side-channel data. We close in Sect. 5.
2 Dynamic memory in OpenSSL BigNum
The core of many cryptosystems implemented by OpenSSL is the BigNum module, which provides arbitrary precision arithmetic. Dynamic memory is handled through OPENSSL_malloc, OPENSSL_realloc and OPENSSL_free which, by default, are mapped to the malloc, realloc and free functions of the standard library. In the following paragraphs we present the basic data unit of the BigNum module and the mechanisms used for allocation of temporary variables.
2.1 BigNum variables
The basic operand type inside the BigNum module is an abstract data type called \texttt{BIGNUM} which represents an arbitrary precision integer and whose internal structure is summarized in Fig. 1.
All the provided arithmetic operations automatically handle resizing of the \texttt{d} array to prevent overflows. To improve performance and avoid useless sequences of shrink and extend operations, \texttt{BIGNUM} variables are not downsized automatically, and an internal value is used to track how many words are actually used among the allocated words for the \texttt{d} array.
It is important to note that, while the \texttt{BIGNUM} structure holds the information needed for control flow and resizing, the actual binary representation of the number is not contained within the \texttt{BIGNUM} structure but resides in separate memory blocks, referenced by the \texttt{d} pointer.
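A minimal C sketch of this layout is given below. The field names follow OpenSSL's internal bignum_st and are illustrative only; they are not normative for any particular OpenSSL version.

```c
/* Illustrative sketch of the BIGNUM layout described above. */
typedef struct bignum_sketch_st {
    unsigned long *d;  /* pointer to a separately allocated array of words
                          holding the binary representation of the number */
    int top;           /* number of words currently in use */
    int dmax;          /* number of words allocated for the d array */
    int neg;           /* sign flag */
    int flags;         /* control-flow flags (e.g. statically allocated) */
} BIGNUM_SKETCH;
```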
2.2 The \texttt{BN_CTX} structure
Creation and destruction of \texttt{BIGNUM} variables and reallocations of the referenced bit arrays are, in general, quite time-consuming operations; therefore, the performance of functions using temporary variables, and of the whole library, may be improved by using some sort of caching functionality to minimize the number of these operations. The \texttt{BN_CTX} structure is designed to accomplish this goal and simulates a function stack, with the difference that the starting and ending of frames and the allocation of variables are made explicit.
Internally the \texttt{BN_CTX} type is defined as the structure depicted in Fig. 2, containing the count of currently assigned \texttt{BIGNUM} temporary variables, two variables for internal error handling, and two internal auxiliary structures:
- The \texttt{BN_POOL} structure (depicted in Fig. 3), which provides the caching functionality, preallocating clusters of \texttt{BIGNUM} variables and keeping track of unused variables to be reassigned.
- The \texttt{BN_STACK} structure which provides the per-function stack frame abstraction.
The \texttt{BN_CTX} object is meant to be created once with \texttt{BN_CTX_new()} and then passed to every function that may need to allocate temporary \texttt{BIGNUM} variables, which in turn will:
1. start a new frame inside the \texttt{BN_CTX} through \texttt{BN_CTX_start()}, which pushes the current value of the \texttt{used} variable into the stack through \texttt{BN_STACK_push()}.
2. request all the needed temporary \texttt{BIGNUM} variables using \texttt{BN_CTX_get()} which increases the \texttt{used} counter and returns a new temporary \texttt{BIGNUM} variable obtained through \texttt{BN_POOL_get()} after setting it to zero. \texttt{BN_POOL_get()}, in turn, returns the next unused \texttt{BIGNUM} variable from the pool: the \texttt{used} variable and the \texttt{current} pointer are used to track the first unused \texttt{BIGNUM} variable in the pool, and when no unused variables are available a new \texttt{BN_POOL_ITEM} is automatically created and added to the tail of the internal list.
3. operate on the temporary variables, potentially calling other functions which may require the BN_CTX for their own temporary variables.
4. end the frame started at the first step, using BN_CTX_end(), before returning to the calling function.
BN_CTX_end() retrieves the previous “frame pointer” from the stack through BN_STACK_pop() and, if its value is different from the current used counter, all the temporary variables acquired from the pool through BN_POOL_get() in the scope of the last frame are released calling BN_POOL_release(used - fp); used is then set to fp.
BN_POOL_release(num), in turn, releases the last num used BIGNUM variables in the pool, decreasing the used counter and accordingly updating the current pointer, without removing any BN_POOL_ITEM from the list.
The BN_CTX object is then destroyed, releasing all the allocated memory, with BN_CTX_free(), before the process is terminated.
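To make this calling convention concrete, the hedged C sketch below shows a routine obtaining one temporary BIGNUM from a caller-supplied context. The BN_CTX_* and BN_* calls are the public OpenSSL BigNum API; the surrounding function and its name are our own illustration, and error handling is kept minimal.

```c
#include <openssl/bn.h>

/* Sketch: compute r = (a + b) mod m using one temporary from the shared
 * context.  The caller owns the context (BN_CTX_new()/BN_CTX_free());
 * the callee brackets its temporaries with BN_CTX_start()/BN_CTX_end(). */
static int sum_mod(BIGNUM *r, const BIGNUM *a, const BIGNUM *b,
                   const BIGNUM *m, BN_CTX *ctx)
{
    BIGNUM *t;
    int ok = 0;

    BN_CTX_start(ctx);               /* open a new frame */
    t = BN_CTX_get(ctx);             /* temporary BIGNUM from the pool */
    if (t != NULL
        && BN_add(t, a, b)
        && BN_mod(r, t, m, ctx))     /* BN_mod may itself reuse ctx */
        ok = 1;
    BN_CTX_end(ctx);                 /* release temporaries back to the pool */
    return ok;
}
```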
Fig. 2. The BN_CTX structure: the pool holding a "bundle" of BIGNUMs, the structure simulating per-function "stack frames" inside the shared context, the number of BIGNUMs currently assigned, and internal variables for error handling.
Fig. 3. The BN_POOL structure and the related double-linked list of BN_POOL_ITEMs, tracking the number of used BIGNUMs and the total number of BIGNUMs allocated inside the pool.
3 Shared contexts and cache attack countermeasures
An attack on OpenSSL’s implementation of ECC appears in [1]. The scalar multiplication algorithm uses a modified windowed NAF representation with a precomputation phase that potentially exposes key material in each iteration due to table lookups of precomputed points. However, the authors conjecture that the most visible pattern in the collected traces is due instead to dynamic memory for variables in the curve arithmetic functions, for example the unbalanced memory footprint of the point doubling and point addition formulas. This can potentially permeate all the way down to the field arithmetic level, e.g. temporary variables used in field multiplication and field squaring. It is possible that the number and order of these operations can lead to a noticeable pattern in the trace, which can allow an attacker to infer a series of higher-level elliptic curve group operations. As such, they propose that the OpenSSL shared context should randomize the allocation of its BIGNUM variables to deter cache-timing attacks. While the approach sounds reasonable, they did not implement the proposed countermeasure to evaluate its effectiveness.
The analysis of the shared context summarized in Sect. 2 reveals that it is not possible to implement this countermeasure while limiting changes only to the BN_CTX internals: the only relevant addresses in the scope of the BN_CTX structure are those of the returned BIGNUM objects. While it might be possible to arrange mechanisms and structures to randomly serve BIGNUM addresses associated with different cache sets, the addresses of the memory blocks storing the actual binary representation referenced within each BIGNUM cannot be forced by the BN_CTX facility, as they are handled by the internal BIGNUM functions that deal with creation, deletion and resizing of the BIGNUM internal binary representation.
We then worked on the hypothesis that aligning all the dynamically allocated memory to the same cache set would be an effective, straightforward countermeasure in this case: we redefined the default OPENSSL_malloc() (alongside the associated free and realloc functions) to use a custom implementation returning addresses aligned to the same cache set and analyzed the resulting cache traces.
3.1 Targeted system
For this paper we focus on a system based on an Intel Pentium 4 processor so that results may be compared with those available in the literature. This processor implements Intel’s Hyper-Threading Technology (HTT), a form of Simultaneous Multithreading (SMT) where the single CPU exposes two logical processors, sharing the physical execution resources between them while duplicating the architecture state, thus allowing execution of multiple threads concurrently [6]. HTT is not exclusive to the Intel Pentium 4 family and is featured in most recent Intel processor families for mobile, desktop and server systems (i.e. Atom, Core i3, Core i5, Core i7, Itanium and Xeon). Although HTT is not a requirement for trace-based cache attacks, it relaxes the need to force context switches between the spy process and the victim process since, during execution, the two threads naturally compete for the shared resources, including the data cache.
The L1 data cache geometry of the Intel Pentium 4 processor is as follows: 64 B-long lines, 8 KiB total size, 4-way set associative, and 128 lines divided into 32 associative sets. Therefore in the targeted system each virtual address is split in three sections as depicted in Fig. 4: the offset contains the 6 least significant bits used to address one of the 64 bytes in a cache line, the set index contains the adjacent 5 bits used to select one of the 32 associative sets of the cache, and
the tag contains the remaining bits used to identify the requested address by distinguishing among different addresses that may be associated with the same set.
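For the stated geometry this split can be expressed as a pair of small helpers; the following C snippet is purely illustrative and not code from the paper.

```c
#include <stdint.h>

/* Address decomposition for the L1 data cache described above
 * (64-byte lines, 32 associative sets): bits 0-5 are the offset,
 * bits 6-10 the set index, and the remaining high bits the tag. */
static inline unsigned cache_set_index(uintptr_t addr)
{
    return (unsigned)((addr >> 6) & 0x1f);
}

static inline unsigned cache_line_offset(uintptr_t addr)
{
    return (unsigned)(addr & 0x3f);
}
```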
3.2 Single-set aligned addresses wrapper
For the `malloc()` wrapper we used the function `posix_memalign()` (defined in the POSIX.1d standard) to allocate a memory block aligned to a multiple of 0x00000800 so that the set index portion of the returned address is always set to zero. The first `sizeof(size_t)` bytes of the address returned by `posix_memalign()` are used to hold the `request_size` value, which is required for the `realloc()` wrapper, hence the length of the actually allocated block is `request_size + sizeof(size_t)` and the offset portion of the addresses returned by `CRYPTO_amo_malloc()` is set to `sizeof(size_t)`.
The `free()` wrapper simply calculates the address originally returned by `posix_memalign()` and calls the standard C `free()` function on it (the POSIX.1d standard mandates that addresses returned by `posix_memalign()` can be freed by `free()`). The `realloc()` wrapper simply allocates a new memory block of the desired size through the `malloc()` wrapper, copies the contents of the old memory block into the new one and then frees the old block through the `free()` wrapper.
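A C sketch of wrappers with this behaviour is shown below, built on `posix_memalign()` as described. The function names are illustrative, and hooking such wrappers into OpenSSL (for instance via `CRYPTO_set_mem_functions()` in the 0.9.8 series) is our assumption for illustration rather than the mechanism the paper states.

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <string.h>

/* All blocks are aligned to a multiple of 0x800 so the set-index bits of
 * the returned address are zero; the requested size is stashed in the first
 * sizeof(size_t) bytes so that the realloc wrapper can copy old contents. */

static void *amo_malloc(size_t n)
{
    void *block = NULL;
    if (posix_memalign(&block, 0x800, n + sizeof(size_t)) != 0)
        return NULL;
    *(size_t *)block = n;                   /* remember the requested size */
    return (char *)block + sizeof(size_t);  /* offset bits = sizeof(size_t) */
}

static void amo_free(void *p)
{
    if (p != NULL)
        free((char *)p - sizeof(size_t));   /* posix_memalign blocks are free()-able */
}

static void *amo_realloc(void *p, size_t n)
{
    void *q = amo_malloc(n);
    if (q != NULL && p != NULL) {
        size_t old = *(size_t *)((char *)p - sizeof(size_t));
        memcpy(q, p, old < n ? old : n);
        amo_free(p);
    }
    return q;
}
```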
4 Results
To analyze the effects of the changes described in Sect. 3 we need a spy process to collect the cache-timing traces. We outline our spy process in Fig. 5; it is adapted from [8] and we refer the reader accordingly for a discussion of its origin and operation. The `movnti` instruction is a move with non-temporal hint; it seeks to avoid cache interference by bypassing the cache when writing the obtained timings to memory. The set-0 aligned input buffer at `ecx` contains 8192/4 copies of 0x00000001 which causes all `imul` instructions to carry out nothing more than a high latency NOP.
The illustrations in Fig. 6 depict the typical data cache traces obtained by the described spy process running concurrently with OpenSSL performing an ECDSA signature operation: the gradient of each cell measures the cache set access time, hence time moves within each cell, then from bottom to top through...
mov $8192,%edi
LOOPA:
sub $4,%edi
mov $1,(%ecx,%edi)
jnz LOOPA
xor %edi,%edi
rdtsc
mov %eax,%esi
LOOPB:
; cache set 00
imul 0x0000(%ecx),%ecx
imul 0x0800(%ecx),%ecx
imul 0x1000(%ecx),%ecx
imul 0x1800(%ecx),%ecx
rdtsc
sub %esi,%eax
movnti %eax,0x00(%ebx,%edi)
add %eax,%esi
; cache set 01
imul 0x0040(%ecx),%ecx
imul 0x0840(%ecx),%ecx
imul 0x1040(%ecx),%ecx
imul 0x1840(%ecx),%ecx
rdtsc
sub %esi,%eax
movnti %eax,0x01(%ebx,%edi)
add %eax,%esi
; ... the measurement block is repeated analogously for cache sets 02
; through 31, advancing each imul operand by 0x40 and the movnti output
; offset by one per set ...
add $32,%edi
cmp <buffer len>,%edi
jge END
jmp LOOPB
END:
Fig. 5. Pentium 4 data cache spy process. ecx holds the input buffer address and ebx the output buffer address.
consecutive cache sets and finally from left to right when the measurements are repeated. Although the time flow is thus distributed, it may be easier to consider the data as vectors whose length is equal to the number of cache sets, with time simply moving from left to right. That is, one column vector is the output of one iteration of LOOPB in the spy process. High (low) latencies suggest cache misses (hits).
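To make the column-vector view concrete, the small C sketch below classifies each cell of such a trace as a likely hit or miss. The one-byte-per-set output layout and the fixed threshold are our assumptions for illustration only, not details taken from the paper.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define NSETS 32  /* associative sets in the L1 data cache */

/* Interpret the spy output as one column vector of per-set latencies per
 * LOOPB iteration and print 'M' for a likely miss, 'h' for a likely hit. */
static void classify_trace(const uint8_t *buf, size_t iterations,
                           uint8_t threshold)
{
    for (size_t it = 0; it < iterations; it++) {
        for (unsigned set = 0; set < NSETS; set++) {
            uint8_t latency = buf[it * NSETS + set];
            putchar(latency > threshold ? 'M' : 'h');
        }
        putchar('\n');  /* one column vector per output line */
    }
}
```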
In Fig. 6 we compare a typical trace obtained through the standard version of OpenSSL 0.9.8o with a typical trace obtained through the same code patched to use the described memory wrappers.
Fig. 6. Cache-timing data from a spy process running concurrently with an OpenSSL ECDSA signature operation, using the unmodified OpenSSL 0.9.8o code (Top) and the patched version implementing the memory wrappers described in Sect. 3.2 (Bottom).
In the unmodified OpenSSL case (top), there is clear correlation between the cache-timing trace and the state of the scalar multiplication algorithm, i.e. whether it is performing a point doubling or point addition step. Identified by manual inspection using known inputs, sets 6, 7, 20, and 31 are good indicators. There are eight point additions separated by a number of point doublings. This is to be expected as no countermeasures have since been implemented in any OpenSSL version to prevent the attacks in [1].
Deploying the countermeasure described in Sect. 3.2, we expected the resulting traces to contain mostly cache misses in set 0 and mostly cache hits in all other sets. The intuition is that, if all dynamic memory allocated within OpenSSL is aligned at different addresses mapping to cache set 0, then the remainder of the cache should not show any interesting activity. Essentially the countermeasure seeks to disable all but a single cache set.
Surprisingly, the results disagree with this intuition. That is, in the patched OpenSSL case (bottom), there is still clear correlation between the cache-timing trace and the algorithm state. Set 17 is a good indicator. There are eight point additions separated by a number of point doublings.
5 Conclusion
Cache-timing attacks represent a serious threat to security-critical software. This is evidenced by bustling activity in academic research in the area over the past decade, as well as the response in the software development community. For example, a number of patches to the OpenSSL library are available attempting to mitigate cache-timing attacks and, more generally, microarchitecture attacks. Proposing and evaluating countermeasures to these attacks is an important topic that can yield insight into improving the security of cryptosystem software implementations, or more generally any software with state that should remain secret.
In this paper, we focused on one such proposed countermeasure related to OpenSSL’s handling of dynamically allocated memory using shared contexts. After reviewing OpenSSL’s implementation of shared contexts, we implemented a countermeasure that aligns all dynamically allocated memory within OpenSSL at a single cache set. In contrast to the randomly aligned countermeasure proposed, but not implemented, in [1], our intention was to start with this basic countermeasure and, based on the implementation results, build up to a more secure, efficient, and robust one.
After patching the OpenSSL source code with our implementation and evaluating the countermeasure with respect to OpenSSL’s implementation of the ECDSA, the resulting cache-timing traces reveal that, contrary to our initial assumption, even when aligning all dynamically allocated memory to the same boundary, there is still a significant amount of correlation between the cache-timing data and the algorithm state, particularly outside of said boundaries. These results suggest that the viability of shared contexts in cache-timing attack countermeasures is, at best, limited.
Excluding the shared context, it remains to identify what mechanism is ultimately responsible for said behavior. This could be a software or microarchitecture mechanism, or even a combination of multiple mechanisms. Suspects worth investigating are the stack, the trace cache, and higher level data caches. We defer this task to future work.
References
3. Çetin Kaya Koç (ed.): Cryptographic Engineering. Springer (2009)
Identity Credential Issuance with Trusted Computing
Norazah Abd Aziz\textsuperscript{a}, Lucyantie Mazalan\textsuperscript{b}
Cyberspace Security Centre, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur
\textsuperscript{a}azahaa@mimos.my, \textsuperscript{b}lucyantie.mazalan@mimos.my
ABSTRACT
In a client-server environment that deals with multiple clients, the server needs a mechanism to manage the issuance of client credentials for security authorization. Credentials are created using a particular platform's own identities and function as authentication credentials for the platform itself in network communication. However, such credentials can easily be shared, copied and stolen. This can lead to anonymous service sharing and, worse, to stolen credentials being used for phishing attacks against the original user. One solution to the problem is to use tamper-resistant hardware to which a credential is bound, such that a credential can only be generated and used in connection with the hardware. For that purpose, manufacturers have started to embed into computers a tamper-resistant piece of hardware, called the trusted platform module (TPM), as specified by the Trusted Computing Group. This mechanism ensures that credentials can only be issued when a TPM is present in the platform, thus guaranteeing the platform's origin. This paper describes the components involved in the credential issuance method within the server's trusted computing domain. To implement our approach, a client-server application is used as an interface over the secure communication channel for credential requests. The server acts as a Trusted Third Party to verify authorized users in this environment.
Keywords: Credential, Trusted Computing, Trusted Third Party
1.0 INTRODUCTION
Trusted Computing (TC) is a technology developed and promoted by a non-profit industry consortium that aims to enhance the security of trusted computing hardware and software building blocks. The main goal of the consortium is to produce the specifications for the Trusted Platform Module (TPM) [1] and surrounding software architectures such as the TCG Software Stack (TSS) [2]. These components have the potential to be used for security- and trust-related services like remote attestation and key management.
In this paper, we discuss credentials that serve the authentication services of a client-server network environment. In particular, we describe credential issuance and other factors that contribute to the implementation of such activities. Our approach supports both the Windows and Linux platforms. The first contribution addresses the question of how trust relationships between remote platforms can be established using TC. The approach presented in this paper allows trusted communication channels to be established by means of the TCG's specified remote attestation. The approach introduces a so-called attestation proxy that is placed in front of the actual application and performs a mutual platform attestation of the two communicating parties.
This paper is organized in the following way. It starts with a brief introduction in part one, followed by part two, which details the client and server environment from a security perspective. Part three covers credential issuance, discussing the credential issuer components. The development components are presented in part four, and part five describes the basic architecture of the attempt, containing the basic system requirements, followed by the current implementation in part six. The paper ends with a conclusion.
2.0 CLIENT SERVER ENVIRONMENT
The client-server environment is an architecture that helps to redefine the end user's role in application systems. It also manages computer resources among multiple end users in a computer network environment. Basically, the client-server environment is an approach, or network design, that splits an application's processing across multiple processors to gain the maximum benefit at the least cost while minimizing the network traffic between machines. The key is to split the application processing. In a client-server mode each processor works independently but in cooperation with the other processors; both sides rely on each other to perform independent activities that together complete the application process. The distinguishing feature of a client-server system environment is that it contains cooperative processing capabilities through the use of networks.
A server is a process that resides at the central location of resource data to provide services to one or more clients. The server is connected to the network and is made available to the clients, whereas a client is an intelligent workstation processor that is capable of making requests to servers to establish certain application processes. Since the server is usually the central location for critical data, adequate security measures need to be taken to ensure data safety. Security is implemented to provide robust access control, data integrity, confidentiality and accountability services in the client-server environment. The natural question that results as a corollary of this is: how do we establish trusted security in such a system?
One of the available approaches is the trusted computing network environment demonstrated by the TCG specifications. The goals of trusted computing are to protect the most sensitive information, such as private and symmetric keys, from theft or use by malicious code. Trusted computing assumes that client software is going to be compromised at some time during its life, and provides protection for its sensitive keys in case this should happen [3]. This concept of trusted computing covers a rather vast set of specifications and standards, ranging from the core trusted platform module to both processor and operating system. Within this specification a trusted platform is one that behaves in an expected manner for a particular purpose [4].
Security plays a significant role in a client-server network environment. Users are typically identified with a user account, and system-specific controls can be mounted on these accounts to provide security mechanisms. Security services such as access control and accountability can be implemented in this manner, with the accounts providing a form of stable identity. Other services such as authentication and non-repudiation clearly also rely on the establishment and preservation of stable identities. Thus most of the generic security services are reliant on the provision of stable identities. This environment, by definition provides a function of a trusted third party who can provide assurance as to the identities of entities in the network. With a TPM hardware embedded in the platform and a trusted third party in work, the network provides integrity, creation and use of digital signatures and privacy protection mechanisms.
One of the potential security services is attestation [5]. Attestation is a process of assuring that information is accurate, and it is a critical concept for a trusted platform, because trust in the system is based on taking measurements and checking those measurements. If a system is not able to attest to the accuracy of that information, then trust in the platform does not exist. Attestation is closely related to authentication. In a network environment, anonymous authenticated access could strengthen the security mechanism. According to [4], the authentication performed by the access requestor requires access to the facilities without necessarily revealing its identity to external parties. This requirement stems from the possible need for each individual to maintain some degree of plausible deniability as to their presence. One of the approaches to fulfilling this requirement is Direct Anonymous Attestation (DAA) [6].
3.0 Credential Issuance
Trusted Third Party
In cryptography, a trusted third party (TTP) is an entity which facilitates interactions between two parties who both trust the third party; they use this trust to secure their own interactions. TTPs are common in cryptographic protocols; an example is a certificate authority (CA) [7]. As an example, imagine two people, Alice and Bob, who wish to communicate securely using cryptography. Alice may need to obtain a key to encrypt messages before sending them to Bob. In this case, the CA is the trusted third party which sends the key to Alice and Bob. Alice then uses the key to send secure messages to Bob, because she trusts the CA.
The Public Key Infrastructure (PKI) depends on the concept of a TTP. Nowadays, the PKI technology is widely used in computing and networking environment. A PKI enables users of a basically unsecure public network such as the Internet to securely and privately exchange data through the use of a public and a private cryptographic key pair. The key pair is obtained and shared through a TTP. Thus, the public key cryptography is also based on the digital signing of public keys. In this case, however, one central authority signs all the public keys, and everybody trusts the central authority. The authority’s public key is distributed among the users, who can use it to verify the signatures on public keys of other users [8].
With certificate authorities, every user in the system trusts the CA through a process of digital signing, since everything signed by the CA is considered trusted. The CA sends its public key to the users to let them verify its signatures. To enrol with the CA, the users must submit their information (names and public keys) so that the trust is mutual. The CA then verifies the authenticity of the submitted information and, if everything is correct, signs the submitted public key with its private key. Finally, all the signed information, including the CA's signature, is sent back to the end users.
Privacy CA
In a verification process, each TPM has to generate a unique RSA key pair called the Endorsement Key (EK). Authentication of a valid TPM belonging to a platform is done through certification of the EK, issued by a CA or by a verifier which itself has knowledge of the EKs of all genuine TPMs. Basically, if a verifier wants to ensure that the platform runs a secure operating system, the TPM sends its measurements about the platform (the PCR values) to the verifier, and the TPM validates the PCR values using the EK. The verifier thus knows that the PCR values are valid, and that the TPM itself is valid. However, this approach raises a privacy problem: all transactions become linkable to each other, since two different verifiers can tell that they are talking to the same platform [9]. Figure 1 illustrates this issue.

To solve this problem, the Trusted Computing Group (TCG) has proposed two protocols which remotely convince a communication partner that a trusted hardware device is indeed trusted hardware. These protocols enable two communication partners to establish that the other end is a secure computing platform, and therefore that the data exchange is safe. Some degree of privacy is provided to users of the platform by these remote identification protocols; however, the communication partners can only establish that the other end uses a trusted hardware device, not which particular device it is [11]. These specified protocols are the Privacy CA and Direct Anonymous Attestation (DAA). In this paper we do not discuss the DAA protocol, since we use the Privacy CA in our prototype implementation. The Privacy CA protocol was initially used by the group to solve the above-mentioned privacy issue for a user who needs verification of a platform containing a TPM.
The Privacy CA involves a TTP in each transaction, and this party must be fully trusted by all other parties. The Privacy CA is assumed to know the public parts of the Endorsement Keys of all valid TPMs, where a valid TPM is an uncompromised TPM. In contrast, a rogue TPM is a TPM that has been compromised and had its secrets extracted. Whenever a TPM needs to authenticate itself to a verifier, it generates a second RSA key pair, called an Attestation Identity Key (AIK). It then sends the AIK public key to the Privacy CA and authenticates this public key using the EK. The Privacy CA will issue a certificate on the TPM's AIK if it finds the EK in its list. Through this protocol, there are two possibilities to detect a rogue TPM [10]:
1. If the distribution of the EK secret key which was extracted from a TPM is detected and announced as a rogue secret key, the Privacy CA can compute the corresponding public key and remove it from its list of valid Endorsement Keys.
2. If the Privacy CA gets many requests that are authorized using the same Endorsement Key, it may need to reject those requests. In practice, the exact threshold of requests allowed before a TPM is tagged as rogue is determined by some risk-management policy, depending on the actual environment and applications. (A sketch of this policy logic follows the list.)
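The self-contained C sketch below models these two rules. The data structures, the threshold value and the function names are illustrative assumptions for this paper, not part of the TCG specifications or of our implementation.

```c
#include <stdbool.h>
#include <string.h>

#define MAX_EK            1024
#define ROGUE_THRESHOLD     16   /* assumed risk-management limit on AIK requests */

struct ek_entry {
    unsigned char ek_pub[256];   /* public part of a known Endorsement Key */
    unsigned      aik_requests;  /* AIK certification requests seen so far */
    bool          revoked;       /* rule 1: EK announced as extracted/rogue */
};

static struct ek_entry ek_list[MAX_EK];  /* EK enrolment is omitted here */
static unsigned ek_count;

/* Returns true if a certificate may be issued on the requested AIK. */
static bool privacy_ca_allow(const unsigned char ek_pub[256])
{
    for (unsigned i = 0; i < ek_count; i++) {
        struct ek_entry *e = &ek_list[i];
        if (memcmp(e->ek_pub, ek_pub, sizeof e->ek_pub) != 0)
            continue;
        if (e->revoked)                              /* rule 1 */
            return false;
        if (++e->aik_requests > ROGUE_THRESHOLD) {   /* rule 2 */
            e->revoked = true;
            return false;
        }
        return true;          /* EK is known and within policy */
    }
    return false;             /* unknown EK: not a valid TPM */
}
```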
4.0 TCG SOFTWARE STACK AND TBS
The TCG also defines an accompanying software infrastructure, the TCG Software Stack (TSS), alongside the TPM hardware. The TSS acts as the interface between applications and the TPM (through the TPM driver). The TSS 1.2 specification is provided and standardized by the TCG. The TSS design goals are to supply a single entry point to the TPM functionality (exclusive TPM access), to synchronize concurrent TPM access, to manage TPM resources (key slots, authorization sessions, etc.) and to build TPM command messages according to the TPM specification. The TSS is designed as a stack of discrete modules with clearly defined interfaces between them.
TCG Device Driver Library (TDDL), TSS Core Services (TCS) and TSS Service Provider (TSP) are software layers in TSS. Every layer provides different sets of functionalities. Figure 2 shows the TCG Software Layering architecture. The TCG Device Driver Library (TDDL) is an intermediate module between the TCS and the kernel mode TPM Device Driver (TDD). The TDDL supplies the conversion between user mode and kernel mode. TDDL commands are sent at byte level as a stream to the TPM Device Driver. There is typically one TDDL per TPM and is provided by the TPM manufacturer. Access to the TPM is exclusive and synchronized via the TDDL.
The TSS Service Provider (TSP) is the top-most module and provides a rich, object-oriented interface for applications to incorporate the full capabilities of a TCG-enabled platform. The interface used by applications to access the TSP is the TSS Service Provider Interface (TSPI). While not an architectural requirement, it is intended that the TSP obtain many TCG services, such as TPM byte stream generation and key management, from the TCS [2]. The TSP provides contexts which are used by applications to manage TPM objects such as policies, keys, PCRs and others. Furthermore, it is provided as a library and used by all high-level applications, for example a Cryptographic Service Provider (CSP) or a user application. Therefore, application developers do not need in-depth TPM knowledge.
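As an illustration of how an application reaches the TPM through the TSPI, the hedged C sketch below follows the TSS 1.2 call names as implemented by TrouSerS; the exact header paths and signatures should be checked against the installed TrouSerS headers, and error handling is reduced to returning the first failing result code.

```c
#include <tss/tss_error.h>
#include <tss/tspi.h>        /* TrouSerS TSPI headers; paths may vary */

/* Create a TSP context, connect to the local TCS and obtain the TPM object. */
static TSS_RESULT attach_to_tpm(TSS_HCONTEXT *phContext, TSS_HTPM *phTPM)
{
    TSS_RESULT rc;

    rc = Tspi_Context_Create(phContext);          /* new TSP context object */
    if (rc != TSS_SUCCESS)
        return rc;

    rc = Tspi_Context_Connect(*phContext, NULL);  /* NULL = local TCS daemon */
    if (rc != TSS_SUCCESS)
        return rc;

    return Tspi_Context_GetTpmObject(*phContext, phTPM);
}

/* When finished, release resources with Tspi_Context_FreeMemory(hContext, NULL)
 * and Tspi_Context_Close(hContext). */
```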
Figure 2: TCG Software Stack Architecture [12]
On the Windows platform, applications instead reach the TPM through the TPM Base Services (TBS), a Remote Procedure Call (RPC) based service that is only accessible from the local machine.
TBS provides virtualization of TPM resources allowing multiple applications (TSS, OS services etc.) to access the TPM. It allows restricted access to TPM commands on a “per command” basis. In TBS, the resources such as key handles and auth handles are replaced by virtual handles. TBS keeps the mappings of handles and if TPM runs out of resources, TBS takes care of swapping out entities from the TPM. In this situation, the virtual resource handles are not affected. If a swapped out resource is used again (via its virtual resource handle) the TBS tries to reload the entity into the TPM [12].
The TBS component is divided into four functional areas: Resource Virtualization, Command Scheduling, Power Management and Command Blocking [13]. Each command submitted to the TBS is associated with a specific entity to ensure that different entities cannot access each other's resources. This is accomplished by creating one or more contexts for an entity, which are then associated with each subsequent command submitted by that entity. The TBS can then execute TPM commands under the appropriate context after receiving a command that includes a context object. An entity creates the context before it accesses the TBS and maintains the context until it is finished performing TBS accesses. For example, in the case of a TSS, the TCG core services (TCS) component of the TSS would create a TBS context when it starts up, and it would keep that context active until it shuts down [13].
5.0 Architecture
The overall architecture of the proposed implementation is depicted in Figure 3. The implementation has the following characteristics:
- A server machine on Linux platform has the TPM chip hardware.
- Client machines on Linux or Windows platform have TPM chip hardware.
- A trusted third party software embedded in the server and acts as the credential issuer uses the Privacy CA protocol.
- A database connected to the server is to manage the information of the client’s credentials.
- The server and the clients are connected by the network and communicate over a secured channel.
- TSS and TBS are software components that perform the trusted protocols and are installed on both server and client for the TPM application implementation.
The server serves as the manager for the clients with regard to the credential issuance procedure. A request made by the client through the web services is acknowledged by the server, and the credential procedure is carried out over a secured connection. The server requests the credential from the trusted credential issuer, that is, the trusted third party. The trusted third party grants the credential to the server, and the server returns the credential to the client through the secure communication channel.
In order to manage the credentials of its clients, the server needs a database to store and clear the information about the credentials requested by the clients. The server provides request and revoke operations that record and remove credential information in the database.
6.0 Current Implementation
Referring to the above architecture, all components must be installed before development can proceed. The current phase of the implementation focuses on providing the development environment for the server and the trusted third party.
These software packages installation and configuration are required to facilitate the above mentioned components:
- Qt as a cross-platform application framework for desktop and embedded development. This software includes an intuitive API and a C++ class library, with integrated tools used for client and server GUI development in this research. More information on Qt can be found in [14]. This research uses the Qt4 version.
- Secure communication channel and reliable transport protocol to establish a reliable communication between server and client. This is to ensure the security of the data throughout the application process as well as protecting it during the credential issuance.
- OpenSSL [15] as an open source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols, including the necessary cryptographic operations. The functionality provided by an OpenSSL engine allows the implementation of certificates issuance in this research development.
- TrouSerS TCG Software Stack [2] to provide an API to the operating systems and applications to implement the functionality provided by the TPM.
- The TrouSerS TPM Tools [16] are used for testing, to check TPM capabilities on the TPM hardware and software. This tool is a set of open source programs that provides commands allowing a platform administrator to manage and diagnose the TPM resident on a platform.
- A TPM Device Driver is required in order to use the TPM after Linux has bootstrapped. This is necessary for both hardware and software TPM implementations. For a TPM hardware module, the device driver is specific to the TPM manufacturer, while for a software TPM the device driver can be obtained by installing the TPM Emulator. This research is currently being carried out on an Intel machine provided with an Infineon TPM chip [1][2]. The experiment outputs are shown in Figures 4 and 5.
(Console transcript: the tpm and tpm_infineon kernel modules are loaded with insmod, and the TPM version and status entries read from sysfs report an active TPM.)
Figure 4: Console commands to verify the existence of the TPM Device Driver.
Figure 5: Console output displaying the detected PCR values.
- **TPM Manager** as a realization of open source TPM management software that provides an easy-to-use graphical user interface. The TPM Manager is installed and run for the experiment. The experiment outputs are shown in Figures 6, 7 and 8.
Figure 6: TPM Manager interface displaying the TPM and TSS status.
7.0 CONCLUSION AND FUTURE WORKS
This paper describes, in general, the implementation setup requirements of the current research progress on an identity credential issuance system. The software components that have been installed and configured are Qt, the OpenSSL engine, the TrouSerS TCG Software Stack, the TrouSerS TPM Tools, the TPM Device Driver, the TPM Emulator and the TPM Manager. Future work will focus on the challenge of implementing a trusted and secure third party able to generate credentials, as well as on dealing with different types of attestation protocols. In addition, this research will also cover the development of the client application on the Windows platform.
8.0 REFERENCES
J2EE and .Net security
1. Introduction
1.1. Document Revision History
<table>
<thead>
<tr>
<th>Author</th>
<th>Document Version</th>
<th>Last Modified Date</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ger Mulcahy</td>
<td>1.2</td>
<td>12/02/2002</td>
<td>Third draft</td>
</tr>
<tr>
<td>Ger Mulcahy</td>
<td>1.1</td>
<td>01/02/2002</td>
<td>Added information on JAAS, JSSE, Project Liberty, DotGNU, etc.</td>
</tr>
<tr>
<td>Ger Mulcahy</td>
<td>1.0</td>
<td>21/01/2002</td>
<td>Second Draft – added contributors</td>
</tr>
<tr>
<td>Ger Mulcahy</td>
<td>0.1</td>
<td>04/01/2002</td>
<td>First draft</td>
</tr>
</tbody>
</table>
1.1.1. Contributors
My thanks to the following for their assistance with this article:
Alan Danziger, Mark Curphey, Alan Faber, Elias Levy, Tony Northrup
1.2. Overview
A number of general comparative articles have been written discussing the pros and cons of these two competing technological platforms. The intention of this paper is to discuss J2EE and .Net at a high level from a security perspective, examining the tools and methodologies the platforms use to provide secure development and deployment environments.
This introduction section covers a brief, incomplete discussion of key features of both platforms. It will not discuss areas that are not analogous between platforms. For more information on both, see the references section of this document.
Note that .Net is a product platform, whereas J2EE is a standard specification, which is implemented to varying degrees of fidelity by a number of vendors. For this reason, direct comparisons may be difficult in certain areas without going into vendor specifics.
For the purposes of this article no real distinction is made between .Net and the .Net Framework, which forms one part of the .Net strategy.
While Microsoft is pushing .Net as their strategy for Web Services, this document will not discuss the two platforms from the point of view of Web Services, nor does it describe COM+, as this is not part of the .Net Framework.
1.3. What is .NET made of?
The .Net initiative, a significant undertaking by Microsoft, is an attempt to tie together applications, development languages, operating systems and data stores into a unified, distributed enterprise platform.
The .Net Framework is a reworking of Windows DNA, Microsoft’s previous enterprise development platform, consisting of the CLR (Common Language Runtime), base classes and presentation through ASP.Net. .Net is tightly integrated into the Windows OS environment (with .Net support integrated into all of Microsoft's enterprise applications), but the portability presented by the CLR/CLS (Common Language Specification)/CTS (Common Type Specification) combination means that if Microsoft wanted, .Net could be implemented on other platforms.
1.3.1. **C#**
C# is an object-oriented language, derived from C++, and syntactically similar (with some exceptions) to the Java language. C# is a “brand new” language, and may suffer from teething troubles as a result. While it is not a part of the .Net framework, it is a part of the .Net strategy, and mirrors Java’s function within J2EE.
1.3.2. **Common Language Runtime (CLR)**
All .Net components are compiled into an interim language called Intermediate Language (IL), which is then executed in the runtime environment provided by the CLR. A growing number of compilers for languages such as C#, Visual Basic.Net, C++, etc. are available for use with .Net, giving programmers the ability to work in .Net using familiar languages. The CLR is part of the core of the .Net Framework, providing a secure execution environment through the principles of what Microsoft describes as managed code and code access security. The CLR also provides housekeeping functionality through garbage collection.
1.3.3. **ASP. Net**
ASP.Net is an onward progression from ASP (Active Server Pages). Differences from ASP include programmatic changes and the fact that ASP.Net pages are now compiled and executed in the CLR.
1.3.4. **ADO.Net**
ADO.Net is the successor to Microsoft’s ActiveX Data Objects (ADO). Whereas ADO provided two-dimensional access to data, through the use of rows and columns, ADO.Net describes data as objects and utilises XML for transmitting data between application tiers.
1.3.5. **Assemblies**
An assembly is a collection of code used as the building block of a .Net framework deployment. Assemblies are designed to include security information (in terms of permissions requested and strong name), versioning information (which enables multiple versions of software to co-exist on a single system, theoretically eliminating DLL conflicts), code and supporting resources. Assemblies can be built using the Visual Studio.Net IDE and the Assembly Generation Tool.
1.4. **What is J2EE made of?**
J2EE is Sun’s reference standard for enterprise development, first introduced in December 1999, and now at version 1.3. Core elements of J2EE are described in this section.
1.4.1. **Java language**
Java is an object-oriented language derived from C++, with features that simplify coding such as memory management through garbage collection, no pointers, etc. Java is designed to be “Write once, run anywhere”, this goal being achieved through the use of portable bytecode.
1.4.2. **JVM/JRE**
The JRE (Java Runtime Environment) consists of the Java Virtual Machine (JVM), a just-in-time compiler and some foundation classes. Java classes are compiled into platform-independent bytecode that is executed in the JVM.
1.4.3. **JSPs and Servlets**
Java Server Pages (JSPs) are analogous to ASP technology, providing the capability to build dynamic web pages composed of HTML with embedded dynamic components, e.g. references to Beans. Servlets are described as applets that run on the web server, and are used where CGI would traditionally have been employed to build dynamic web applications.
1.4.4. EJB
Enterprise Java Beans (EJB) are used to build distributed applications by providing the communications and execution framework for distributed components. Critical services provided by an EJB container include transaction management, security, resource management and persistence.
1.4.5. **JDBC**
Java Database Connectivity (JDBC) is an API for connecting to relational databases, manipulating their contents, and processing the output of issued SQL statements. Numerous database vendors have developed drivers based on the JDBC API.
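A brief sketch of the API in use follows; the JDBC URL, credentials, table and column names are hypothetical, and a vendor driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Connect, issue a parameterised SQL query and process the result set.
public class JdbcDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/trades"; // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "broker", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, amount FROM trade WHERE broker = ?")) {
            ps.setString(1, "alice");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("amount"));
                }
            }
        }
    }
}
```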
2. Security Models
Enterprise development requires that security targets be met for the development and deployment of code. Development platforms and runtime environments must provide for authentication and authorisation, data integrity, audit trails, non-repudiation and privacy of data.
The responsibilities for these tasks are spread across multiple platform elements in a typical n-tier application architecture. This section will attempt to discuss the approaches taken by .Net and J2EE, focussing mainly on the runtime environments.
2.1. .NET Framework security architecture
The CLR and base classes are responsible in large part for security functions within the .Net framework. The CLR uses a number of criteria to determine the security permissions of code to be executed, and obtains some security information from XML configuration files.
The base classes control access to resources such as the file system by determining the permissions of the caller.
.Net's two primary security functions are described in the following sections.
2.1.1. Code access security
Code access security is a core part of the CLR’s security function. Code access security determines the level of trust assigned to a piece of code based on its origins, publisher and other factors. Some key concepts used to define code access security are described below.
2.1.1.1. Evidence-based security
.Net uses the concept of “Evidence-based security”, to describe the process by which assemblies are examined by the CLR at runtime. The CLR queries assemblies using the following primary criteria:
- Where did this code originate?
- Who created this assembly?
The first question can be broken out into “Which URL did the assembly originate from?” and “Which zone did the assembly originate from?”. Microsoft uses the concept of zones to describe security environments like the Internet, local networks or intranets, the local machine, etc.
Evidence can also be in the form of a digital certificate signed by the publisher of the assembly or the strong name (a unique identifier, consisting of a text name, digital signature, and a public key) of the assembly.
The second question is reasonably straightforward – “What information is available on the creator of this assembly?".
The evidence gathered is included as part of an assembly’s metadata. Metadata can include information on types, relationships with other assemblies, security permissions requested as well as a description of the assembly. Metadata is examined by the CLR at various points, including by the verifier, which ensures that types are correct prior to compilation of IL to native code. The verifier is also responsible for ensuring that the metadata associated with an assembly is valid.
The CLR examines the metadata to establish an identity for the assembly and determines the permissions allocated to the assembly based on the security policy.
2.1.1.2. Permissions
Permissions within the CLR are the building blocks of access control. A code access permission in this context is the right of a piece of code to access a particular resource or perform a particular operation. When an assembly is built, the permissions that it requires to run can be included as part of its description.
At runtime the assembly requests these permissions, and those permissions that it requests are either granted or denied by the CLR. An assembly will never be given greater permissions than the current security policy allows, but may be given lesser permissions than it requests.
Examples of permissions are SecurityPermission (the permission to view or modify the security policy), UIPermission (the right to create sub-windows and make use of the clipboard) and the RegistryPermission class (which gives access to Windows Registry details).
Permissions can be grouped into permission sets for ease of administration. These permission sets are then associated with code groups, which are described in the next section. Examples of permission sets are Nothing (no permissions at all, so code cannot execute), Internet (the permissions assigned to untrusted code) and Everything (all standard permissions, with the exception of the permission to skip code verification).
At runtime, the CLR performs what is known as a stack walk to determine whether the calling assembly has permission to access a particular resource, checking requested permissions against granted permissions for each caller on the stack.
If an assembly’s request for access to a resource is denied, a SecurityException is thrown.
2.1.1.3. Security Policy
The security policy is administered using the Code Access Security Policy Tool (Caspol.exe) or the .Net Framework configuration tool. Administrators set the policy for assemblies and application domains, the CLR uses the evidence described above to identify an assembly, and then uses the security policy to determine the permissions the assembly has at runtime.
The policy identifies code by organising code into groups that categorise based on the evidentiary information described, such as the zone from which the code is loaded. If no other information is available, the default policy for the zone from which the code was obtained will be used to determine the permissions that an assembly has.
2.1.2. Role-based security
User membership in a role within a .Net application helps to determine the access that the user has to perform particular operations and access resources. For example, in the case of a financial application, a Broker role might have permission to initiate, authorise and cancel trades on behalf of an individual or financial institution.
Role-based security describes the means by which the .Net framework identifies, authenticates and authorises an individual user from any of a number of authoritative sources. When a user is identified, their authenticated identity (and role membership) is denoted by the term *principal*. A principal can be a member of one or more roles; for example, an individual could be a member of a Broker and an Executive role.
Role-based security employs a permission structure similar to that of code access security, with permissions managed through the PrincipalPermission object.
Authentication and authorisation sources for users can be through Windows machine security, domain security or some custom authentication source.
### 2.1.3. Programmatic security
.Net uses the IsInRole method to determine whether a user belongs to a particular role. For example, to determine if a user has membership of a Broker role, the following could be used:
```
If User.IsInRole("Broker") Then
    ' Caller holds the Broker role: permit the requested function
Else
    ' Not a Broker: bounce back to the login page
End If
```
The classes responsible for the determination of principal identity are stored in the System.Security.Principal namespace.
### 2.2. J2EE security architecture
The J2EE security architecture is defined as part of the platform specification document. It details security management roles, and specifies goals of the security architecture, but does not specify security policy or implementation details (such as the use of a particular security technology to meet the described goals).
#### 2.2.1. Code management through the JVM and the class file verifier, the class loader and the Security Manager
Basic Java security employs the concept of a "sandbox" to limit the abilities of untrusted code to cause damage to the system on which it runs. Historically, an untrusted piece of code such as an applet would be disallowed from accessing local disk, opening network connections, etc. With the introduction of certificate support through the Java Plug-In, the origin and author of a signed applet could be established definitively, enabling fine-grained permissions to be assigned to individual applets based on the security policy. This means that applets are no longer confined to the default sandbox.
As described in the coming paragraphs, the sandbox is implemented through the JVM and its class verifier, but also through the class loader and the Security Manager/ACL manager.
#### 2.2.1.1. JVM security
The JVM provides a secure runtime environment by managing memory, providing isolation between executing components in different namespaces, array bounds checking, etc. The dynamic way in which the JVM allocates the various memory areas (method area, GC heap, thread stacks) means that it is almost impossible for a would-be attacker to determine which memory areas to attempt to insert malicious instructions into. Bounds checking on arrays prevents out-of-bounds memory accesses.
The JVM’s class file verifier examines classes for basic class file structure upon loading. While bytecodes produced by Sun’s compiler should be relatively free of errors (type errors, for example), it is possible for an attacker to manually create malicious bytecode.
The class file verifier examines each loaded class file in four passes; from the most basic check, where the physical attributes of the file are checked (size, magic number, length of attributes) to checking the constant pool to ensure that method and field references have correct attributes, to parsing the instructions for each method. Note that, by default, the only trusted classes are the base classes. All other classes, including those loaded from the application classpath are considered untrusted and must be verified.
### 2.2.1.2. The Class Loader architecture
There are two types of class loader – the primordial class loader, which is a part of the JVM, and class loader objects, which are used to load non-essential classes. There can only be one primordial class loader, which is used to bootstrap the JVM by loading the base classes, sometimes using native OS-dependent means. There can be many instances of class loader objects in the JVM, which can be instantiated on the fly, and used to load objects from sources such as the network, local disk, or data stores. Controlling the creation of class loader objects is therefore important due to the function of the class loader.
Class loaders are responsible for locating classes requested by the JVM for loading into the runtime environment. Part of their responsibility is to prevent unauthorised or untrusted code from replacing trusted code that makes up the base classes. As an example, a class loader should prevent the replacement of the Security Manager class. The attempted replacement of a base class by a maliciously coded class is referred to as class spoofing.
All class loaders, with the exception of the primordial class loader, are loaded by other class loaders, which become the loaded class loader’s parent. Thus, the loading of class loaders describes a tree structure with the primordial class loader at the root and classes as the leaf nodes (obviously class loaders are themselves classes).
If a class loader loads a class, all subsequent requests for related classes are directed through that class loader. For example, if a class loader “A” loads a class “Building”, and “Building” makes calls to methods in a class called “Cubicle”, the JVM will use class loader “A” to load “Cubicle” and any other classes that are referred to by “Building”.
Class loaders prevent class spoofing by passing requests back up the tree through their parent class loader until the class loader that loaded the requested class is reached. In the case of the Security Manager class, the primordial class loader is responsible, and consequently the malicious class described above will not be loaded. Classes are loaded only once, and the base classes are only loaded from the local disk using the system classpath.
Class loaders also provide security by managing namespaces. A class that is loaded by a particular class loader can only reference other classes in the same namespace, i.e., loaded by the same class loader or its parents.
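The delegation behaviour described above can be sketched with a custom class loader. The following is a minimal, hypothetical example (written against a modern JDK for brevity); the key point is that `loadClass()` consults the parent chain before `findClass()` is ever reached, so a class found on disk can never replace a base class.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical class loader object that loads classes from a local directory.
public class DiskClassLoader extends ClassLoader {
    private final Path baseDir;

    public DiskClassLoader(Path baseDir, ClassLoader parent) {
        super(parent); // loadClass() delegates up the parent chain, ending at the primordial loader
        this.baseDir = baseDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached when no parent could resolve the class, so base classes
        // such as the Security Manager cannot be spoofed from here.
        Path classFile = baseDir.resolve(name.replace('.', '/') + ".class");
        try {
            byte[] bytes = Files.readAllBytes(classFile);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```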
### 2.2.1.3. The Security Manager and Access Controller
The Security Manager is responsible for examining and implementing the security policy, which is specified by policy files. The security policy, as with .Net, determines the permissions that code has at runtime.
In more recent versions of Java, the decisions on whether to grant permissions based on security policy are delegated to the Access Controller. When a class makes a request for permissions, it is received by the Security Manager which passes the request to the Access Controller.
In a similar fashion to the way that .Net groups permissions into permission sets, which it then associates with code groups, permissions in the Java world are grouped into protection domains, associated with code sources. In other words, groups of permissions are associated with groups of classes, the classes being grouped by their origin.
Signed Java code is assigned permissions based on the system policy as applied to the code’s protection domain. Depending on the permissions associated with the source of the code, the applet may have full access to system resources, or may be restricted to a small subset.
Java applications, by default, have no associated security manager, and therefore have full access to system resources.
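A minimal sketch of what associating a security manager with an application looks like in code is shown below. The file path is hypothetical, and note that the security manager mechanism is deprecated and restricted in modern JDKs; on the JDK versions contemporary with this comparison the call simply activates the installed policy.

```java
import java.io.FileReader;
import java.io.IOException;

// Once a SecurityManager is installed, resource access is checked against the
// security policy, and a denied access results in a SecurityException.
public class SandboxDemo {
    public static void main(String[] args) {
        System.setSecurityManager(new SecurityManager()); // deprecated/restricted in modern JDKs
        try (FileReader r = new FileReader("some-protected-file.txt")) { // hypothetical path
            System.out.println("Read permitted by policy");
        } catch (SecurityException e) {
            System.out.println("Denied by policy: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("I/O error: " + e.getMessage());
        }
    }
}
```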
### 2.2.2. Platform Roles
The J2EE platform specification describes Organisational or Platform Roles that can be used to divide up responsibilities in a J2EE development and deployment cycle. The Roles described in the platform specification are Product Provider, Application Component Provider, Application Assembler, Deployer and Systems Administrator. These Roles are not absolutes – the responsibilities of the various roles could be divided differently to fit a company’s development and deployment methodologies.
Of these roles, most have clear security implications. The Product Provider and Application Component Provider roles are responsible for developing secure code, the Assembler is responsible for defining method permissions and security roles, the Deployer verifies the security view of the deployed application and assigns principals to roles, and the Systems Administrator administrates principals and ensures that the local security environment is correct for the J2EE platform.
### 2.2.3. Security Roles and the Deployment Descriptor
The deployment descriptor is an XML file that ships with each EJB¹, and describes declaratively many of the aspects of the EJB’s function and makeup, and its relationship with other beans.
One of the elements in the descriptor is the `<security-role-ref>` element. This element type is used by the bean developer to define all of the security roles used in the EJB code. Security role names are associated with links, which are then called elsewhere in the descriptor.
The `<security-role>` element is used to call roles described in the `<security-role-ref>` elements. For example:
```
<security-role-ref>
<role-name>root</role-name>
<role-link>super-user</role-link>
</security-role-ref>
```
```
<security-role>
<description>This is the security-role for the role “root”, defined above</description>
<role-name>super-user</role-name>
</security-role>
```
¹ Deployment descriptors can also be used with other Java components (e.g. Apache SOAP deployment descriptors) but every EJB is required to have a deployment descriptor.
The description field in the `<security-role>` descriptor element is optional.
Membership of a role confers a set of permissions for the duration of the role membership. Principals can be in several roles at the same time, e.g. employee and manager.
Method permissions are also described in the deployment descriptor.
### 2.2.4. Programmatic security
Role membership can be determined programmatically in the J2EE environment using the `isUserInRole` and `getUserPrincipal` methods of the `HttpServletRequest` object for the web container.
As part of the bean-container contract, the container provides the `EJBContext` object. The corresponding EJBContext methods are `isCallerInRole` and `getCallerPrincipal`. `getCallerPrincipal` returns the principal associated with the security context, while, predictably enough, `isCallerInRole` is a boolean method used to determine whether the caller belongs to a specified security role.
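As a hypothetical illustration of these calls (the bean and method names are invented, and the `javax.ejb` API is assumed to be on the classpath), the fragment below checks role membership before performing an operation. The string "root" is the `<role-name>` declared in the `<security-role-ref>` example above, which the container resolves to the linked "super-user" security role.

```java
import java.security.Principal;
import javax.ejb.SessionContext;

// Sketch of programmatic security inside a session bean.
public class TradeBean {

    private SessionContext ctx; // supplied by the container via setSessionContext()

    public void setSessionContext(SessionContext context) {
        this.ctx = context;
    }

    public void cancelTrade(long tradeId) {
        Principal caller = ctx.getCallerPrincipal();
        if (!ctx.isCallerInRole("root")) {
            throw new SecurityException(
                    "Caller " + caller.getName() + " may not cancel trades");
        }
        // ... perform the cancellation ...
    }
}
```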
### 2.3. Cryptography support
#### 2.3.1. .NET
The .Net base classes provide support for encryption, key generation, hashing and message authentication. Supported algorithms include DES, SHA, AES (Rijndael), RC2, etc.
The .Net Framework provides a number of tools for certificate management and manipulation. `makecert.exe` can be used to create X.509 certificates for testing purposes, `certmgr.exe` (Certificate Manager) is used to manage X.509 certificates, trust lists and revocation lists, and `secutil.exe` can be used to extract public key information for a certificate from an assembly.
#### 2.3.2. J2EE
JCE (Java Cryptography Extension) is a collection of packages that provides support for encryption, key exchange and MAC algorithms. JCE is an optional package for J2SDK 1.3, but has been integrated into v1.4. While the J2SDK includes some cryptography functions, the JCE separates out much of the functionality due to US Government export restrictions. The JCE uses the concept of CSPs (Cryptographic Service Providers) to plug in different encryption algorithm implementations.
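A minimal JCE sketch follows: generate an AES key, encrypt a message and decrypt it again. Provider (CSP) selection is left to the platform default, and the message text is illustrative only.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Symmetric encryption and decryption through the JCE APIs.
public class JceDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("Hello, JCE".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
    }
}
```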
Another set of add-on encryption packages for J2SDK 1.3 that is being integrated into v1.4 is JSSE, the Java Secure Sockets Extension, which provides SSL and TLS functionality to Java developers.
### 3. Conclusion
J2EE and .Net both provide quite comprehensive security services, though with a different focus. Authentication and authorisation services in .Net are provided through Microsoft OSes and identification repositories. J2EE, on the other hand, does not specify which methods or identification stores should be used to perform these functions, leaving these decisions up to vendors and developers. Although its use is not mandated, authentication and authorisation functionality is provided by Sun through JAAS (Java Authentication and Authorisation Service), which is based on PAM.
Both platforms use similar concepts for handling user and code access to resources, with permissions being key to both, and the concept of roles being used to associate permissions with principals in both environments.
While J2EE uses the concept of Organisational Roles to delineate responsibilities at various stages of the development and deployment process, .Net does not define the hierarchy as clearly.
While .Net provides a solid security model through managed code in the CLR, the ability to run unmanaged code confers the ability to bypass CLR security through direct calls to the underlying OS APIs. In the Java world, signed, trusted code has unrestricted access to system resources. Java calling out to native (C/C++) code through JNI can bypass the JRE’s security just as surely as running unmanaged code in the CLR can be used to bypass .Net’s security.
The reliance on Microsoft-specific identification stores such as Passport or Windows domains for authentication and authorisation purposes means that .Net applications may require end-users to subscribe to Microsoft’s Passport service. As an example of why this might be a “bad thing”, Microsoft had to temporarily shut down part of its Passport service during November 2001 due to a security issue. Current copyright restrictions prevent non-Microsoft vendors from providing alternatives, forcing .NET application developers to rely upon a single vendor for back-end services. Concern at this stance in the wider developer and Internet communities has resulted in projects like DotGNU and Project Liberty. DotGNU is an FSF (Free Software Foundation) initiative to provide an open-source implementation of the .Net framework, and a non-vendor controlled Passport alternative (Virtual Identities), which is intended to use encryption wherever possible to protect user details. Project Liberty is an alliance whose members include Sun, AOL, Vodafone, American Express, HP and RSA, to name but a few, with the stated aim of creating a federated single-sign-on scheme allowing authentication and authorisation from any type of network connected device.
Java has also had its share of security problems in the past, especially in the area of certain JREs and malicious applets; problems have also been reported in the class loading process, a fundamental part of JVM security, as described above.
One of the crucial challenges for both Microsoft and J2EE vendors when developing their respective platforms is securely handling code obtained from multiple sources, outside of the local machine. The code verification functions of the JVM are quite mature at this stage and mistakes made in the past have been learned from. The CLR model is similar, but the implementation is relatively untested. It will be interesting to see how the .Net environment stands the test of time from a security perspective, once .Net deployments become more widespread.
4. References
4.1. J2EE
Enterprise Security with EJB and CORBA. Hartman, Flinn, Beznosov
JDance – http://www.jdance.com
JGuru – http://www.jguru.com
JMIDDLEWARE – http://www.jmiddleware.com
JavaWorld – http://www.javaworld.com
Sun’s Java site – http://java.sun.com
The J2EE 1.3 Specification (7/27/01)– http://java.sun.com/j2ee/j2ee-1_3-fr-spec.pdf
The ServerSide – http://www.theserverside.com
4.2. .Net
4GuysFromRolla – http://www.4guysfromrolla.com
C-sharp corner – http://www.csharpcorner.com
DevX – http://www.devx.com
Gotdotnet – http://www.gotdotnet.com
MSDN library – http://msdn.microsoft.com/library
4.3. Other
Mono – http://www.go-mono.org
Project Liberty – www.projectliberty.org
CSE 101 Homework 2 Solutions
Winter 2015
This homework is due Friday January 30th at the start of class. Remember to justify your work even if the problem does not explicitly say so. Writing your solutions in \LaTeX is recommended though not required.
**Question 1** (Differing Priority Queue Implementations for Dijkstra, 20 points). Consider running Dijkstra’s algorithm on the following graphs, implementing the priority queue as either an array or as a binary heap. Which one is more efficient, and what is the final runtime?
(a) A graph where \( V \) is a \( \sqrt{n} \times \sqrt{n} \) grid and so that there are edges between any vertex and its four neighbors. [5 points]
(b) A complete graph on \( n \) vertices (there is an edge between every pair of vertices). [5 points]
(c) A graph where \( V \) is a \( \sqrt{n} \times \sqrt{n} \) grid and there are edges between any pair of vertices in the same row or column. For this part, consider implementation of a priority queue using a \( d \)-ary heap for various values of \( d \). Which value of \( d \) gives the best runtime, and what is that runtime? [10 points]
**Solution 1.** In this problem we will investigate the efficiency of different priority queues depending on the graph. The runtime of Dijkstra’s algorithm for an array implementation is \( O(|V|^2) \), while that for a binary heap is \( O((|V| + |E|) \log |V|) \).
(a) Each vertex in this graph has a degree of at most 4. Thus, the total number of edges in the graph is at most \( 2n \).
\[ |V| = n, \quad |E| = O(n) \]
(i) **Array**: The runtime for this implementation would be \( O(|V|^2) = O(n^2) \)
(ii) **Binary Heap**: The runtime for this implementation would be \( O((|V| + |E|) \log |V|) = O(n \log(n)) \)
Thus, a binary heap implementation would be more efficient.
(b) In a complete graph, each vertex has a degree of \( n - 1 \). Thus, the total number of edges is \( |E| = \frac{n(n-1)}{2} \).
\[ |V| = n, \quad |E| = O(n^2) \]
(i) **Array**: The runtime for this implementation would be \( O(|V|^2) = O(n^2) \)
(ii) **Binary Heap**: The runtime for this implementation would be \( O((|V| + |E|) \log |V|) = O(n^2 \log(n)) \)
Thus, an array implementation would be more efficient.
(c) Each vertex is adjacent to every other vertex in the same row or column. Thus, the degree of each vertex is \( 2\sqrt{n} - 2 \). Thus, the total number of edges in the graph is \( n(\sqrt{n} - 1) \).
\[ |V| = n, \quad |E| = O(n^{3/2}) \]
(i) **Array**: The runtime for this implementation would be \( O(|V|^2) = O(n^2) \)
(ii) **d-ary Heap**: The runtime for each operation when using a \( d \)-ary heap is as follows:
Insert, decreasing priority, increasing priority: \( O(\log(n)/\log(d)) \)
Extract min: \( O(d \log(n)/\log(d)) \)
In Dijkstra’s algorithm, we perform the extract min operation at most \( n \) times. We do the insert/decrease key operation only when we find edges which cause relaxation. In the worst case,
every edge may cause a relaxation and hence we have to do at most $O(|E|)$ insert/decrease key operations.
Thus, we get a runtime of $O(nd\log(n)/\log(d) + n^{3/2}\log(n)/\log(d))$
Clearly, it is not possible to get a runtime better than $O(n^{3/2})$ because of the second term and because $d < n$. When $d = n^{1/2}$, we get a runtime of $O(n^{3/2}\log(n)/\log(n^{1/2}) + n^{3/2}\log(n)/\log(n^{1/2})) = O(n^{3/2})$.
Thus, a d-ary heap implementation with $d = n^{1/2}$ would be more efficient.
**Question 2** (Shortest Paths to Nearby Vertices, 30 points). Find an algorithm to do the following: Given a graph $G$ with positive integer edge weights, a vertex $s$ and a positive integer $L$ find the set of vertices within distance $L$ of $s$. Your algorithm should run in time $O(|V| + |E| + L)$. Hint: Modify Dijkstra’s algorithm to use an array whose $i^{th}$ entry holds a list of all vertices at distance $i$ for each $0 \leq i \leq L$. [Algorithm 10 points, Analysis 10 points, Runtime analysis 10 points]
**Solution 2.** The bottleneck in the array implementation of Dijkstra’s algorithm is finding the element with the minimum key. Therefore, to achieve linear time we avoid “searching” for the minimum element at every round.
To compute the set of vertices within distance $L$ of $s$, we maintain an array $\text{dist\_from\_s}$ where $\text{dist\_from\_s}[i]$ is a linked list containing vertices at distance $i$ from $s$. Our array will be of size $L + 1$ since we are only interested in distances less than or equal to $L$. We will have temporary distance labels initially on all the vertices of the graph, corresponding to the “unseen” portion of the graph. As vertices are added to the “seen” portion, their labels will be made permanent. Therefore, we actually maintain two copies of this array: $\text{dist\_from\_s\_TEMP}$ for the unseen portion and $\text{dist\_from\_s}$ for the seen portion.
The key to this implementation is that the as-yet-unseen minimum distances are monotone increasing, since any path to a vertex $u$ must be at least as long as the paths to all vertices preceding $u$, and these are necessarily discovered first. Thus, rather than computing the minimum distance at every iteration of the algorithm, we simply maintain a pointer that walks along the $\text{dist\_from\_s\_TEMP}$ array. Below is the full algorithm:
**Algorithm NearbyVertices(G, s, L):**
// input: a graph G, a vertex s, a positive integer distance L
// output: a list of vertices within distance L of s
initialize two arrays of empty lists, dist_from_s and dist_from_s_TEMP, each of size L+1
initialize array dist of size n with every entry set to infinity
dist[s] = 0
add s to dist_from_s_TEMP[0]
curr_min = 0 // pointer to the current minimum distance
while curr_min < L+1:
    while dist_from_s_TEMP[curr_min] is not empty:
        u = first vertex in dist_from_s_TEMP[curr_min]
        remove u from dist_from_s_TEMP[curr_min]
        // update the distances of u's neighbours
        for w adjacent to u:
            if dist[w] > dist[u] + l(u,w):
                if w is in some list of dist_from_s_TEMP:
                    remove w from that list
                dist[w] = dist[u] + l(u,w)
                if dist[w] < L+1:
                    add w to dist_from_s_TEMP[dist[w]]
        // add u to the seen portion of the graph
        add u to dist_from_s[dist[u]]
    curr_min += 1
return all vertices in dist_from_s
We show the algorithm correctly computes minimum distances by analogy to Dijkstra. Namely, the array $\text{dist\_from\_s\_TEMP}$ acts as a priority queue for as-yet-unseen vertices in the graph. This is true because any unseen vertices cannot have their distances updated to be less than the value of $\text{curr\_min}$, since edge lengths are all positive. Therefore, after one round of updates, the minimum distance will be the index of the first non-empty array element in $\text{dist\_from\_s\_TEMP}$.
The initialization of $\text{dist\_from\_s}$ and $\text{dist\_from\_s\_TEMP}$ each takes time $O(L)$, and the initialization of $\text{dist}$ is $O(|V|)$. As discussed above, finding minimum elements involves one linear scan of the array, taking time $O(L)$, and updating distances takes a total of $O(|E|)$. Therefore the total running time is $O(|V|+|E|+L)$, as desired.
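The following is a runnable sketch of the same idea (class and variable names are illustrative, not part of the assignment). Instead of physically removing a vertex from its old bucket when its distance improves, the sketch uses lazy deletion: a stale bucket entry is recognised and skipped because its recorded distance no longer matches the bucket index. The running time stays $O(|V|+|E|+L)$, since each relaxation inserts at most one entry.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class NearbyVertices {

    // adj.get(u) holds {v, length} pairs for each edge (u, v); edge lengths are
    // positive integers. Returns the vertices within distance L of s.
    public static List<Integer> nearby(List<List<int[]>> adj, int s, int L) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;

        // buckets.get(i) plays the role of dist_from_s_TEMP[i]: unfinished
        // vertices whose current tentative distance is i
        List<Deque<Integer>> buckets = new ArrayList<>();
        for (int i = 0; i <= L; i++) buckets.add(new ArrayDeque<>());
        buckets.get(0).add(s);

        List<Integer> result = new ArrayList<>();
        for (int d = 0; d <= L; d++) {          // curr_min walks the buckets once
            Deque<Integer> bucket = buckets.get(d);
            while (!bucket.isEmpty()) {
                int u = bucket.poll();
                if (dist[u] != d) continue;     // stale entry: u was improved later
                result.add(u);
                for (int[] e : adj.get(u)) {
                    int v = e[0], len = e[1];
                    if (dist[u] + len < dist[v]) {
                        dist[v] = dist[u] + len;
                        if (dist[v] <= L) buckets.get(dist[v]).add(v);
                    }
                }
            }
        }
        return result;
    }
}
```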
Question 3 (Commodity Trading, 50 points). Charlene is a commodities trader. She trades in \( n \) different types of goods. She knows \( m \) other merchants, each of which are willing to trade one specific good for one other good at a specified exchange rate. Say that the \( i \)th merchant is willing to trade one unit of good \( g_i \) for \( r_i \) units of good \( g'_i \).
(a) Given two specific goods, find an efficient algorithm by which Charlene can find a sequence of trades to exchange the first type for the second at the most favorable possible rate. Hint: Adapt Bellman-Ford. [Algorithm 5 points, Analysis 5 points]
(b) In some circumstances, it might be possible for Charlene to make some sequence of trades and eventually end up with strictly more of a good than she started with. Give an algorithm to determine whether or not this is possible. [Algorithm 5 points, Analysis 5 points]
(c) One way in which you might be able to show that the above is impossible is if there is a way to assign prices to every good in such a way that no merchant allows you to trade some collection of goods for a more valuable collection. Find a mathematical formulation of this condition [5 points] and show that it implies that it rules out sequences of trades that would allow Charlene to end up with more than she started with. [10 points]
(d) In fact, if the situation discussed in part (b) is impossible, it is always possible to assign prices as described in part (c). Show that this is the case. You may assume that given any two goods, there is some sequence of trades that allows you to exchange one for the other. Hint: pick some particular good \( g \) and find the best ways of exchanging it for each other type of good. Set prices so that all of these trades are cost-neutral. [15 points]
Solution 3 (Commodity Trading).
(a) We first abstract the market as a graph. Each commodity is a node, and each merchant is a directed edge \((u, v)\), where \( u \) is the commodity that the merchant wants to buy, and \( v \) is the commodity they're selling. The weight of edge \((u, v)\) represents the amount of \( u \) they charge for one unit of \( v \).
With this abstraction, we can adapt the Bellman-Ford algorithm so that each commodity keeps track of the most favorable rate at which it can be exchanged with all of the other commodities. Each commodity initially knows its exchange rates with its neighbors, and a rate of 1 with itself. On each round of the algorithm, each commodity updates its knowledge by making use of the knowledge of its reachable neighbors. Consider edge \((u, v)\) with rate \( r_{u,v} \) and commodity \( w \). Suppose \( v \) knows that it can be exchanged with \( w \) at rate \( r_{v,w} \). Then \( u \) knows it can be exchanged with \( w \) at rate \( r_{u,w} = r_{u,v} \times r_{v,w} \). In this way, commodity \( u \) updates its most favorable (smallest) known rate at which it can be exchanged with all other commodities through its neighbors.
The only modification necessary to the Bellman-Ford algorithm is the method by which each node updates its most favorable rate. Instead of adding the edge weight to the neighbor’s value, the two are multiplied, since this operation reflects the multiplicative nature of composite trading rates. Another approach is to set the edge weight to the logarithm of the merchant’s exchange rate, in which case no modification of BF is necessary at all, since multiplying the rates is the same as adding their logarithms:
\[
\log(r_x \cdot r_y) = \log(r_x) + \log(r_y)
\]
The number of multiplications (or additions) and comparisons does not change — $O(n \cdot m)$. Note that since the logarithms of the rates can be negative (or equivalently because some rates can be less than 1), we need to use Bellman-Ford rather than Dijkstra’s algorithm.
(b) We are looking for a commodity which finds a rate of exchange with itself which is less than 1; $r_{u,u} < 1$. In the logarithm-edge-weight formulation, this is exactly the same as finding a negative cycle in a graph; $r_{u,u} < 1 \Rightarrow \log(r_{u,u}) < 0$. The Bellman-Ford algorithm knows that such a cycle exists if an update still occurs on the $n$-th round, at which point we can look at each commodity and find out which one has found a favorable rate of exchange with itself. Again, the runtime for this algorithm is the same as BF, $O(n \cdot m)$.
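A runnable sketch of this detection step is shown below (names are illustrative, and it only reports whether a profitable cycle exists rather than which commodity closes it). Edge weights are the logarithms of the rates from part (a), i.e. units of the source good paid per unit of the target good; initialising every label to 0 plays the role of a virtual source connected to every good, so reachability is not an issue.

```java
import java.util.Arrays;

public class ArbitrageCheck {

    // n goods; merchants[i] = {u, v} and rates[i] = units of good u paid per
    // unit of good v. Returns true iff some cycle of trades has product of
    // rates below 1 (a negative cycle in log-space).
    public static boolean profitableCycleExists(int n, int[][] merchants, double[] rates) {
        double[] best = new double[n];
        Arrays.fill(best, 0.0); // virtual source with a free edge to every good

        for (int round = 1; round <= n; round++) {
            boolean changed = false;
            for (int i = 0; i < merchants.length; i++) {
                int u = merchants[i][0], v = merchants[i][1];
                double w = Math.log(rates[i]);   // log-weights turn products into sums
                if (best[u] + w < best[v]) {
                    best[v] = best[u] + w;
                    changed = true;
                    if (round == n) return true; // relaxation on the n-th round: negative cycle
                }
            }
            if (!changed) return false;          // converged early: no profitable cycle
        }
        return false;
    }
}
```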
(c) Each merchant wants to trade with Charlene so that the monetary value of their goods after the trade is greater than or equal to their value before the trade:
$$m_2 \geq m_1$$
Suppose merchant $i$ is selling $v$ for $u$. He sells $x$ units of $v$ for $r_i \cdot x$ units of $u$. The monetary value of $x$ units of a commodity $y$ is $x \cdot p_y$, so the above condition becomes:
$$m_2 = r_i \cdot x \cdot p_u \geq x \cdot p_v = m_1$$
Dividing out $x$ and rearranging to solve for $r_i$, we have:
$$r_i \geq \frac{p_v}{p_u}$$
Let us examine the implications of this. Suppose Charlene trades with one merchant between goods $u$ and $v$ at rate $r_1$, and then with another merchant between goods $v$ and $w$ at rate $r_2$. Then the composite rate of the exchange is $r_c = r_1 \cdot r_2$. We observe the following result:
$$r_1 \geq \frac{p_v}{p_u} \text{ and } r_2 \geq \frac{p_w}{p_v} \Rightarrow r_c = r_1 \cdot r_2 \geq \frac{p_v}{p_u} \cdot \frac{p_w}{p_v} = \frac{p_w}{p_u}$$
Thus, assuming all merchants trade to their advantage, the composite rate of exchange between two goods along any path must be at least the ratio of their prices.
Thus, if Charlene makes any series of exchanges between good $g$ and itself, we get the following result:
$$r_c \geq \frac{p_g}{p_g} = 1$$
Thus, if there exist prices $p_1, p_2, \ldots, p_n$ for each commodity such that, for all merchants $i$ who trade commodities $(u_i, v_i)$ at rate $r_i$, $r_i \geq \frac{p_{v_i}}{p_{u_i}}$, then the best possible exchange rate between any good $g$ and itself, $r_{g,g}$, must be at least 1.
(d) Choose one good arbitrarily, $c$, as the “currency” for the market, such that one unit of that good is worth 1 milli-bitcoin (mBTC). One can then set the price of all goods in the rest of the market relative to the currency. For instance, to assign a price to good $g$, find an exchange path from $c$ to $g$; find the composite rate, $r_{c,g}$ of exchange between the currency and the other good along that path (that is, the product of the rates on each edge traversed, or the sum of their logarithms); and set the price $p_g = r_{c,g} \cdot p_c$.
Obviously, it is possible to find multiple paths from $c$ to $g$, or from any two goods. Suppose one path has composite rate $r_1$, and another has rate $r_2 > r_1$. If we follow one path, then we set $p_g = r_1 \cdot p_c$, and by the other path we set $p_g = r_2 \cdot p_c$. It turns out that in order for all merchants to trade to their benefit, we need to set $p_g$ according to the smallest possible rate $r_{c,g}$. To see this, suppose that we instead set the price according to the larger rate. Then we have:
$$p_g = r_2 \cdot p_c$$
Rearranging the equation, and taking into consideration the earlier supposition that \( r_1 < r_2 \), we have
\[
r_1 < r_2 = \frac{p_g}{p_c}
\]
So if we set prices according to an exchange rate which is NOT the smallest, then there is at least one merchant that trades to their own disadvantage. If on the other hand we set prices according to the smallest possible exchange rate, then we have the opposite situation:
\[
r_2 > r_1 = \frac{p_g}{p_c}
\]
which is acceptable for all merchants.
Thus, our price setting algorithm is as follows: pick a currency \( c \) with arbitrary price \( p_c \); calculate the smallest possible exchange rate between the currency \( c \) and each other good \( g \), \( r^{\min}_{c,g} \), by adapting the BF algorithm as we did in part (a) (this will be possible since, by assumption, the situation in part (b) is impossible); and set the price of good \( g \) to \( p_g = r^{\min}_{c,g} \cdot p_c \).
We need to verify that this is an appropriate pricing scheme. Given the condition from part (c), this means that we need to show that for each merchant willing to trade \( g \) for \( g' \) at rate \( r \) that
\[
r \geq \frac{p_g}{p_{g'}} = \frac{r^{\min}_{c,g}}{r^{\min}_{c,g'}}
\]
This must be the case, though, since Charlene has a sequence of trades getting her from \( c \) to \( g \) by first making the optimal sequence of trades to turn \( c \) into \( g' \) and then using this merchant to exchange \( g' \) for \( g \). This gives an exchange rate of \( r \cdot r^{\min}_{c,g'} \), which therefore must be at least \( r^{\min}_{c,g} \).
Visual Interfaces for Model Mapping – Large Mapping Visualization
António Painha, Hugo Manguinhas, José Borbinha
INESC-ID, Rua Alves Redol 9, Apartado 13069, 1000-029 Lisboa, Portugal
IST – Department of Information Science and Engineering, Instituto Superior Técnico, Lisbon Technical University, Portugal
{antonio.h.painha, hugo.manguinhas, jlb}@ist.utl.pt
Abstract. The mapping of information models has become an important part of the data integration and interoperability research areas, which aim to enable the reuse of data (i.e. information) in different contexts, and often resort to visual mapping tools as the simplest and fastest way to define the needed relationships between two different models and create the desired mapping model. However, the approaches currently adopted by visual mapping tools to visualize the mappings between two models have difficulty working properly when the models or the mappings between them become large. This work therefore proposes seven new or improved visualization techniques (called interface paradigms) to address this problem, which aim to simplify the view, making it possible for the user to effectively deal with much larger models and maps. This approach was integrated into the XMApper prototype, a visual mapping tool. A user study was conducted to evaluate the proposed interface paradigms, and its results led to the conclusion that most of the new paradigms are very useful and effective when applied to the interface of a visual mapping tool. The primary contribution of this work is a demonstration of new ways to effectively present highly complex mapping information.
Keywords: Visual Mapping Tools; Visualization Techniques; Visualization of Complex Information; Visual Interfaces; XMApper.
1 Introduction
Currently, information modeling can be used to describe rather extensive domains, thus requiring proportionally extensive information models to describe them. All of this carries over to the process of mapping between models, where, for complex models and mappings, defining a mapping model is very difficult [1]. One current, well-received solution to this problem is visual mapping tools, which are visual modeling tools with the purpose of making it easy for a designer to establish mappings [2]. An existing problem with such applications is that they are unable to cope well with the growth in size and complexity of both the models and the mappings that can be
established between them. Figure 1 is an example of such a failure. In this MapForce example, details of interest are lost in a maze of complexity.
Figure 1. A large mapping in MapForce, where details of interest are lost in a maze of complexity (figure not reproduced).
The motivation for this work comes from our efforts to develop a Metadata Registry (MDR), an information system with a web-based interface, designed to store and maintain in a controlled environment the range of information models used within an organization, and how these models relate to others [3]. As a result, the MDR promotes a common understanding of the information managed within an organization and assists organizations in sharing and exchanging mutually agreed information. To achieve this, the MDR requires a visual mapping tool with a web interface that can cope with the diversity and complexity of each domain. This tool is a prototype under development, named ‘XMApper’.
This work proposes five new visualization techniques, or interface paradigms, and two improvements over existing ones, to address the issues of large mapping visualization. This paper starts with a brief overview of the related work, followed by the description of the new interface paradigms and, after that, a report on the user study that verifies their usefulness, usability and effectiveness.
## 2 Related Work
Visual mapping tools are software applications which can be used to visually establish mappings between models, defined within a schema (usually in XML¹). In such programs, mappings are specified by allowing the users to make connections between visual representations of source and target schemas’ entities. The associations can range from very simple, such as a direct line between two elements, to more complex, such as an association of two or more elements through a graphical box that denotes a function application. In the scope of this work, several visual mapping tools were studied. MapForce² is a commercial tool, developed by Altova, allowing the definition of mappings between many formats, like XML or EDI, using a visual design interface; Microsoft’s BizTalk Mapper³ – a module included in the BizTalk Server bundle – is another visual editor that allows mapping between XML schemas,
---
¹ [http://www.w3.org/XML/](http://www.w3.org/XML/)
and has had a lot of recent research and development dedicated to its interface [1]. The XML Mapper\(^4\) – a part of Stylus Studio’s XML IDE – is a many-to-one visual schema mapper dedicated to an XML environment; Clio\(^5\) is an IBM prototype system for managing and facilitating data transformations and integrations, with a particular focus on the semi-automatic definition of mappings, and includes a graphical interface, the Clio Schema Viewer, to allow user corrections and information viewing; and finally, COMA++\(^6\) is another research prototype – developed at the University of Leipzig – that aims to study automatic schema and ontology matching, and possesses a graphical interface, which allows the user to view and influence the matching process in many ways. The interfaces of all these tools have some common parts: the Schema View, which represents the models usually through a hierarchical tree; the Mapping Board, which is where all the mappings are made and visually represented through mapping links and function boxes; and the function Toolbox, where all the available mapping functions are stored. Therefore, the interface paradigms (representation techniques that are applied to an application’s interface with the goal of displaying information with unusual characteristics, which create representational problems for the interface) for the representation of large mappings (this work’s problem) will be separated into four different areas: Mapping Board Improvement, Schema View Improvement, Navigation and General Improvement.
A study was conducted to list the interface paradigms present in the five visual mapping tools introduced above, revealing many unique ones and others that are present in more than one application. This study drew from various sources (more precisely [1] [4] [5] [6] [7] [8] [9] [10]) and from its analysis came some thoughts and ideas, like the dominance of paradigms dedicated to Mapping Board Improvement and the value of some navigation techniques present in many of the applications. Also, many areas remain problematic. On the Mapping Board, correct identification of connections, without selecting them, remains the biggest issue, while in the Schema Views the main problems are finding and viewing specific information about the models and relevant information about the mappings.
Finally, to create the new interface, where any new paradigms developed in this work shall be tested, a search was conducted to find the right JavaScript framework for the job. Many solutions were analyzed and compared: Prototype + Scripty2, MooTools, jQuery, Ext JS, Dojo, GWT and Vaadin. GWT (Google Web Toolkit) was the chosen option because, although it demands the investment of more time (due to a steeper learning curve), it allows for a more robust, versatile and easier-to-debug solution.
### 3 Proposed Interface Paradigms
The study of the problem and the analysis of current visual mapping tools interfaces, led to the idealization of seven new interface paradigms, divided between Mapping
---
\(^4\) http://www.stylusstudio.com/xml_mapper.html
\(^5\) http://www.almaden.ibm.com/cs/projects/criollo/
\(^6\) http://dbs.uni-leipzig.de/Research/coma.html
Board Improvement and Schema View Improvement. Paradigms [SP1.1] and [SP2.2] are improvements on already existing ones, while the other five are new.
3.1 Mapping Board Improvement
[SolutionParadigm1.1] Connection Visibility – Current iterations of mappers are starting to implement paradigms that aim to clean the view of the Mapping Board by manipulating the visibility of the shown connections with user-toggled functions. This work’s approach is to implement a similar paradigm as the mapper’s natural behavior. The idea is to give the user the impression that if both ends of a connection aren’t visible then that connection doesn’t matter at the time (it is hidden), and if only one end is visible then the connection appears out of focus (greyed-out and dashed), just to inform the user where the other end can be found (by following the line). The definition of visibility status for a schema element has also been improved: an element isn’t visible not only when it is scrolled out of the view, but also when its parent element is collapsed. This is important because it means that connection cluttering will be reduced even further, by removing focus from connections that go to elements hidden by collapsed parents (but with the other end of the connection still visible) and even hiding them completely (if the other end is also hidden or scrolled out of view).
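As a rough illustration (not taken from the XMApper code base; the types and method names are hypothetical), the visibility rule just described can be expressed as follows, written in Java since the prototype is built on GWT:

```java
// A connection is hidden when both ends are invisible, rendered out of focus
// (greyed-out, dashed) when exactly one end is visible, and normal otherwise.
enum ConnectionState { HIDDEN, OUT_OF_FOCUS, NORMAL }

// Hypothetical schema element abstraction assumed by the sketch.
interface SchemaElement {
    boolean isInViewport();
    boolean isCollapsed();
    SchemaElement getParent();
}

class ConnectionVisibility {

    // An element counts as visible only if it is inside the viewport
    // and none of its ancestors is collapsed.
    static boolean isVisible(SchemaElement e) {
        if (!e.isInViewport()) return false;
        for (SchemaElement p = e.getParent(); p != null; p = p.getParent()) {
            if (p.isCollapsed()) return false;
        }
        return true;
    }

    static ConnectionState stateOf(SchemaElement source, SchemaElement target) {
        boolean s = isVisible(source), t = isVisible(target);
        if (s && t) return ConnectionState.NORMAL;
        if (s || t) return ConnectionState.OUT_OF_FOCUS;
        return ConnectionState.HIDDEN;
    }
}
```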
[SP1.2] Connection Render – A mapping overloaded with connections will often lead to user confusion when trying to identify overlapping connections, or even connections overlapped by function boxes (does it come from that function box? Or does it have another source and is just crossing the function box’s space?). Techniques like bendable links have tried to address this problem, but still with limited success. This work proposes to allow the user to change how the connection is rendered at any time, i.e. if he’s having problems comprehending links with the current connection shape he shall be able to change it on-the-fly and try a different one. For now, two complementary approaches shall be tested: a rectilinear connection render, which will plot a path from A to B with only horizontal and vertical lines; and a direct connection render, which will go directly from A to B. The first method is cleaner, but may cause confusion with overlapping connections, while the second might be messier, but simpler for identifying (in most cases) the connections’ targets.
[SP1.3] Connection Argument Visibility – Minimizing the space occupied by the mapping functions’ visual representation (usually a small box) is an effective way of simplifying the view of a Mapping Board (as can be seen in BizTalk Mapper and Stylus Studio XML Mapper), but doing so often makes it impossible or impractical to have the function inputs and output shown directly on that representation. A possible way to address this dilemma, inspired by what MapForce does with its connection annotations, is to show the inputs and output of the function boxes near the docking point where the connections meet them, and call them simply connection arguments. This way the connection arguments will only show when needed, i.e. when a connection is effectively using them, instead of being always shown in the Mapping Board. Although being a nice and clean method in the early stages of a mapping, this technique can also originate much cluttering of the Mapping Board in later, more heavily connected stages.
3.2 Schema View Improvement
[SP2.1] Quick Filters – Inspired by the thought of joining the paradigms of information toggling and information suppression, like sibling coalescence, quick filtering is a simple way to address the problems that come from a Schema View with a high number of elements. The idea is to have quick access, through toggles, to some filters that allow the user to isolate important elements or show views of interest. One of these filters will be the ability to hide all unconnected elements, so it’s possible to easily know what the current state of the mapping is, and another the inverse, i.e. show only unconnected elements, with the objective of knowing what isn’t mapped. Furthermore, other filters like “hide all but the user selection” would certainly be worth investigating.
[SP2.2] Tree Element Connection Status – Having visual cues of the current connection status of the tree schema elements in the tree element representation itself can be a precious aid to navigate and assess the current state of a mapping. Currently, some mappers (like BizTalk’s) already use a solid line and a small icon in the tree element to represent whether that element is connected or not, but this technique can be expanded, as well as its purposes. The connection line in the element can be further customized in terms of color and shape (e.g. solid or dashed) to transmit more information to the user (like nested elements’ connections or connection visibility) without needing extra space. Furthermore, a small underscore under an element’s text will also transmit more information about an element’s (or its children’s) connection status.
[SP2.3] Sorting – Doing the right sorting of the tree elements can be another way to quickly find what you are looking for in a heavily populated tree. Two types of sorting seem to be relevant when trying to find a specific tree element: Alphabetical and by Connection Status. The simple alphabetical sorting is a valid option, being a common and intuitive method that users are used to dealing with, which can be employed to find elements by their name. On the other hand, there is the more mapping-specific Connection Status ordering, which, as the name says, consists of ordering the tree elements by their number of connections, isolating (in the same tree level) unconnected elements from the connected ones without having to remove one of the groups. The main drawback of sorting the tree elements is changing the ‘natural’ order of the schemas, which could cause some disorientation issues.
[SP2.4] Connection Statistics – Statistics present a different view of what is first perceived in a situation. Showing schemas’ connection statistics, like the ratio of connected and unconnected elements of a schema, can be helpful for a mapping user to know aspects such as the completion of the current mapping model. Also, those statistics could be used to directly navigate or change views in trees or in the Mapping Board. The idea of this paradigm is simply to convert the statistical data into charts and have the different portions of each chart hyperlinked to the action to be applied to the elements of the mapping (example: clicking the “connected” portion of a pie chart showing the ratio of connected schema elements would filter all the connected schema elements in the mapper). This way the user will not only be informed about the mapping, but will also have a quick way to navigate it.
4 User Study
As the developed solution aimed to change the users’ experience while using the XMApper’s interface, a user study was conducted to evaluate its usability and usefulness. To this end, different versions of the standalone prototype were built. For all versions, even the baseline one, the [SP2.2] Tree Element Connection Status was active due to its simple nature and valuable help in defining the experiment’s tasks. In total there were four versions of the XMApper prototype in this study: Version A is the baseline version, with no new interface paradigms (besides [SP2.2]). Version B is Version A plus the Mapping Board Improvement paradigms. Version C is Version B plus the Schema View Improvement paradigms, except the [SP2.4] Connection Statistics. Version D is the final version, with all of the Version C paradigms plus the [SP2.4].
The [SP2.4] paradigm’s synergy with the other paradigms led to the choice of creating a final version to collect data about it specifically. This version was tested separately, with a specific task set, and compared only to Version C.
So, the main study consisted of all participants experimenting with the first three versions of the interface, with a small study of Version D conducted right after every test of Version C. To control for order effects, the order in which participants experienced each of the three main versions of the mapper was counterbalanced using a Latin Square design (http://mathworld.wolfram.com/LatinSquare.html). After all the prototype tests were done, each participant completed a background and satisfaction survey.
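As an illustration of this kind of counterbalancing (not part of the original study materials), a cyclic Latin Square over the three main versions can be generated as follows; the function name and the assignment of participants to rows are purely hypothetical.

```python
# Purely illustrative sketch: build a cyclic Latin Square over the interface
# versions and cycle participants through its rows, so each version appears
# in each position equally often.

def latin_square_orders(conditions):
    """Row i is the condition list rotated left by i positions."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    orders = latin_square_orders(["A", "B", "C"])
    for participant in range(6):
        print(participant + 1, orders[participant % 3])
```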
Participants
The data for this study was collected from a population of 10 male computer users. From the analysis of the background questionnaire, the participants had an average age of 25 years (ranging from 24 to 26), an average of 16 years of computer experience, and 70% had a degree in Multimedia and/or Information Systems. Although most participants didn’t have previous experience with the XMApper due to its recent nature, 60% had experience using visual mapping tools.
Tasks
For the main study, four task sets with four tasks each were devised, involving finding elements and information in the source and target schema, their related function boxes and connections. For the statistics study, two task sets of two tasks each were created, involving finding and counting schema elements. An effort was made to keep the task sets isomorphic so that the participants experienced similar tasks as they viewed each version of the interface. To ensure that no one task set was accidentally more difficult than the rest, however, they were rotated through visualizations. Two of the task sets used in the main study are shown in Table 1 as an example. The two task sets used for the statistics study are shown in Table 2. All sessions were conducted online (the four different versions of the prototype were online) in video conference, with a single participant at any one time, and lasted on average 50 minutes. Because the tests were performed online, the Properties Panel on the prototype’s interface was disabled, so that the users couldn’t name a related element before they navigated to it.
<table>
<thead>
<tr>
<th>Task Set A</th>
<th>Task Set B</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Go to the first “NOT” function box (from above) and find out what it’s connected to in both the source and target schemas.</td>
<td>1. Go to the first “AND” function box (from below) and find out what it’s connected to in both the source and target schemas.</td>
</tr>
<tr>
<td>2. Go to the last connected element of the target schema and find out what it’s connected to in the source schema.</td>
<td>2. Go to the first connected element of the source schema and find out what it’s connected to in the target schema.</td>
</tr>
<tr>
<td>3. Go to the second connected element of the source schema and find out the name of the function box’s argument it is connected to.</td>
<td>3. Go to the second to last connected element in the target schema and find out the name of the function box’s argument it is connected to.</td>
</tr>
<tr>
<td>4. Determine which root element from the target schema has fewer connections.</td>
<td>4. Determine which element from the source schema has more connections.</td>
</tr>
</tbody>
</table>
Table 1. Two of the task sets used in the main study.
<table>
<thead>
<tr>
<th>Extra Task Set A</th>
<th>Extra Task Set B</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. Determine how many connected elements the source schema has.</td>
<td>1. Determine how many connected elements the target schema has.</td>
</tr>
<tr>
<td>2. Tell me, which is the first connected element in the target schema.</td>
<td>2. Tell me, which is the last connected element in the target schema.</td>
</tr>
</tbody>
</table>
Table 2. The two task sets used in the statistics study.
Test Map
The map that was used in the experiment is shown in Figure 2. The schemas used were MODS 3.0 (source) and ESE 3.3 (target). Although the used mappings were fictional and specially created for the tests, special care was taken to keep them valid, respecting the data types. The aspect ratio of the window used for the study, as shown in the Figure 2 example, was chosen so that the maximum visible height of the Schema View was equal to the Mapping Board height. In this prototype, Mapping Board scrolling wasn’t implemented. This reduced Mapping Board space and the limited window size helped simulate the behavior expected with a more traditional aspect ratio on larger schemas and a larger map.
4.1 Results and Discussion
Task times
The post-hoc analysis of all obtained task times allowed the plotting of the chart shown in Figure 3, representing the average task times by version and their standard deviation (σ). The average task time for the prototype Version A (base version plus [SP2.2]) was significantly higher than that of Version C (all paradigms), being 28.4 (σ=3.2) and 17.9 (σ=4.0) seconds respectively. The prototype Version B (baseline plus [SP1.1], [SP1.2] and [SP1.3]) with an average task time of 22.1 seconds (σ=3.2), stood in the middle of the other two. These results are shown in Figure 3.
As for the statistics study, task 1 was devised to study the impact of having a quick access to item count and task 2 was devised to validate the idea of having interactive charts. While the time for the first task was reduced to less than a third of the original, showing the importance of having a profile with general information of the schemas, the second task results were suboptimal and, in fact, having interactive charts proved to be slower in some occasions and many users didn’t even use them.
Figure 4. Average times by prototype version and task.
Figure 5. Average task times for prototype version without Statistics (Version C) and with Statistics (Version D) for the two studied types of task.
Satisfaction Data
A user satisfaction questionnaire was completed by the participants at the end of the session. To improve the methodological rigor, some statements were phrased in a favorable way toward the prototypes tested and some were phrased in a negative manner. Responses were collected using a 7-point Likert scale with 1 = Very Low and 7 = Very High [11]. In order to improve readability, questions which required a lower response to reflect a positive satisfaction were flipped prior to analysis (e.g. if the user rated a question with 1, meaning the highest possible value, it was flipped to 7). A post-hoc analysis of the satisfaction ratings allowed the plotting of the chart shown in Figure 6, with the averages of the user satisfaction ratings and their standard deviation (σ). Version C of the prototype was rated significantly higher than each of the other versions, and Version B was rated slightly higher than Version A. The standard deviations were low for all versions (A: σ=0.8; B: σ=0.4; C: σ=0.5), indicating consistent results. All of the satisfaction data is included in Table 3 (note: the questions are the original ones from the survey, but, for this table, higher ratings always indicate higher satisfaction).
Looking at Table 3, the overall higher satisfaction of Version C is noticeable not only directly in the results for question 7, but also in the answers to the other questions, with Version C being the only version with ratings above 6 points. Besides the overall satisfaction question (Question 7), these higher ratings were also obtained in the questions about the user’s performance (Q.6) and frustration (Q.8), which are directly related, as well as in the questions about time consumption (Q.5) and the difficulty of finding related elements (Q.1), which are also related to each other. When compared to these, the discoverability of features (Q.3) scored poorly, barely passing 5 points, and is an issue to look into in the future. A final note on the satisfaction rating of user performance (Q.6), which was not only one of the highest scoring in Version C, but also the highest scoring in A and B, showing that although most users didn’t have previous experience with the XMApper, they felt confident about how they used it during the tests.
Usability issues
Some usability issues were observed during the study, which need to be addressed in future designs. For instance, in Version C and D, many users used the Show Connected filter and then the Expand All button in quick succession, as this has
proven to be a fruitful combination in many situations. Some users suggested that the option to change the connection render should be more accessible (currently it is in the preferences menu), and maybe due to this, a few users would just apply it as the first thing every time it was available. In addition, one user felt that reaching the sorting options for the Schema Views was also not intuitive, and suggested that the most useful (or even all) of the sorts be moved to the toolbar of the Schema Views, like the filters. Two users said that the direct render (for the connections) should be active by default, as it was simpler to identify the different connections. Several users had problems determining the number of connections of a root element in the base version, and some of those more experienced with mapping tools resorted to deleting or editing connections to count them, instead of trying to identify the overlapped connections. One user suggested having an option that tells you how many connections a singular element of the map has. Some users also reported that the background color of the schema elements when filters are applied (light green) makes it hard to tell which element was selected (light blue) because of the color similarity.
5 Summary and Future Work
Addressing the visualization issues created by large mappings was the goal of this work. To that end, seven new, or improved, interface paradigms were designed and then tested in a user study. The study results revealed a significant time advantage for using the new paradigms over the baseline version. In addition, user satisfaction ratings corroborated those performance results, with the new interface versions receiving significantly higher ratings than the prototype’s baseline version. Comments from study participants supported the validity and usefulness of the new interface paradigms, but some usability issues were still observed and should be addressed in future designs. Overall, the results show that, although clearly a work in progress, the new paradigms are valid in helping the visualization of large mappings, even if some more than others:
[SP1.1] Connection Visibility − Made distinguishing connections and knowing their related elements an easier job for the users; improved task times.
[SP1.2] Connection Render − Users liked the ability to change renders, but it should be more accessible to be effective, and have a more varied render selection.
[SP1.3] Connection Argument Visibility − The option to “show on selected connections” was considered a great improvement over the “show always”, and the configuration options were almost unnecessary.
[SP2.1] Quick Filters − The paradigm that produced the most positive feedback, because of its dramatic view change and ease of toggling on and off. Users liked the prospect of having new different filters.
[SP2.2] Tree Element Connection Status − Helped the users all through tests, although many had problems in grasping the difference between the meaning of a solid and dashed underscore.
[SP2.3] Sorting − Also a popular paradigm, the best one to count connections, but less used than the quick filters due to its more hidden location.
[SP2.4] Connection Statistics − The value of having information about the map was proven, but the use of the charts to apply effects to the view needs work.
There is still much to do to improve the new interface paradigms. Some examples: more connection renders; expanding the statistics to cover the Mapping Board and to work with the Mapping Board interface paradigms; more quick filters can be included in the tree drop-down menu, and choosing which of them can be accessed by toggling should be possible. Test participants also suggested many interface changes for the paradigms, especially in their accessibility. Some of the already existing interface paradigms that weren’t implemented in the XMApper could work well together with the new ones. More development and testing are required. Also, due to time issues, the scrolling of the Mapping Board wasn’t implemented and thus the hiding of function boxes couldn’t be tested for the [SP1.1] Connection Visibility paradigm. Similarly, the “Show unconnected” filter couldn’t be implemented and tested in time for the writing of this paper.
Solr
- Solr in DSpace
- Connecting to Solr
- Bypassing localhost restriction temporarily
- Instructions specific to Tomcat 7 and newer
- Instructions specific to Tomcat 6 and older
- Bypassing localhost restriction permanently
- Accessing Solr
- Solr cores
- Solr admin interface
- Solr queries
- Solr responses
- PHP example
- Examples
- Date of last deposited item
- Top downloaded items by a specific user
- Number of items in a specific community
- Breakdown of submitted items per month
- Statistics breakdown per event type
- Statistics: breakdown of downloads per month
- Statistics: number of downloads (item views) for a specific item per month
- Statistics: number of total downloads in a given time span
- Querying Solr from XMLUI
- Examples
- Date of last deposited item
- Multicore join queries
- "AND" search as default
- Deleting Solr index data
- Solr delete query
- Manually delete Solr index files
- Set up Solritas (VelocityResponseWriter)
- Guidepost
Solr in DSpace
DSpace uses Solr as part of Discovery as an index to speed up access to content metadata and to data about access to DSpace (for statistics). It also provides faceting, search results filtering and, in newer versions of DSpace, also hit highlighting and "More like this". If Discovery is enabled, the DSpace search field accepts Solr search syntax.
Discovery is an optional part of DSpace since 1.7 (with big improvements and configuration format changes in 1.8). When enabled, Discovery replaces DSpace Search and Browse and provides Solr-based statistics. Since DSpace 3, it is also the default storage for the DSpace OAI-PMH provider (server) responses.
Do I need to read this page?
To gain the benefits of faceting and filtering in XMLUI, all you need to do is enable Discovery. The rest of this page describes some advanced uses of Solr - if you want to query Solr directly for theme customization or read DSpace metadata from outside DSpace.
Please note that to get data from Solr, you don't technically need to enable the Discovery aspect, but you do need to populate the index. The statistics core is populated automatically in DSpace 1.6+. To populate the search core (DSpace 1.7+), you need to run `[dspace]/bin/dspace index-discovery` (you will probably want to schedule it in cron to run periodically, too). In DSpace versions older than 4.x, the command was called `[dspace]/bin/dspace update-discovery-index`. There should be no reason to access the oai core (DSpace 3.0), because it contains the same information as the search core, but if you want to populate it, run `[dspace]/bin/dspace oai import`.
Connecting to Solr
By default, the DSpace Solr server is configured to listen only on localhost, port 8080 (unless you specified another port in the Tomcat configuration and the `dspace/config/modules/discovery.cfg` config file). That means that you cannot connect from another machine to the dspace server port 8080 and request a Solr URL - you'll get an HTTP 403 error. This configuration was done for security considerations - the Solr index contains some data that is not accessible via public DSpace interfaces and some of the data might be sensitive.
Before you try to follow the advice below to bypass the localhost restriction, please note:
- Exposing the Solr interface means that any restricted metadata such as dc.description.provenance and non-anonymized usage statistics (client IPs, user agent strings) will be accessible.
- Exposing the Solr interface also means that it will be exposed for write access. There is no easy way to expose only read access.
- Never expose Solr to the internet. If you're exposing it to an IP within your network, add it as an exception to the LocalHostRestrictionFilter. If you have to expose Solr to a public IP, use an SSH tunnel or a VPN for the connection.
Bypassing localhost restriction temporarily
While you could make Solr publicly accessible by changing this default configuration, this is not recommended, because Solr indexes may contain some data you might consider private. Instead, use one of the following simple means to bypass this restriction temporarily. All of them will make Solr accessible only to the machine you're connecting from, for as long as the connection is open.
1. **OpenSSH client - port forwarding**
connect to DSpace server and forward its port 8080 to localhost (machine we're connecting from) port 1234
```
ssh -L 1234:127.0.0.1:8080 mydspace.edu
```
makes mydspace.edu:8080 accessible via localhost:1234 (type `http://localhost:1234` in browser address bar); also opens ssh shell
exit ssh to terminate port forwarding
Alternatively:
```
ssh -N -f -L 1234:127.0.0.1:8080 mydspace.edu
```
run with -N and -f flags if you want ssh to go to background
kill the ssh process to terminate port forwarding
2. **PuTTY client - port forwarding**
Local port forwarding:
```
Connection - SSH - Tunnels
Source port: 1234
Destination: localhost:8080
Local
Auto
Add
```
Once you're connected in PuTTY, visit `http://localhost:1234/solr/` and you should see Solr's web interface. No browser configuration is necessary.
Dynamic port forwarding/ SOCKS proxy*:
```
Connection - SSH - Tunnels
Source port: 1234
Dynamic
Auto
Add
```
Once you're connected in PuTTY, you'll need to configure your browser to use localhost:1234 as a SOCKS proxy (and remove "localhost" and "127.0.0.1" from addresses to bypass this proxy - like in the next step)
3. **OpenSSH client - SOCKS proxy**
connect to DSpace server and run a SOCKS proxy server on localhost port 1234; configure browser to use localhost:1234 as SOCKS proxy and remove "localhost" and "127.0.0.1" from addresses that bypass this proxy
all browser requests now originate from dspace server (source IP is dspace server's IP) - dspace is the proxy server
type `http://localhost:8080` in browser address bar - localhost here is the dspace server
```
ssh -D 1234 mydspace.edu
```
*Note about PuTTY as SOCKS proxy - while it can be configured, it raises a security exception when Solr is accessed. If you figure this out, please add this method here.
Bypassing localhost restriction permanently
**Privacy warning**
Before you read this chapter, make sure you read Connecting to Solr and understand the consequences of any changes.
**Instructions specific to Tomcat 7 and newer**
Here's how you can:
1. turn off the localhost filter in Tomcat
2. replace it with a RemoteAddrValve and allow an enumerated set of IP addresses or subnets (in the following example the 127.0.0.1, 123.123.123.123 IPs and the 111.222.333.* subnet would be allowed):
Change your server.xml or alternatively your context fragment (i.e. conf/Catalina/localhost/solr.xml) like this:
```xml
<Context path="/solr" reloadable="true">
  <Parameter name="LocalHostRestrictionFilter.localhost" value="false" override="false" />
  <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127\.0\.0\.1|123\.123\.123\.123|111\.222\.333\.\d+" />
</Context>
```
Do not forget to include localhost (i.e. 127.0.0.1) in the allowed list, otherwise Discovery, OAI 2.0 and other things depending on Solr won’t work.
**Instructions specific to Tomcat 6 and older**
Please note that the syntax of the "allow" attribute changed in Tomcat 7 to a single regular expression. In Tomcat 6 and older, it was a comma-separated list of regular expressions; therefore this worked in Tomcat 6, but does not work in Tomcat 7+
```xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="111.222.233.*, 123.123.123.123, 127.0.0.1" />
```
See also: [Tomcat 6 documentation: Remote Address Filter](#)
**Accessing Solr**
**Solr cores**
DSpace contains a so-called multicore installation of Solr. That means that there are multiple Solr indexes and configurations sharing one Solr codebase. If you’re familiar with Apache HTTPD, it is analogous to multiple virtual hosts running on one Apache server (separate configuration and webpages), except that individual Solr cores are accessible via different URL (as opposed to virtualhost IP:port).
The two Solr instances in DSpace Discovery are called "search" and "statistics". search contains data about communities, collections, items and bitstreams; statistics contains data about searches, accessing users, IPs etc. The two instances are accessible at the following URLs (relative to the dspace server):
- http://localhost:8080/solr/search/
- http://localhost:8080/solr/statistics/
**Solr admin interface**
Both Solr cores have separate administration interfaces which let you view their respective schemas, configurations, set up logging and submit queries. The schema browser here is very useful to list fields (and their types) included in each index and even see an overview of most common values of individual fields with their frequency.
- http://localhost:8080/solr/search/admin/
- http://localhost:8080/solr/statistics/admin/
**Solr queries**
The base URL of the default Solr search handler is as follows:
- http://localhost:8080/solr/search/search
Using the knowledge of particular fields from Solr Admin and Solr syntax ([SolrQuerySyntax, CommonQueryParameters](#)) you can make your own search requests. You can also read [a brief tutorial](#) to learn the query syntax quickly.
You can also watch the Solr log file to see queries generated by XMLUI in real time, e.g. with `tail -f /dspace/log/solr.log` (in older DSpace versions, this was logged to catalina.out; depending on your OS, Tomcat installation method and logging settings, the path may be different).
Solr responses
By default, Solr responses are returned in XML format. However, Solr can provide several other output formats including JSON and CSV. Discovery uses the javabin format. The Solr request parameter is wt (e.g. &wt=json). For more information, see Response Writers, QueryResponseWriters. An interesting option is to specify an XSLT stylesheet that can transform the XML response (server-side) to any format you choose, typically HTML. Append &wt=xslt&tr=example.xsl to the Solr request URL. The .xsl files must be provided in the [dspace]/solr/search/conf/xslt directory. For more information, see XsltResponseWriter.
PHP example
```
$solr_baseurl_dspace = "http://localhost:8080/solr/search/select?";
$solr_query = "test";
$solr_URL_dspace = $solr_baseurl_dspace."wt=phps&q=".urlencode($solr_query." AND withdrawn:false"); // use withdrawn:false with DSpace newer than 1.8
$response_dspace = file_get_contents($solr_URL_dspace, false, stream_context_create(array('http' => array('timeout' => 10))));
$result_dspace = unserialize($response_dspace);
$num_dspace = $result_dspace['response']['numFound'];
echo $num_dspace;
```
Keep in mind that although using the phps writer may be faster, it's not recommended for untrusted user data (see PHP unserialize() notes).
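As an alternative (not part of the original page), the same count can be retrieved with the JSON response writer instead of phps, which avoids unserialize() entirely; the sketch below assumes the default search core URL used throughout this page.

```python
# Illustrative sketch: query the DSpace "search" core with wt=json instead of wt=phps.
# Assumes the default http://localhost:8080/solr/search/select URL used on this page.
import json
import urllib.parse
import urllib.request

solr_base = "http://localhost:8080/solr/search/select?"
query = "test AND withdrawn:false"   # use withdrawn:false with DSpace newer than 1.8
params = urllib.parse.urlencode({"q": query, "wt": "json", "rows": 0})

with urllib.request.urlopen(solr_base + params, timeout=10) as response:
    result = json.load(response)

print(result["response"]["numFound"])
```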
Examples
**Date of last deposited item**
To get all items (search.resourceType:2) sorted by date accessioned (dc.date.accessioned_dt) in order from newest to oldest (desc; %20 is just an url-encoded space character):
```
http://localhost:8080/solr/search/select?q=search.resourceType:2&sort=dc.date.accessioned_dt%20desc
```
Note:
<table>
<thead>
<tr>
<th>Filter</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>search.resourceType:2</td>
<td>items</td>
</tr>
<tr>
<td>search.resourceType:3</td>
<td>collections</td>
</tr>
<tr>
<td>search.resourceType:4</td>
<td>communities</td>
</tr>
</tbody>
</table>
To get only the first (newest) item (rows=1) with all but the date accessioned field filtered out (fl=dc.date.accessioned) and without the Solr response header (omitHeader=true):
```
http://localhost:8080/solr/search/select?q=search.resourceType:2&sort=dc.date.accessioned_dt%20desc&rows=1&fl=dc.date.accessioned&omitHeader=true
```
**Top downloaded items by a specific user**
```
http://localhost:8080/solr/statistics/select?indent=on&start=0&rows=10&fl=*%2Cscore&qt=standard&wt=standard&facet=true&facet.field=epersonid&q=type:0
```
Note:
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>facet.field=epersonid</td>
<td>You want to group by epersonid, which is the user id</td>
</tr>
<tr>
<td>type:0</td>
<td>Interested in bitstreams only</td>
</tr>
</tbody>
</table>
**Number of items in a specific community**
Community here is specified by its "community_id" - the identifier from the "community" table in database. The result is the "numFound" attribute of the "result" element. This example returns number of items (search.resourceType:2) in community with community_id=85 (location.comm:85):
http://localhost:8080/solr/search/select/?q=location.comm:85+AND+search.resourceType:2&start=0&rows=0&indent=on
Breakdown of submitted items per month
Show breakdown of items (search.resourceType:2) submitted (facet.date=dc.date.accessioned_dt) per month (facet.date.gap=+1MONTH) in the year 2016 (facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z):
http://localhost:8080/solr/search/select?indent=on&rows=0&facet=true&facet.date=dc.date.accessioned_dt&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH&q=search.resourceType:2
Statistics breakdown per event type
Starting from DSpace 3, there is a statistics_type field in the statistics core that contains the "usage event type". Currently, the available types are search, view, search_result and workflow. Here's how to get event breakdown by type, excluding robots (isBot:false):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.field=statistics_type&q=isBot:false
Statistics: breakdown of downloads per month
Show breakdown of bitstream (type:0) downloads per month in the year 2016, excluding robots (isBot:false):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.date=time&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH&q=type:0+AND+isBot:false
Statistics: number of downloads (item views) for a specific item per month
Show bitstream (type:0) downloads per month in the year 2016, excluding robots (isBot:false), for a specific item (2163 in the example):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.date=time&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH&q=type:0+AND+owningItem:2163&fq=-isBot:true&fq=-(bundleName:[*+TO+*]+-bundleName:ORIGINAL)&fq=-(statistics_type:[*+TO+*]+-statistics_type:view)
Statistics: number of total downloads in a given time span
Show the total repository-wide bitstream (type:0) downloads, excluding robots (isBot:false), for a specific duration (September 1 2017 through September 1 2018). No need for faceting to get a total count:
http://localhost:8080/solr/statistics/select?indent=on&rows=0&q=time:[2017-09-01T00:00:00Z+TO+2018-09-01T00:00:00Z]+AND+type:0+AND+isBot:false
Querying Solr from XMLUI
Since Solr returns its responses in XML, it's possible and easy to call custom Solr queries from XMLUI, process the XML response with XSLT and display the results in human-readable form on the HTML page.
There are two ways to do that - synchronously in Cocoon or asynchronously using AJAX (JavaScript) after the page is loaded. Solr queries are usually very fast, so only synchronous calls will be shown here.
You can include another XML document to be processed by XSLT using the document() function. The parameter to this function is a string with the path to the XML document to process. This can be either a static .xml file stored on the server filesystem or a URL, which will be fetched at the time of processing. For Solr, the latter is what we need. Furthermore, we need to distinguish templates for processing this external XML document as opposed to the input XML document. We'll do this using the mode attribute and define a different processing mode for each query.
Now we need to define a template with the same mode that matches elements contained in the Solr response XML:
```xml
<xsl:template match="/response/result/doc/date" mode="solr-response">
Last item was imported: <xsl:value-of select="text()"/>
</xsl:template>
```
Furthermore, we don’t want to hardcode the http://localhost:8080 Solr URL, because this can be changed in config file and that would break the template. So we’ll call a Java function from XSLT to retrieve the configured Solr URL. See the complete example in the next section.
**Examples**
**Date of last deposited item**
For description of the query parameters, see [above](#).
1. Add the confman namespace and “confman” to exclude-result-prefixes. (For an explanation, see how to [Call Java methods from XSLT (Manakin)](#).)
```xml
<xsl:stylesheet
... xmlns:confman="org.dspace.core.ConfigurationManager"
exclude-result-prefixes="... confman">
..
</xsl:stylesheet>
```
2. Add this simple template to process the Solr query result. More complex date formatting can be done easily in XSLT 2.0 (see XSLT 2.0 spec), however Cocoon still uses XSLT 1.0 (see DS-995). It is currently also possible to call Java functions to do date formatting.
```xml
<xsl:template match="/response/result/doc/date" mode="lastItem">
Last item was imported: <xsl:value-of select="substring(text(), 1, 10)"/>
</xsl:template>
```
3. Add the following code to the place where you want the resulting text to appear:
```xml
<xsl:variable name="solr-search-url" select="confman:getProperty('discovery', 'search.server')"/>
<xsl:apply-templates select="document(concat($solr-search-url, '/select?q=search.resourceType:2&amp;sort=dc.date.accessioned_dt%20desc&amp;rows=1&amp;fl=dc.date.accessioned_dt&amp;omitHeader=true'))" mode="lastItem"/>
```
For example, to add it after the list of Recent items in Mirage, override its template like this:
Multicore join queries
Solr supports join queries across multiple cores since Solr 4.0. Thus it’s also supported in DSpace 4.0 (which includes Solr 4.4).
**Example query (not tested)**
http://localhost:8080/solr/search/select/?q=*:*&fq={!join from=owningItem to=search.resourceid fromIndex=statistics}title:"Testing title"
"AND" search as default
Up to and including DSpace 5 (see DS-2809), Discovery uses the "OR" operator as default if you don’t specify an operator between your query keywords. So searching for "John Doe" will also return entries like "Jane Doe" and "John Connor". If you want to change that, you have to edit the **schema.xml** file of the Solr search core:
In [dspace]/solr/search/conf/schema.xml, find this line:
```xml
<solrQueryParser defaultOperator="OR"/>
```
and change it to
```xml
<solrQueryParser defaultOperator="AND"/>
```
Then restart your servlet container (Tomcat).
**Warning**
It’s not officially recommended to change the `defaultOperator` setting. Some unrelated Discovery features might stop working if you do this. I haven’t noticed anything wrong, but you might. If something breaks, make sure to notify us and we’ll try to fix it or remove this tip.
Deleting Solr index data
If for whatever reason you need to delete the data in your index (which would normally be followed by running `[dspace]/bin/dspace index-discovery` to rebuild it; in DSpace versions older than 4.x, the command was called `[dspace]/bin/dspace update-discovery-index`, and you can use the `-b` parameter to reindex everything), here's how you can do it:
**Solr delete query**
If Solr is running, you can access the following URL from the server where Solr is installed (remember the default localhost restriction):
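As a sketch of the standard Solr XML update API (assuming the default search core location used elsewhere on this page), a delete-by-query followed by a commit can be issued like this:

```python
# Sketch only: delete all documents from the search core via the standard Solr
# XML update API, then commit. Run it on the Solr host itself because of the
# localhost restriction described above.
import urllib.request

update_url = "http://localhost:8080/solr/search/update?commit=true"
request = urllib.request.Request(
    update_url,
    data=b"<delete><query>*:*</query></delete>",
    headers={"Content-Type": "text/xml"},
)
with urllib.request.urlopen(request, timeout=60) as response:
    print(response.read().decode("utf-8"))
```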
This will delete all documents in the **search (Discovery) core**.
You can verify the number of documents in the core by running the following query and checking the value of the `numFound` attribute in the output:
```
$ curl "http://localhost:8080/solr/search/select/?q=*:*&rows=0"
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">5</int><lst name="params"><str name="rows">0</str><str name="q">*:*</str></lst></lst><result name="response" numFound="0" start="0"/>
</response>
```
The URL listed in the examples is the default Solr URL in DSpace. If you changed it, you can find it in `search.server` in `[dspace]/config/modules/discovery.cfg` (DSpace 1.8+) or in `solr.log.server` in `[dspace]/config/dspace.cfg` (DSpace 1.7).
Source: Solr Wiki FAQ: How can I delete all documents from my index?
**Manually delete Solr index files**
If your Solr is broken and you can't issue queries, you can still delete the index files manually:
```
$ rm -rf [dspace]/solr/search/data/
```
Then restart the servlet container or reload the solr webapp.
See also:
- Solr: How can I delete all documents from my index?
- DSpace: deleted wrong directory
**Set up Solritas (VelocityResponseWriter)**
Solritas is a generic search interface on top of a Solr index. It can be useful if you want to explore the contents of a Solr index (core) using facets.
To set it up in DSpace 3.0 (which uses Solr 3.5.0):
- download `apache-solr-3.5.0.tgz` from [http://archive.apache.org/dist/lucene/solr/3.5.0/](http://archive.apache.org/dist/lucene/solr/3.5.0/)
- `tar xzvf apache-solr-3.5.0.tgz`
- `mkdir [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/dist/apache-solr-velocity-3.5.0.jar [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/contrib/velocity/lib/{commons-beanutils-1.7.0.jar,commons-collections-3.2.1.jar,velocity-1.6.4.jar,velocity-tools-2.0.jar} [dspace]/solr/lib`
- edit `[dspace]/solr/solr.xml` and add the `sharedLib` attribute:
```
<solr persistent="false" sharedLib="lib"/>
```
- edit the `solrconfig.xml` file of each core where you want to use Solritas. Example for the "search" core: add the velocity ResponseWriter and `requestHandler` in `[dspace]/solr/search/conf/solrconfig.xml`:
It should also be possible to use it in other versions of DSpace (starting from 1.6), but these use different versions of Solr, so modify the procedure accordingly (and expect other caveats):
<table>
<thead>
<tr>
<th>DSpace 6</th>
<th>Solr 4.10.2</th>
</tr>
</thead>
<tbody>
<tr>
<td>DSpace 5</td>
<td>Solr 4.10.2</td>
</tr>
<tr>
<td>DSpace 4</td>
<td>Solr 4.4.0</td>
</tr>
<tr>
<td>DSpace 3</td>
<td>Solr 3.5.0</td>
</tr>
<tr>
<td>DSpace 1.8</td>
<td>Solr 3.3.0</td>
</tr>
<tr>
<td>DSpace 1.7</td>
<td>Solr 1.4.1</td>
</tr>
<tr>
<td>DSpace 1.6</td>
<td>Solr 1.3.0</td>
</tr>
</tbody>
</table>
Note: In older versions, you may need to specify the queryResponseWriter class as `org.apache.solr.request.VelocityResponseWriter` (I haven't tested it, though)
Guidepost
Other pages on this wiki describing Solr and Discovery.
- Discovery: Official DSpace 3.x documentation
- DSpace Discovery: Discovery proposal & purpose, intro video, Discovery 1.8 changes & configuration
- DSpace Discovery HowTo: Discovery screenshots (before Discovery was included in DSpace), most content obsolete (pre-1.7.0)
See also:
- Solr Tutorial
- ajax-solr, a JavaScript library for creating user interfaces to Solr.
- /var/log/tomcat6/catalina.out
Graph Compression
Lecture 17
CSCI 4974/6971
31 Oct 2016
Today’s Biz
1. Reminders
2. Review
3. Graph Compression
Reminders
- Project Update Presentation: In class November 3rd
- Assignment 4: due date November 10th
- Setting up and running on CCI clusters
- Assignment 5: due date TBD (before Thanksgiving break, probably 22nd)
- Assignment 6: due date TBD (early December)
- Office hours: Tuesday & Wednesday 14:00-16:00 Lally 317
- Or email me for other availability
- Tentative: No class November 14 and/or 17
Today’s Biz
1. Reminders
2. Review
3. Graph Compression
Quick Review
**Graph Re-ordering:**
- Improve cache utilization by re-organizing adjacency list
- Many methods
- Random
- Traversal-based
- Traversal+sort-based
- Optimize for bandwidth reduction? Gap minimization?
- NP-hard for common problems, heuristics for days
1. Reminders
2. Review
3. **Graph Compression**
Graph Compression
- Basic idea: graph is very large, can’t fit in shared (or even distributed) memory
- Solutions:
- External memory
- Streaming algorithms
- **Compress adjacency list**
- Why compression: always faster to work on data stored closer to core (usually even with the additional computational overheads)
- Similarly - compress to use fewer nodes in distributed environment
Graph Compression
- (lossless) Compression solutions:
- Delta/gap compression (general) - sort then compress adjacency list using delta methods
- Webgraph framework (exploit web structure - specialized form of delta)
- For general graphs? Open Question?
- Lossy compression: clustering, etc. - can still perform some general computations
The WebGraph Framework: Compression Techniques
Slides from Paolo Boldi and Sebastiano Vigna, DSI, Università di Milano, Italy
The WebGraph Framework: Compression Techniques
Paolo Boldi Sebastiano Vigna
DSI, Università di Milano, Italy
“The” Web graph
- Given a set $U$ of URLs, the graph induced by $U$ is the directed graph having $U$ as set of nodes, and an arc from $x$ to $y$ iff the page with URL $x$ has a link that points to URL $y$.
- The transposed graph can be obtained by reversing all arcs.
- The symmetric graph can be obtained by “forgetting” the arc orientation.
- The Web graph is huge.
What does it mean... “to store (part of) the Web graph”?
- Being able to know the successors of each node (the successors of $x$ are those nodes $y$ for which an arc $x \rightarrow y$ exists);
- this must happen in a reasonable time (e.g., much less than 1 ms/link);
- having a simple way to know the node corresponding to a URL (e.g., minimal perfect hash).
- having a simple way to know the URL corresponding to a node (e.g., front-coded lists).
We shall denote all nodes using natural numbers ($0, 1, \ldots, n - 1$, where $n = |U|$).
Why... to store the Web graph?
- Many algorithms for ranking and community discovery require visits of the Web graph;
- Web graphs offer real-world examples of graphs with the *small-world* property, and as such they can be used to perform experiments to validate small-world theories.
- Web graphs can be used to validate Web graph models (not surprisingly).
- It’s fun.
- It provides new, challenging mathematical and algorithmic problems.
WebGraph is...
- Algorithms for compressing and accessing Web graphs.
- New instantaneous codes for distributions commonly found when compressing Web graphs.
- Java documented reference implementation (Gnu GPL’d) of the above (http://webgraph.dsi.unimi.it/).
- Freely available large graphs.
- Few such collections are publicly available, and, as a matter of fact, WebGraph was /.'d (slashdotted) when it went public.
Previous history
- Connectivity Server (Bharat, Broder, Henzinger, Kumar, and Venkatasubramanian), $\approx 32$ bits/link.
- LINK database (Randall, Stata, Wickremesinghe, and Wiener), $\approx 4.5$ bits/link.
- WebBase (Raghavan and Garcia–Molina), $\approx 5.6$ bits/link.
- Suel and Yuan, $\approx 14$ bits/link.
- Theoretical analysis and experimental algorithms (Adler and Mitzenmacher), $\approx 10$ bits/link.
- Algorithms for separable graphs (Blandford, Blelloch, Kash), $\approx 5$ bits/link.
Currently, WebGraph codes at $\approx 3$ bits/link.
The offset vector tells us from where successors of a given node start. Implicitly, it contains the outdegree of the node.
First simple idea
Use a variable-length representation, choosing it so that
- it is easy to decode;
- minimises the expected length.
And the offsets?
- bit displacement vs. byte displacement (with alignment)
- we must express explicitly the outdegree.
Variable-length representation
Variable-length representations are a basic technique in full-text indexing.
Instantaneous codes
- An *instantaneous code* for $S$ is a mapping $c : S \rightarrow \{0, 1\}^*$ such that for all $x, y \in S$, if $c(x)$ is a prefix of $c(y)$, then $x = y$.
- Let $\ell_x$ be the length in bits of $c(x)$.
- A code with lengths $\ell_x$ has *intended distribution*
$$p(x) = 2^{-\ell_x}.$$
- The choice of the code depends, of course, on the data distribution.
Unary coding
- If $S = \mathbb{N}$, we can represent $x \in S$ writing $x$ zeroes followed by a one.
- Thus $\ell_x = x + 1$, and the intended distribution is
$$p(x) = 2^{-x-1}$$
geometric distribution.
<table>
<thead>
<tr>
<th>$x$</th>
<th>Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>01</td>
</tr>
<tr>
<td>2</td>
<td>001</td>
</tr>
<tr>
<td>3</td>
<td>0001</td>
</tr>
<tr>
<td>4</td>
<td>00001</td>
</tr>
</tbody>
</table>
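As an illustrative sketch (not taken from WebGraph), the unary code in the table above can be written and read back as a plain bit string:

```python
# Illustrative sketch of the unary code: x is written as x zeroes followed by a one,
# so the codeword length is x + 1 bits.

def unary_encode(x: int) -> str:
    return "0" * x + "1"

def unary_decode(bits: str, pos: int = 0):
    """Return (value, position after the codeword) for the code starting at pos."""
    end = bits.index("1", pos)
    return end - pos, end + 1

if __name__ == "__main__":
    print([unary_encode(x) for x in range(5)])   # ['1', '01', '001', '0001', '00001']
    print(unary_decode("0011"))                   # (2, 3)
```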
The \( \gamma \) coding of \( x \in \mathbb{N}^+ \) can be obtained by writing the index of the most significant bit of \( x \) in unary, followed by \( x \) (stripped of the MSB) in binary.
Thus
\[
\ell_x = 1 + 2\lfloor \log x \rfloor \implies p(x) \propto \frac{1}{2x^2} \text{(Zipf)}
\]
<table>
<thead>
<tr>
<th>$x$</th>
<th>Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>010</td>
</tr>
<tr>
<td>3</td>
<td>011</td>
</tr>
<tr>
<td>4</td>
<td>00100</td>
</tr>
<tr>
<td>5</td>
<td>00101</td>
</tr>
</tbody>
</table>
Degrees have a Zipf distribution with exponent \( \approx 2.7 \): use \( \gamma \)!
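As another illustrative sketch (again, not WebGraph's actual implementation), the $\gamma$ code described above writes the index of the MSB in unary followed by the remaining bits of $x$:

```python
# Illustrative sketch of the gamma code: the index of the most significant bit of x
# is written in unary, followed by x stripped of its MSB in binary.

def unary_encode(x: int) -> str:
    return "0" * x + "1"

def gamma_encode(x: int) -> str:
    assert x >= 1
    msb = x.bit_length() - 1      # index of the most significant bit of x
    rest = bin(x)[3:]             # binary representation of x without its MSB
    return unary_encode(msb) + rest

if __name__ == "__main__":
    print([gamma_encode(x) for x in range(1, 6)])
    # ['1', '010', '011', '00100', '00101']  (matches the table above)
```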
Successors & locality
- Since many links are *navigational*, the URLs they point to share a large prefix.
- Thus, if we order URLs lexicographically, for many arcs $x \rightarrow y$ often $|x - y|$ will be small.
- So, we represent the successors $y_1 < y_2 < \cdots < y_k$ using their gaps
$$y_1 - x, y_2 - y_1 - 1, \ldots, y_k - y_{k-1} - 1$$
which are distributed as a Zipf with exponent $\approx 1.2$.
- Commonly used: *variable-length nibble coding*, a list of 4-bit blocks whose MSB specifies whether the list has ended (it is redundant).
- WebGraph uses by default $\zeta_k$, a new family of non-redundant codes with intended distribution close to a Zipfian with exponent $< 1.6$ ($\zeta_3$ is the default choice).
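A minimal sketch of the gap representation just described (WebGraph additionally maps the possibly negative first gap to a natural number before coding it; that step is omitted here):

```python
# Illustrative sketch: replace the sorted successor list of node x by
# y1 - x, y2 - y1 - 1, ..., yk - y(k-1) - 1.

def successor_gaps(x, successors):
    successors = sorted(successors)
    gaps, prev = [], None
    for i, y in enumerate(successors):
        gaps.append(y - x if i == 0 else y - prev - 1)
        prev = y
    return gaps

if __name__ == "__main__":
    # Successor list of node 15 from the running example on the next slides.
    print(successor_gaps(15, [13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034]))
    # -> [-2, 1, 0, 0, 0, 0, 3, 0, 178, 111, 718]
```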
Similarity
URLs that are close in lexicographic order are likely to have similar successor lists, as they belong to the same site, and probably to the same level of the site hierarchy. So, we code a list by referentiation:
- an integer \( r \) (reference): if \( r > 0 \), the list is described as a difference from the list of \( x - r \): a bit string tells us which successors must be copied, and which not;
- a list of *extra nodes*, for the remaining nodes.
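A minimal sketch of referentiation (illustrative only): given a node's successor list and the list of its reference node, compute the copy bit string and the extra nodes. With the lists from the example on the next slides it reproduces the copy list of node 16 referenced to node 15.

```python
# Illustrative sketch of referentiation: describe a successor list as a copy list
# over the reference node's successors plus a list of extra nodes.

def referentiate(successors, reference_successors):
    succ_set = set(successors)
    copy_list = "".join("1" if y in succ_set else "0" for y in reference_successors)
    copied = {y for y in reference_successors if y in succ_set}
    extra_nodes = [y for y in successors if y not in copied]
    return copy_list, extra_nodes

if __name__ == "__main__":
    node15 = [13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034]
    node16 = [15, 16, 17, 22, 23, 24, 315, 316, 317, 3041]
    print(referentiate(node16, node15))
    # -> ('01110011010', [22, 316, 317, 3041])
```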
Referentiation: an example
<table>
<thead>
<tr>
<th>Node</th>
<th>Outdegree</th>
<th>Successors</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>15, 16, 17, 22, 23, 24, 315, 316, 317, 3041</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>13, 15, 16, 17, 50</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Node</th>
<th>Outd.</th>
<th>Ref.</th>
<th>Copy list</th>
<th>Extra nodes</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>0</td>
<td></td>
<td>13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>1</td>
<td>01110011010</td>
<td>22, 316, 317, 3041</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>3</td>
<td>11110000000</td>
<td>50</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Differential compression
WebGraph pushes this idea much farther: we use a list of *copy blocks*, which specify by inclusion/exclusion the sublists that must be alternately copied or discarded.
<table>
<thead>
<tr>
<th>Node</th>
<th>Outdegree</th>
<th>Successors</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>15, 16, 17, 22, 23, 24, 315, 316, 317, 3041</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>13, 15, 16, 17, 50</td>
</tr>
<tr>
<td></td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Node</th>
<th>Outd.</th>
<th>Ref.</th>
<th># blocks</th>
<th>Copy blocks</th>
<th>Extra nodes</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>0</td>
<td></td>
<td></td>
<td>13, 15, 16, 17, 18, 19, 23, ...</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>1</td>
<td>7</td>
<td>0, 0, 2, 1, 1, 0, 0</td>
<td>22, 316, ...</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>3</td>
<td>1</td>
<td>4</td>
<td>50</td>
</tr>
<tr>
<td></td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Consecutivity
- WebGraph exploits the fact that many links within a page are consecutive (with respect to the lexicographic order). This is due to at least two distinct phenomena.
- First of all, most pages contain sets of navigational links which point to a fixed level of the hierarchy.
- Second, in the transposed Web graph pages that are high in the site hierarchy (e.g., the home page) are pointed to by most pages of the site.
- More in general, consecutivity is the dual of distance-one similarity. If a graph is easily compressible using similarity at distance one, its transpose must sport large intervals of consecutive links, and vice versa.
Intervalisation
To exploit consecutivity, WebGraph uses a special representation for extra nodes.
- if there are enough large intervals, they are coded using their left extreme and their length;
- the remaining extra nodes, called residuals, are represented separately.
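A minimal sketch of intervalisation (illustrative; `min_len` is a made-up parameter, and WebGraph additionally codes extremes, lengths and residuals relatively):

```python
# Illustrative sketch: scan the sorted extra nodes for runs of consecutive
# identifiers; runs of at least min_len nodes become (left extreme, length)
# intervals, everything else becomes a residual.

def intervalise(extra_nodes, min_len=2):
    intervals, residuals = [], []
    nodes = sorted(extra_nodes)
    i = 0
    while i < len(nodes):
        j = i
        while j + 1 < len(nodes) and nodes[j + 1] == nodes[j] + 1:
            j += 1
        run_len = j - i + 1
        if run_len >= min_len:
            intervals.append((nodes[i], run_len))
        else:
            residuals.extend(nodes[i:j + 1])
        i = j + 1
    return intervals, residuals

if __name__ == "__main__":
    print(intervalise([13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034]))
    # -> ([(15, 5), (23, 2)], [13, 203, 315, 1034])
```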
### Intervalisation: an example
<table>
<thead>
<tr>
<th>Node</th>
<th>Outdegree</th>
<th>Successors</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>15, 16, 17, 22, 23, 24, 315, 316, 317, 3041</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>13, 15, 16, 17, 50</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Node</th>
<th>Outd.</th>
<th>Ref.</th>
<th># bl.</th>
<th>Copy bl.s</th>
<th># int.</th>
<th>Lft extr.</th>
<th>Lth</th>
<th>Residuals</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15</td>
<td>11</td>
<td>0</td>
<td></td>
<td></td>
<td>2</td>
<td>0, 2</td>
<td>3, 0</td>
<td>5, 189, 111, 718</td>
</tr>
<tr>
<td>16</td>
<td>10</td>
<td>1</td>
<td>1</td>
<td>0, 0, ...</td>
<td>1</td>
<td>600</td>
<td>0</td>
<td>12, 3018</td>
</tr>
<tr>
<td>17</td>
<td>0</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>18</td>
<td>5</td>
<td>3</td>
<td>1</td>
<td>4</td>
<td>0</td>
<td></td>
<td></td>
<td>50</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td></td>
<td>...</td>
</tr>
</tbody>
</table>
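The interval extraction shown in the example above can be sketched as follows; this is a minimal reconstruction that ignores the final gap/ζ coding of extremes, lengths and residuals, and assumes a minimum interval length of 2 (which is what the node-15 row implies).

```python
def intervalise(extras, min_len=2):
    """Split a sorted list of extra nodes into maximal runs of consecutive
    integers of length >= min_len, coded as (left extreme, length), plus the
    remaining residual nodes.  Gap coding of the result is omitted here."""
    intervals, residuals, i = [], [], 0
    while i < len(extras):
        j = i
        while j + 1 < len(extras) and extras[j + 1] == extras[j] + 1:
            j += 1                     # extend the run of consecutive nodes
        run = extras[i:j + 1]
        if len(run) >= min_len:
            intervals.append((run[0], len(run)))
        else:
            residuals.extend(run)
        i = j + 1
    return intervals, residuals

# Node 15 of the example has no reference, so all successors are "extra" nodes.
print(intervalise([13, 15, 16, 17, 18, 19, 23, 24, 203, 315, 1034]))
# -> ([(15, 5), (23, 2)], [13, 203, 315, 1034])
```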
Choices in the reference scheme
- How do you choose the reference node for $x$?
- You consider the successor lists of the last $W$ nodes, but you do not consider lists that would cause a reference chain of more than $R$ links.
- The parameter $R$ is essential in determining the compression/speed tradeoff; $W$ essentially affects compression time only (a small selection sketch follows).
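A greedy reference selection in the spirit of these two bullets might look like the sketch below. The data structures and the overlap heuristic are assumptions for illustration; the real implementation estimates the actual number of bits saved by each candidate.

```python
def choose_reference(x, succ_of, ref_depth, W, R):
    """Pick a reference node for x among the previous W nodes, or None.

    succ_of maps an already-processed node to its successor list and
    ref_depth to the length of its reference chain; both are assumed to be
    maintained by the caller."""
    best, best_gain = None, 0
    mine = set(succ_of[x])
    for y in range(max(0, x - W), x):
        if ref_depth.get(y, 0) + 1 > R:       # would exceed R chained references
            continue
        gain = len(mine & set(succ_of[y]))    # stand-in for the bits actually saved
        if gain > best_gain:
            best, best_gain = y, gain
    return best
```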
Implementation
- Random access to successor lists is implemented \textit{lazily} through a cascade of \textit{iterators}.
- Each sequence of intervals and each residual list causes the creation of an iterator; the same happens for references.
- The results of all iterators are then merged.
- The advantage of laziness is that we never have to build an actual list of successors in memory, so the overhead is limited to the number of \textit{actual reads}, not to the number of successor lists that would be necessary to re-create a given one (see the generator-based sketch below).
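As an illustration of this lazy, iterator-based decoding (not the actual Java implementation; the per-node record layout, with an explicit copy mask instead of copy blocks, is an assumption made for brevity), a Python sketch could look like this:

```python
import heapq

def successors(node, store):
    """Lazily iterate over the successors of node, never materialising a list.

    store[node] is assumed to hold (ref, copy_mask, intervals, residuals)."""
    ref, copy_mask, intervals, residuals = store[node]
    parts = []
    if ref is not None:
        # recursively decode the referenced list, keeping only the masked entries
        parts.append(s for s, keep in zip(successors(ref, store), copy_mask) if keep)
    # expand each (left extreme, length) interval lazily
    parts.append(s for left, length in intervals for s in range(left, left + length))
    parts.append(iter(residuals))
    return heapq.merge(*parts)          # merge the sorted iterators on demand

# Mirrors nodes 15 and 16 of the example above (copy mask instead of blocks).
store = {
    15: (None, [], [(15, 5), (23, 2)], [13, 203, 315, 1034]),
    16: (15, [0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0], [(316, 2)], [22, 3041]),
}
print(list(successors(16, store)))
# -> [15, 16, 17, 22, 23, 24, 315, 316, 317, 3041]
```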
Access speed
- Access speed to a compressed graph is commonly measured in the time required to access a link (≈ 300 ns for WebGraph).
- This quantity, however, is strongly dependent on the architecture (e.g., cache size), and, even more, on low-level optimisations (e.g., hard-coding of the first codewords of an instantaneous code).
- To compare speeds reliably, we need public data, that anyone can access, and a common framework for the low-level operations.
- A first step is http://webgraph-data.dsi.unimi.it/. We provide freely available data to compare compression techniques.
WebGraph combines new codes, new insights on the structure of the Web graph and new algorithmic techniques to achieve a very high compression ratio, while still retaining a good access speed (but it could be better).
Our software is highly tunable: you can experiment with dozens of codes, algorithmic techniques and compression parameters, and there is a large unexplored space of combinations.
A theoretically interesting question is how to combine differential compression and intervalisation optimally: we do not know whether the current greedy approach (first copy as much as you can, then intervalise) is necessarily the best one.
Today: graph compression
- Implement basic compressed graph representation
- Examine effects of various ordering schemes
Graph Compression
Blank code and data available on website
(Lecture 17)
www.cs.rpi.edu/~slotag/classes/FA16/index.html
8 Practical Recursion: the Leap of Faith
When people first meet the idea of recursive procedures, they almost always think there is some sort of magic involved. “How can that possibly work? That procedure uses itself as a subprocedure! That’s not fair.” To overcome that sense of unfairness, the combining method works up to a recursive procedure by starting small, so that each step is completely working before the next step, which solves a larger problem, relies on it. There is no mystery about allowing `downup5` to rely on `downup4`.
The trouble with the combining method is that it’s too much effort to be practical. Once you believe in recursion, you don’t want to have to write a special procedure for a size-one problem, then another special procedure for a size-two problem, and so on; you want to write the general recursive solution right away. I’m calling this the “leap of faith” method because you write a procedure while taking on faith that you can invoke the same procedure to handle a smaller subproblem.
Recursive Patterns
Let’s look, once more, at the problem we were trying to solve when writing the `downup` procedure. We wanted the program to behave like this:
```scheme
? downup "hello
hello
hell
hel
he
h
he
hel
hell
hello
```
The secret of recursive programming is the same as a secret of problem solving in general: see if you can reduce a big problem to a smaller problem. In this case we can look at the printout from `downup` this way:
```
hello
hell   \
hel     |
he      |
h       |  downup "hell
he      |
hel     |
hell   /
hello
```
What I’ve done here is to notice that the printout from applying `downup` to a five-letter word, `hello`, includes within itself the printout that would result from applying `downup` to a smaller word, `hell`.
This is where the leap of faith comes in. I’m going to pretend that `downup` already `works` for the case of four-letter words. We haven’t begun to write the procedure yet, but never mind that. So it seems that in order to evaluate the instruction
```
downup "hello
```
we must carry out these three instructions:
```
print "hello"
downup "hell"
print "hello"
```
(The two `print` instructions print the first and last lines of the desired result, the ones that aren’t part of the smaller `downup` printout.)
To turn these instructions into a general procedure, we must use a variable in place of the specific word `hello`. We also have to figure out the general relationship that is exemplified by the transformation from `hello` into `hell`. This relationship is, of course, simply `butlast`. Here is the procedure that results from this process of generalization:
```
to downup :word
print :word
downup butlast :word
print :word
end
```
As you already know, this procedure won’t quite work. It lacks a stop rule. But once we have come this far, it’s a relatively simple matter to add the stop rule. All we have to do is ask ourselves, “What’s the smallest case we want the program to handle?” The answer is that for a single-letter word the `downup` should just print the word once. In other words, for a single-letter word, `downup` should carry out its first instruction and then stop. So the stop rule goes after that first instruction, and it stops if the input has only one letter:
```
to downup :word
print :word
if equalp count :word 1 [stop]
downup butlast :word
print :word
end
```
Voilà!
The trick is not to think about the stop rule at first. Just accept, on faith, that the procedure will somehow manage to work for inputs that are smaller than the one you’re interested in. Most people find it hard to do that. Since you haven’t written the program yet, after all, the faith I’m asking you to show is really unjustified. Nevertheless you have to pretend that someone has already written a version of the desired procedure that works for smaller inputs.
Let’s take another example from Chapter 7.
```
? one.per.line "hello
h
e
l
l
o
```
There are two different ways in which we can find a smaller pattern within this one. First we might notice this one:
```
h        (first of hello)
e   \
l    |  one.per.line "ello
l    |
o   /
```
This pattern would lead to the following procedure, for which I haven’t yet invented a stop rule.
```
to one.per.line :word
print first :word
one.per.line butfirst :word
end
```
Alternatively we might notice this pattern:
```
h   \
e    |  one.per.line "hell
l    |
l   /
o        (last of hello)
```
In that case we’d have a different version of the procedure. This one, also, doesn’t yet have a stop rule:
```
to one.per.line :word
one.per.line butlast :word
print last :word
end
```
Either of these procedures can be made to work by adding the appropriate stop rule:
```
if emptyp :word [ stop]
```
This instruction should be the first in either procedure. Since both versions work, is there any reason to choose one over the other? Well, there’s no theoretical reason but there is a practical one. It turns out that first and butfirst work faster than last and butlast. It also turns out that procedures that are tail recursive (that is, with the recursion step at the end) can survive more levels of invocation, without running out of memory, than those that are recursive in other ways. For both of these reasons the first version of one.per.line is a better choice than the second. (Try timing both versions with a very long list as input.)
Rewrite the say procedure from page 95 recursively.
---
**The Leap of Faith**
If we think of
```
to one.per.line :word
print first :word
one.per.line butfirst :word
end
```
merely as a statement of a true fact about the “shape” of the result printed by \texttt{one.per.line}, it’s not very remarkable. The amazing part is that this fragment is \textit{runnable}.* It doesn’t \textit{look} runnable because it invokes itself as a helper procedure, and—if you haven’t already been through the combining method—that looks as if it can’t work. “How can you use \texttt{one.per.line} when you haven’t written it yet?”

* Well, almost. It needs a base case.
The leap of faith method is the assumption that the procedure we’re in the middle of writing already works. That is, if we’re thinking about writing a \texttt{one.per.line} procedure that can compute \texttt{one.per.line} “hello, we assume that \texttt{one.per.line} "ello will work.
Of course it’s not \textit{really} a leap of faith, in the sense of something accepted as miraculous but not understood. The assumption is justified by our understanding of the combining method. For example, we understand that the five-letter \texttt{one.per.line} is relying on the four-letter version of the problem, not really on itself, so there’s no circular reasoning involved. And we know that if we had to, we could write \texttt{one.per.line1} through \texttt{one.per.line4} “by hand.”
The reason that the technique in this chapter may seem more mysterious than the combining method is that this time we are thinking about the problem top-down. In the combining method, we had already written \texttt{whatever4} before we even raised the question of \texttt{whatever5}. Now we start by thinking about the larger problem and assume that we can rely on the smaller one. Again, we’re entitled to that assumption because we’ve gone through the process from smaller to larger so many times already.
The leap of faith method, once you understand it, is faster than the combining method for writing new recursive procedures, because you can write the recursive solution immediately, without bothering with many individual cases. The reason I showed you the combining method first is that the leap of faith method seems too much like magic, or like “cheating,” until you’ve seen several believable recursive programs. The combining method is the way to learn about recursion; the leap of faith method is the way to write recursive procedures once you’ve learned.
**The Tower of Hanoi**
One of the most famous recursive problems is a puzzle called the Tower of Hanoi. You can find this puzzle in toy stores; look for a set of three posts and five or six disks. You start out with the puzzle arranged like this:

The object of the puzzle is to move all of the disks to the second post, like this:

This looks easy, but there are rules you must follow. You can only move one disk at a time, and you can’t put a disk on top of a smaller disk. You might start trying to solve the puzzle this way:
*first move:*

*second move:*

After that, you could move disk number 1 either onto post A, on top of disk 3, or onto post C, on top of disk 2.
I’m about to describe a solution to the puzzle, so if you want to work on it yourself first, stop reading now.
In the examples of downup and one.per.line, we identified each problem as one for which a recursive program was appropriate because within the pattern of the overall solution we found a smaller, similar pattern. The same principle will apply in this case. We want to end up with all five disks on post B. To do that, at some point we have to move disk 5 from post A to post B. To do that, we first have to get the other four disks out of the way. Specifically, “out of the way” must mean onto post C. So the solution to the problem can be represented graphically this way, in three parts:
The first part of the solution is to move disks 1 through 4 from post A to post C. The second part is a single step, moving disk 5 from post A to post B. The third part, like the first, involves several steps, to move disks 1 through 4 from post C to post B.
If you’ve developed the proper recursive spirit, you’ll now say, “Aha! The first part and the third part are just like the entire puzzle, only with four disks instead of five!” I hope that after this example you’ll develop a sort of instinct that will let you notice patterns like that instantly. You should then be ready to make a rough draft of a procedure to solve the puzzle:
```
to hanoi :number
hanoi :number-1
movedisk :number
hanoi :number-1
end
```
Of course, this isn’t at all a finished program. For one thing, it lacks a stop rule. (As usual, we leave that part for last.) For another, we have to write the subprocedure \texttt{movedisk} that moves a single disk. But a more important point is that we’ve only provided for changing the disk number we’re moving, not for selecting which posts to move from and to. You might want to supply \texttt{hanoi} with two more inputs, named \texttt{from} and \texttt{to}, which would be the names of the posts. So to solve the puzzle we’d say
```
hanoi 5 "A "B
```
But that’s not quite adequate. \texttt{Hanoi} also needs to know the name of the \textit{third} post. Why? Because in the recursive calls, that third post becomes one of the two “active” ones.
For example, here are the three steps in solving the five-disk puzzle:
```
hanoi 4 "A "C
movedisk 5 "A "B
hanoi 4 "C "B
```
You can see that both of the recursive invocations need to use the name of the third post. Therefore, we’ll give \texttt{hanoi} a fourth input, called \texttt{other}, that will contain that name. Here is another not-quite-finished version:
```
to hanoi :number :from :to :other
hanoi :number-1 :from :other :to
movedisk :number :from :to
hanoi :number-1 :other :to :from
end
```
This version still lacks a stop rule, and we still have to write \texttt{movedisk}. But we’re much closer. Notice that \texttt{movedisk} does \textit{not} need the name of the third post as an input. Its job is to take a single step, moving a single disk. The unused post really has nothing to do with it. Here’s a simple version of \texttt{movedisk}:
```
to movedisk :number :from :to
print (sentence [Move disk] :number "from :from "to :to)
end
```
What about the stop rule in \texttt{hanoi}? The first thing that will come to your mind, probably, is that the case of moving disk number 1 is special because there are no preconditions. (No other disk can ever be on top of number 1, which is the smallest.) So you might want to use this stop rule:
```
if equalp :number 1 [movedisk 1 :from :to stop]
```
Indeed, that will work. (Where would you put it in the procedure?) But it turns out that a slightly more elegant solution is possible. You can let the procedure for disk 1 go ahead and invoke itself recursively for disk number 0. Since there is no such disk, the procedure then has nothing to do. By this reasoning the stop rule should be this:
if equalp :number 0 [stop]
You may have to trace out the procedure to convince yourself that this really works. Convincing yourself is worth the effort, though; it turns out that very often you can get away with allowing an “extra” level of recursive invocation that does nothing. When that’s possible, it makes for a very clean-looking procedure. (Once again, I’ve left you on your own in deciding where to insert this stop rule in hanoi.)
If your procedure is working correctly, you should get results like this for a small version of the puzzle:
```
? hanoi 3 "A "B "C
Move disk 1 from A to B
Move disk 2 from A to C
Move disk 1 from B to C
Move disk 3 from A to B
Move disk 1 from C to A
Move disk 2 from C to B
Move disk 1 from A to B
```
If you like graphics programming and have been impatient to see a turtle in this book, you might want to write a graphic version of movedisk that would actually display the moves on the screen.
More Complicated Patterns
Suppose that, instead of downup, we wanted to write updown, which works like this:
```
? updown "hello
h
he
hel
hell
hello
hell
hel
he
h
```
It’s harder to find a smaller subproblem within this pattern. With downup, removing the first and last lines of the printout left a downup pattern for a shorter word. But the middle lines of this updown pattern aren’t an updown. The middle lines don’t start with a single letter, like the h in the full pattern. Also, the middle lines are clearly made out of the word hello, not some shortened version of it. You might want to try to find a solution yourself before reading further.
There are several approaches to writing updown. One thing we could do is to divide the pattern into two parts:
```
h      \
he      |
hel     |  up "hello
hell    |
hello  /
hell   \
hel     |  down "hell
he      |
h      /
```
It is relatively easy to invent the procedures up and down to create the two parts of the pattern.
```
to up :word
if emptyp :word [stop]
up butlast :word
print :word
end
to down :word
if emptyp :word [stop]
print :word
down butlast :word
end
```
Then we can use these as subprocedures of the complete updown:
```
to updown :word
up :word
down butlast :word
end
```
Another approach would be to use numbers to keep track of things, as in the `inout` example of Chapter 7. In this case we can consider the middle lines as a smaller version of the problem.
```
updown1 "hello 1:

h
he      \
hel      |
hell     |
hello    |  updown1 "hello 2
hell     |
hel      |
he      /
h
```
In this point of view all the inner, smaller `updown` patterns are made from the same word, `hello`. But each invocation of `updown1` (which is what I’ll call this version of `updown`) will use a second input, a number that tells it how many letters to print in the first and last lines:
```
? updown1 "hello 3
hel
hell
hello
hell
hel
? updown1 "hello 5
hello
```
We need a subprocedure, `truncate`, that prints the beginning of a word, up to a certain number of letters.
```
to truncate :word :size
if equalp count :word :size [print :word stop]
truncate butlast :word :size
end

to updown1 :word :size
truncate :word :size
if equalp count :word :size [stop]
updown1 :word :size+1
truncate :word :size
end
```
More Complicated Patterns
(The helper procedure **truncate** is the sort of thing that should really be an operation, for the same reason that **second** was better than **prsecond** on page 76. We’ll come back to the writing of recursive operations in Chapter 11.)
Finally, we can write a new superprocedure called **updown** that uses **updown1** with the correct inputs. (If you try all these approaches on the computer, remember that you can have only one procedure named **updown** in your workspace at a time.)
```
to updown :word
updown1 :word 1
end
```
A third approach, which illustrates a very powerful technique, also uses an initialization procedure **updown** and a subprocedure **updown1** with two inputs. In this version, though, both inputs to the subprocedure are words: the partial word that we’re printing right now and the partial word that is not yet to be printed.
```
updown1 "he "llo
```
In this example, to print an updown pattern for the word **hello**, the two subprocedure inputs would be **h** (what’s printed on the first line) and **ello** (what isn’t printed there). For the inner pattern with the first and last lines removed, the two inputs would be **he** and **llo**. Here is the program:
```
to updown1 :now :later
print :now
if emptyp :later [stop]
updown1 (word :now first :later) butfirst :later
print :now
end

to updown :word
updown1 first :word butfirst :word
end
```
This program may be a little tricky to understand. The important part is updown1. Read it first without paying attention to the stop rule; see if you can understand how it corresponds to the updown pattern. A trace of its recursive invocations might help:
```
updown "hello
updown1 "h "ello
updown1 "he "llo
updown1 "hel "lo
updown1 "hell "o
updown1 "hello
```
The innermost level of recursion has been reached when the second input is the empty word. Notice how first, butfirst, and word are used in combination to calculate the inputs.
Write a recursive procedure `slant` that takes a word as input and prints it on a diagonal, one letter per line, like this:
```
? slant "salami
s
 a
  l
   a
    m
     i
```
**A Mini-project: Scrambled Sentences**
Just as Logo programs can be iterative or recursive, so can English sentences. People are pretty good at understanding even rather long iterative sentences: “This is the farmer who kept the cock that waked the priest that married the man that kissed the maiden that milked the cow that tossed the dog that worried the cat that killed the rat that ate the malt that lay in the house that Jack built.” But even a short recursive (nested) sentence is confusing: “This is the rat the cat the dog worried killed.”
Write a procedure that takes as its first input a list of noun-verb pairs representing actor and action, and as its second input a word representing the object of the last action in the list. Your procedure will print two sentences describing the events, an iterative one and a nested one, following this pattern:
```
? scramble [[girl saw] [boy owned] [dog chased] [cat bit]] "rat
This is
the girl that saw
the boy that owned
the dog that chased
the cat that bit
the rat
This is
the rat
the cat
the dog
the boy
the girl
saw
owned
chased
bit
```
You don't have to worry about special cases like “that Jack built”; your sentences will follow this pattern exactly.
Ordinarily the most natural way to program this problem would be as an operation that outputs the desired sentence, but right now we are concentrating on recursive commands, so you'll write a procedure that prints each line as shown above.
**Procedure Patterns**
Certain patterns come up over and over in programming problems. It’s worth your while to learn to recognize some of them. For example, let’s look again at `one.per.line`:
```
to one.per.line :word
if emptyp :word [stop]
print first :word
one.per.line butfirst :word
end
```

This is an example of a very common pattern:

```
to procedure :input
if emptyp :input [stop]
do.something.to first :input
procedure butfirst :input
end
```

A procedure pattern is different from the result patterns we examined earlier in this chapter. Before we were looking at what we wanted a not-yet-written procedure to accomplish; now we are looking at already-written procedures to find patterns in their instructions. A particular procedure, such as `one.per.line` above, might look like this pattern with the blanks filled in. Do you see how `one.per.line` fits the pattern?
Continuing our investigation of literary forms, write a procedure to compose love poems, like this:
```
? lovepoem "Mary
M is for marvelous, that’s what you are.
A is for awesome, the best by far.
R is for rosy, just like your cheek.
Y is for youthful, with zest at its peak.
Put them together, they spell Mary,
The greatest girl in the world.
```
The core of this project is a database of deathless lines, in the form of a list of lists:
```
make "lines [[A is for albatross, around my neck.] [B is for baloney, your opinions are dreck.] [C is for corpulent, ...] ...]
```
and a recursive procedure select that takes a letter and a list of lines as inputs and finds the appropriate line to print by comparing the letter to the beginning of each line in the list.
Another common pattern is a recursive procedure that counts something numerically, like countdown:
```
to countdown :number
if equalp :number 0 [stop]
print :number
countdown :number-1
end
```

And here is the pattern:

```
to procedure :number
if equalp :number 0 [stop]
do.something
procedure :number-1
end
```
A procedure built on this pattern is likely to have additional inputs so that it can do something other than just manipulate the number itself. For example:

```
to manyprint :number :text
if equalp :number 0 [stop]
print :text
manyprint :number-1 :text
end
```

```
? manyprint 4 [Lots of echo in this cavern.]
Lots of echo in this cavern.
Lots of echo in this cavern.
Lots of echo in this cavern.
Lots of echo in this cavern.
```

```
to multiply :letters :number
if equalp :number 0 [stop]
print :letters
multiply (word :letters first :letters) :number-1
end
```
One way to become a skillful programmer is to study other people’s programs carefully. As you read the programs in this book and others, keep an eye open for examples of patterns that you think might come in handy later on.
**Tricky Stop Rules**
Suppose that instead of one per line we’d like a procedure to print the members of a list two per line. (This is plausible if we have a list of many short items, for example. We’d probably want to control the spacing on each line so that the items would form two columns, but let’s not worry about that yet.)
The recursive part of this program is fairly straightforward:
```lisp
to two.per.line :stuff
print list (first :stuff) (first butfirst :stuff)
two.per.line butfirst butfirst :stuff
end
```
The only thing out of the ordinary is that the recursive step uses a subproblem that’s smaller by two members, instead of the usual one.
But it’s easy to fall into a trap about the stop rule. It’s not good enough to say
```lisp
if emptyp :stuff [stop]
```
because in this procedure it matters whether the length of the input is odd or even. These two possibilities give rise to two stop rules. For an even-length list, we stop if the input is empty. But for an odd-length list, we must treat the case of a one-member list specially also.
```lisp
to two.per.line :stuff
if emptyp :stuff [stop]
if emptyp butfirst :stuff [show first :stuff stop]
print list (first :stuff) (first butfirst :stuff)
two.per.line butfirst butfirst :stuff
end
```
It’s important to get the two stop rules in the right order; we must be sure the input isn’t empty before we try to take its butfirst.
Why does this procedure include one show instruction and one print instruction? Why aren’t they either both show or both print?
Abstract
In the following, we describe our system developed for the Semeval2019 Task 8. We fine-tuned a BERT checkpoint on the qatar living forum dump and used this checkpoint to train a number of models. Our hand-in for subtask A consists of a fine-tuned classifier from this BERT checkpoint. For subtask B, we first have a classifier deciding whether a comment is factual or non-factual. If it is factual, we retrieve intra-forum evidence and using this evidence, have a classifier deciding the comment’s veracity. We trained this classifier on ratings which we crawled from qatarliving.com.
1 Introduction
This paper contains our system description for the SemEval2019 task 8 about Fact Checking in Community Forums. The task 8 is divided into two subtasks: In subtask A, the goal is to determine whether a question asks for a factual answer, an opinion or is just posed to socialize. In subtask B, if we have a question asking for a factual answer, we classify the answers to such a question into three categories, namely the answer is either true, false or non-factual, i.e. it does not answer the question in a factual way.
For subtask A, we trained a BERT classifier on the training set and optimized hyper-parameters on the development set. For subtask B, we decided to tackle the challenge with two binary classifiers: Firstly, we decide whether a comment is factual or not. If our classifier decides that a comment is factual, we retrieve intra-forum evidence to determine the comment’s veracity using a textual entailment approach. Given the small training set for subtask B, we decided to leverage openly available information on qatarliving.com to create a medium-sized training set. We found that comments on qatarliving.com are sometimes associated with ratings\(^1\) (ranging from 1 to 5) and discovered that high ratings often correspond to replies answering the question in a true way. If a comment has received a low rating, we inferred that the comment was most likely not helpful to answer the question and therefore we decided to treat it as a false reply.
2 Related Work
Automated Fact Checking is recently mostly perceived as a number of tasks which can be pipelined together. In the FEVER shared task, most participating systems would first find evidence and then train textual entailment models (Thorne et al., 2018). Related work for Fact Checking in community forums considers a multi-faceted approach incorporating firstly what is said, how it is said and by whom and secondly external evidence from either the web or from the forum itself (Mihaylova et al., 2018). An SVM is trained on top of these features to decide the veracity of a comment.
In our system, we took a similar approach by first retrieving possible evidence, secondly filtering such evidence (through another classifier) and eventually train a system which decides the veracity of a comment based on whether the comment is entailed by the found evidence or not.
3 System Description
Recent progress in natural language understanding shows that pre-training transformer decoders on language modelling tasks leads to remarkable transferable knowledge which boosts performance on a wide range of NLP tasks (Radford et al., 2018). The most recent development then is the
\(^1\)We learnt after the deadline of the shared task that these ratings were automatically generated: https://www.qatarliving.com/forum/technology-internet/posts/searching-information-qatar-living-has-just-grown-faster
Deep Bidirectional Transformers (BERT) which is jointly pre-trained on a masked language modelling task (therefore bidirectional) and on a next-sentence prediction task pushing already impressive results even further (Devlin et al., 2018). All our classifiers in our hand-in are fine-tuned BERT models.
3.1 Domain Adaptation
We firstly fine-tuned a BERT checkpoint (pre-trained on uncased English data only) on the unannotated dataset from Qatar Living with 189,941 questions and 1,894,456 comments (Nakov et al., 2016). Fine-tuning a BERT checkpoint on a new domain consists of further training it jointly on the masked language modelling task and the next-sentence prediction task. For this dataset, it is not always trivial to decide what a sentence is and we use whole comments later on anyways, so we replaced the next-sentence prediction task by a next-comment prediction task, that is our model has to guess whether two comments are appearing consecutively in a thread or not.
Given the peculiarities of the BERT tokenizer, we cleaned the dataset through the following steps:
- we lowercased all characters
- we replaced a character which appears more than three times consecutively to only appear three times ("!!!!!!!!!!!" then becomes "!!!")
- we removed user specific quotes
- we removed comments containing a type/token ratio\(^2\) of less than 0.15 (because we noticed that they are mostly spam)
- we replaced urls with a special token "url", phone numbers with a special token "tel" and email addresses with a special token "email"
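A minimal Python sketch of these cleaning steps follows. The regular expressions are illustrative rather than the authors' exact patterns, and the removal of user specific quotes is omitted because it needs per-user statistics.

```python
import re

def clean_comment(text, min_ttr=0.15):
    """Apply the cleaning steps listed above; return None for comments to drop."""
    text = text.lower()
    # cap any character repeated more than three times to exactly three ("!!!!!!" -> "!!!")
    text = re.sub(r"(.)\1{3,}", r"\1\1\1", text)
    # replace urls, e-mail addresses and phone numbers with special tokens
    text = re.sub(r"https?://\S+|www\.\S+", " url ", text)
    text = re.sub(r"\S+@\S+\.\S+", " email ", text)
    text = re.sub(r"\+?\d[\d\s\-]{6,}\d", " tel ", text)
    tokens = text.split()
    if tokens and len(set(tokens)) / len(tokens) < min_ttr:
        return None                    # very low type/token ratio: treat as spam
    return text
```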
In Table 1, we show the masked language modelling accuracy (MLM) and next-comment prediction accuracy (NC) for the uncleaned and the cleaned version, both fine-tuned for 100k steps. We also show results for training a task-specific model for subtask A (accuracy on the development set) with the stand-alone BERT model, a fine-tuned model on the raw data and a fine-tuned model on the cleaned data.
<table>
<thead>
<tr>
<th>System</th>
<th>MLM</th>
<th>NC</th>
<th>task A</th>
</tr>
</thead>
<tbody>
<tr>
<td>not fine-tuned</td>
<td>-</td>
<td>-</td>
<td>0.80</td>
</tr>
<tr>
<td>fine-tuned raw data</td>
<td>0.68</td>
<td>1</td>
<td>0.79</td>
</tr>
<tr>
<td>fine-tuned cleaned data</td>
<td>0.57</td>
<td>0.89</td>
<td>0.84</td>
</tr>
</tbody>
</table>
Table 1: Effect of cleaning the dataset
We capped characters to appear at most three times consecutively. If they appear more often, they would form a subword anyway and we think it is too easy for the model to guess such subwords in longer sequences (consider the sequence "!!!!!<MASKED>!!!!!"). Users in the forum can add specific quotes which are appended to their posts, e.g. one user chose the ending "life’s too short so make the most of it: you only live but once..." which appears 3865 times in the data. We refer to this as "user specific quotes" and removed them as we believe the model would overfit on such quotes during fine-tuning and would not learn useful knowledge about the domain while doing so. Lastly, we believe that there is not much value to be gained in learning urls, phone numbers and emails, and they often get split into a long series of subword units (the vocabulary is managed through byte-pair encoding). We think these reasons combined make the model learn such patterns very well (resulting in a higher accuracy for the BERT tasks for the model trained on the raw data), but it does not gain much transferable knowledge by doing so, resulting in a lower accuracy for subtask A.
3.2 Subtask A
For subtask A, we trained a task-specific BERT classifier from the fine-tuned BERT checkpoint explained above. Fine-tuning such a classifier consists of learning embeddings for a special classification token, let the model compute self-attention over its 12 layers and finally gather the hidden representation of the classification token (the first token in the sequence usually). This hidden representation is fed into one hidden layer and lastly one classification layer. The input to the model is the concatenation of the question’s subject and its body and we regularize the model by applying a dropout of 0.1 on the classification layer. We grid-searched over the proposed hyper-parameter range in the BERT paper (that is initial learning rate, batch-size and number of fine-tuning epochs) (Devlin et al., 2018).
\(^2\)https://en.wikipedia.org/wiki/Lexical_density
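A minimal sketch of such a fine-tuned classifier, written with the HuggingFace `transformers` API rather than the authors' original BERT code (the public `bert-base-uncased` checkpoint stands in for their forum-adapted one, and the maximum length and learning rate below are assumed values within the ranges suggested in the BERT paper):

```python
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

# 3 labels: does the question ask for a factual answer, an opinion, or socializing?
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
optimizer = AdamW(model.parameters(), lr=2e-5)

def train_step(subject, body, label):
    # the input is the concatenation of the question's subject and body
    enc = tokenizer(subject + " " + body, truncation=True, max_length=256,
                    padding="max_length", return_tensors="pt")
    out = model(**enc, labels=torch.tensor([label]))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```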
In Table 2, we report the accuracy on the development set for a number of experiments with different features. RelQBody (the opening post by the thread creator) is the question’s body, RelQSubject the question’s subject (the title of a thread) and RelQCategory its category (the name of the sub-board it has been posted in). We concatenated the different features with whitespaces in between.
<table>
<thead>
<tr>
<th>Feature</th>
<th>acc</th>
</tr>
</thead>
<tbody>
<tr>
<td>RelQBody</td>
<td>0.82</td>
</tr>
<tr>
<td>RelQSubject + RelQBody</td>
<td>0.84</td>
</tr>
<tr>
<td>RelQCategory + RelQSubject + RelQBody</td>
<td>0.83</td>
</tr>
</tbody>
</table>
Table 2: Accuracy for different features for subtask A
Using only the question’s body results in slightly worse results than the concatenated subject and body. We also tried to add the category, that is the name of the sub-forum a question has been posted in. The rationale here is that one sub-board on qatarliving is called “Socialising” and we thought it might give the model a cue that questions there are more prone to be of the class socializing. However, we get slightly worse results by including it. Our final hand-in eventually consists of an ensemble of 5 models (the voting strategy is majority voting) which are trained on the concatenation of the subject and the body of a question.
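The majority voting over the five models amounts to no more than the following sketch:

```python
from collections import Counter

def majority_vote(labels):
    """labels: one predicted class per model of the ensemble."""
    return Counter(labels).most_common(1)[0][0]

print(majority_vote(["opinion", "factual", "factual", "socializing", "factual"]))
# -> "factual"
```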
Our system ranked fifth with an accuracy of 82% on the test set.
3.3 Subtask B: Overview
As we described earlier, we decided to tackle subtask B as a series of different tasks and for each, we trained different models:
1. decide whether a comment is factual or non-factual
2. retrieve related threads (based on the question of a thread)
3. filter for relevant comments in related threads
4. train a textual entailment system, that is whether the evidence entails a claim or not
For the first step, we have fine-tuned a BERT checkpoint on the SQuAD question answering corpus (Rajpurkar et al., 2016). If a comment contains the answer to a question, we consider it as factual and have to check its veracity in a further step. If the answer to a question can not be found in the comment, we label it as non-factual. If the answer can be found in a comment, i.e. we have a factual comment, we continue with steps 2-4.
For the second step, we search for intra-forum evidence in the qatar living forum dump (Nakov et al., 2016). We concatenate the subject and body of each thread. We lowercase all the tokens, remove all characters except the letters a-z and use the snowball stemmer (Porter, 2001) for stemming the tokens. Afterwards, we search for the most similar threads using TF-IDF and keep the five most similar threads.
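A sketch of this retrieval step is given below. It uses NLTK's snowball stemmer and scikit-learn's TF-IDF vectoriser for brevity, which is an assumption about tooling rather than the authors' exact setup, but it follows the same normalisation and ranking.

```python
import re
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

stemmer = SnowballStemmer("english")

def normalise(text):
    # lowercase, keep only the letters a-z, stem every token
    tokens = re.sub(r"[^a-z]+", " ", text.lower()).split()
    return " ".join(stemmer.stem(t) for t in tokens)

def top_similar_threads(query, threads, k=5):
    """threads: 'subject + body' strings of all threads in the forum dump."""
    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(normalise(t) for t in threads)
    q = vec.transform([normalise(query)])
    scores = linear_kernel(q, doc_matrix).ravel()    # similarity to every thread
    return scores.argsort()[::-1][:k]                # indices of the k most similar
```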
We also manually evaluated whether gigablast and the duckduckgo API would yield useful evidence, but after having checked 15 sampled questions from the development set manually, we decided to not pursue this any further. First of all, if we just use the question’s subject concatenated with its body as the query for the search engine, it would not be precise and most such queries would not return relevant web pages. One has to summarize this large text of the question automatically into a query suitable for a web search engine. We manually created search-engine searchable queries for the 15 sampled questions and found that only two of such queries returned relevant results. This may be because there is less information available on the internet for queries regarding living in Qatar except for the forum qatarliving.com itself. Hence, we decided to let go of the idea of using publicly available web search engines with automatically summarized questions for this task.
For the third step, we trained a BERT model on the concatenation of the SemEval2016 task 3 subtask A and subtask C data to filter the intra-forum evidence. The input to the model is the original question (the one we want to fact-check comments for) and the found replies in the most similar threads. The output is whether a comment answers that question in a relevant way (yes or no). For the test set for task B, we found 642 comments via the TF-IDF search engine and after filtering the comments, we are left with 162 comments as evidence (24% of these 642 comments).
For the fourth and last step, we also used a BERT model. This model should predict the veracity of a comment given the retrieved evidence in steps two and three. However, given the small size of the training set for subtask B (135 false and 166 true comments), we did not manage to find a suitable hyper-parameter configuration which would yield a model with decent performance on the development set.
---
3https://en.wikipedia.org/wiki/Textual_entailment
4https://radimrehurek.com/gensim/
5https://www.gigablast.com/
6https://duckduckgo.com/api
### 3.4 Subtask B: Textual Entailment Model
While looking at the forum online, we noticed that some comments in the forum are associated with ratings (Figure 1). Such ratings can range from 1 to 5 and we found that comments with a rating of 5 tend to answer questions in a true way and comments with a rating of 2 or 3 tend to have not been that helpful (we did not find any comments with a rating of 1).
Hence, we have crawled the threads from the forum dump (Nakov et al., 2016) online so that we get the corresponding ratings. We found that the url of a thread is a combination of the sub-forum a thread has been posted in and its subject (with whitespaces replaced with a "+" and some stopwords removed) and reverse engineered the name of the urls. We ignored the threads for which we couldn’t find the corresponding web page automatically. After having crawled the website for one night (with short pauses after each call to the website), we ended up with 19'000 comments with a rating of 5 and 13'000 comments with a rating of 2 or 3, resulting in a corpus with 32'000 examples. With this corpus, we trained a textual entailment system which predicts whether a comment is associated with a rating of 2-3 or 5 (we left out comments with a rating of 4 and comments without a rating).
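The weak labelling rule derived from the crawled ratings therefore boils down to:

```python
def weak_label(rating):
    """Map a crawled rating to a weak veracity label; None means 'discard'."""
    if rating == 5:
        return "true"       # highly rated replies tend to answer the question truly
    if rating in (2, 3):
        return "false"      # low ratings: most likely not a helpful, true reply
    return None             # rating 4 or no rating: left out of the corpus
```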
We then retrieved intra-forum evidence as described above for all these 32’000 comments, obtaining "question-comment-evidence" triplets, and trained our BERT checkpoint (which was pretrained on the forum dump) on that corpus. Let us assume the question is "Where can I get Potassium Nitrate?", the comment is "Try Metco industrial area. 465 1234” and we retrieve two evidence texts "potassium nitrate are not allowed to buy here in qatar. you have to ask a permission from the police department or to the civil defense...” and "not sure if same as what you want; but i got potassium before from pharmacies...”. We then form two triplets (one for each evidence text) and let the model predict an output for each.
Since the different retrieved evidence for each claim is independent, we thought that it would be a bad idea to just concatenate all the evidence and use that as input to our classifier. We therefore decided to aggregate the outputs of each triplet using the logsumexp function (Eq. 1) which is a smooth version of the max function and allows the model to back-propagate dense gradients (Verga et al., 2018). We think this lets the model also figure out on its own which evidence it should look out for.
\[
\text{scores}(i) = \log \sum_j \exp(A_{ij})
\]
\(A\) is a matrix with two columns (bad rating or good rating) in which we stack the predictions for each "question-comment-evidence” triplet. That is, each row in that matrix is the prediction for a comment with a rating given one evidence comment found in the forum. In comparison to the normal max function (which back-propagates sparse gradients), we learn from each comment-evidence pair and not only from the one with the highest scores.
We trained that model with a batch-size of 8 answers and for each answer, we retrieve 4 evidence comments (resulting in 32 triplets). During test time, we retrieve up to 8 evidence comments, predict results for each triplet and aggregate the predictions for each triplet using the logsumexp function to yield a final classification. In Table 3, we show the results of our two classifiers on the training set of subtask B (because we did not use that set for training at all).
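In PyTorch, the aggregation of Eq. 1 over the stacked per-triplet predictions is a one-liner; the numbers below are only illustrative.

```python
import torch

def aggregate(per_evidence_logits):
    """per_evidence_logits: tensor of shape (n_evidence, 2), one row of
    (bad rating, good rating) scores per question-comment-evidence triplet."""
    return torch.logsumexp(per_evidence_logits, dim=0)   # smooth max over evidence

A = torch.tensor([[0.2, 1.1],
                  [0.5, 0.3],
                  [1.4, 0.1]])
print(aggregate(A))    # one aggregated score per class for the comment
```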
<table>
<thead>
<tr>
<th>class</th>
<th>pr</th>
<th>rc</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td>non-factual</td>
<td>0.43</td>
<td>0.53</td>
<td>0.47</td>
</tr>
<tr>
<td>factual</td>
<td>0.63</td>
<td>0.53</td>
<td>0.58</td>
</tr>
<tr>
<td>factual false</td>
<td>0.36</td>
<td>0.52</td>
<td>0.42</td>
</tr>
<tr>
<td>factual true</td>
<td>0.71</td>
<td>0.56</td>
<td>0.62</td>
</tr>
</tbody>
</table>
Table 3: Results on training set of subtask B
The first two rows show the results of our BERT model trained on the SQuAD corpus. The factual class contains the examples which are true or false. After having performed a manual error analysis for the factual and non-factual class, we conclude that we disagree with some of the annotations in the training corpus. The last two rows show the performance of our classifier trained on ratings on the training set. For the true answers, it performs better than for the false answers (which might be due to a slight imbalance of training examples in our compiled corpus).
3.5 Subtask B: Contrastive Runs
We only handed in contrastive runs for subtask B. The difference to our original hand-in is solely the classifier deciding whether a comment is factual or non-factual. In our first contrastive run, we used the BERT model pre-trained on the concatenation of the SemEval2016 task 3 subtask A and C data (the same we use to filter evidence). For our second run, we used a ranking model to get a similarity score between a question and a comment based on the ratings. We minimized the following loss:
\[
\text{loss} = \sum_i \max\left(0,\ \delta - \cos(q_i, \text{comment}^i_5) + \cos(q_i, \text{comment}^i_0)\right)
\]
where \(i\) indexes a data point from the web-crawled corpus, \(\text{comment}^i_5\) is the vector the model produces for a comment with rating 5, \(\text{comment}^i_0\) is the vector for a comment without a rating, \(q_i\) is the vector for the corresponding question, and \(\delta = 0.1\) is the margin required between the positive and the negative similarity, chosen as a hyper-parameter. All vectors are obtained by max-pooling the hidden states of a BiLSTM encoder over the input text (question/comment). Our assumption is that a comment with rating 5 will be a factual answer in most cases (a noisy labelling). Furthermore, we fine-tuned this model for the answer classification task on the training dataset with the labels 'non-factual' and 'true/false'. Table 4 reports the results of our different runs on the test set.
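A minimal PyTorch sketch of this margin loss (the encoder and batch handling are simplified assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn.functional as F

def margin_ranking_loss(q: torch.Tensor,
                        pos_comment: torch.Tensor,
                        neg_comment: torch.Tensor,
                        delta: float = 0.1) -> torch.Tensor:
    """Hinge loss pushing cos(q, rated-5 comment) above cos(q, unrated comment) by delta.

    q, pos_comment, neg_comment: (batch, dim) vectors, e.g. max-pooled BiLSTM states.
    """
    pos_sim = F.cosine_similarity(q, pos_comment, dim=-1)
    neg_sim = F.cosine_similarity(q, neg_comment, dim=-1)
    return torch.clamp(delta - pos_sim + neg_sim, min=0).sum()

# Hypothetical 128-dimensional encodings for a batch of 8 questions.
q, pos, neg = (torch.randn(8, 128) for _ in range(3))
print(margin_ranking_loss(q, pos, neg))
```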
We also submitted an all-non-factual baseline run, which scored 83% accuracy on the test set. Because the test set is skewed this heavily towards the non-factual class, accuracy does not reflect a model's ability to fact-check comments; further work on this dataset should therefore focus on a different metric rather than accuracy.
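To illustrate the point with a hypothetical label distribution that mirrors the reported skew (this is not the official test set, only an example):

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical distribution: 83 non-factual, 10 factual-true, 7 factual-false comments.
y_true = ["non-factual"] * 83 + ["true"] * 10 + ["false"] * 7
y_pred = ["non-factual"] * 100                    # the all non-factual baseline

print(accuracy_score(y_true, y_pred))             # 0.83 -- looks strong
print(f1_score(y_true, y_pred, average="macro"))  # ~0.30 -- both factual classes get F1 = 0
```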
4 Conclusion
We described our submission for SemEval-2019 Task 8. For subtask A, we fine-tuned a BERT checkpoint pretrained on a cleaned Qatar Living forum dump. For subtask B, we use two classifiers: one decides whether a comment is factual or non-factual; if it is factual, a second classifier predicts the comment's veracity. Given the small size of the training dataset, we crawled qatarliving.com to build a medium-sized, weakly supervised training corpus based on ratings in the forum. To train our model, we searched for intra-forum evidence for every comment and fine-tuned a BERT classifier on each question-comment-evidence triplet. Since the retrieved pieces of evidence are independent of each other, we did not concatenate all the evidence for a question but aggregated the results of the individual triplets with the logsumexp function.
Table 4: Results of different runs for subtask B on test set
<table>
<thead>
<tr>
<th>run</th>
<th>acc</th>
<th>F1</th>
<th>AvgRec</th>
<th>MAP</th>
</tr>
</thead>
<tbody>
<tr>
<td>main</td>
<td>0.72</td>
<td>0.4</td>
<td>0.44</td>
<td>0.27</td>
</tr>
<tr>
<td>1. Contrastive run</td>
<td>0.81</td>
<td>0.48</td>
<td>0.53</td>
<td>0.21</td>
</tr>
<tr>
<td>2. Contrastive run</td>
<td>0.48</td>
<td>0.21</td>
<td>0.31</td>
<td>0.29</td>
</tr>
<tr>
<td>non-factual baseline</td>
<td>0.83</td>
<td>0.28</td>
<td>0.33</td>
<td>0.29</td>
</tr>
</tbody>
</table>
We decided to use this function for aggregation because it back-propagates dense gradients, so the model learns from all comment-evidence pairs and not only from the evidence with the highest score.
5 Acknowledgements
This work was partially supported by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).
References
Encyclopedia of Virtual Communities and Technologies
Subhasish Dasgupta
George Washington University, USA
Intellectual Property Rights in Open Source Software Communities
Chitu Okoli
Concordia University, Canada
Kevin Carillo
Concordia University, Canada
INTRODUCTION
Intellectual property is an old concept, with the first recorded instances of patents (1449) and copyrights (1662) both occurring in England ("Intellectual property", Wikipedia, 2004). The first piece of software was submitted for copyright to the United States Copyright Office in 1961, and was accepted as copyrightable under existing copyright law (Hollaar, 2002).
The open source movement has relied upon controversial intellectual property rights that are rooted in the overall history of software development (Lerner & Tirole, 2002; von Hippel & von Krogh, 2003). By defining specific legal mechanisms and designing various software licenses, the open source phenomenon has successfully proposed an alternative software development model whose approach to the concept of intellectual property is quite different from that taken by traditional proprietary software.
A separate article in this encyclopedia treats open source software communities in general as a type of virtual community. This article takes a historical approach to examining how the intellectual property rights that have protected free/open source software have contributed towards the formation and evolution of virtual communities whose central focus is software projects based on the open source model.
However, things began to change in the early 1980s as computers became more ubiquitous, as physical sizes shrank and prices dropped while computing power simultaneously increased dramatically. Computing-based enterprises and even not-for-profit shops such as MIT's Artificial Intelligence Laboratory (AI Lab) began to realize the commercial value of software, and they started to enforce their copyrights and to restrict sharing of software code strictly to their own organizations. Richard Stallman, a hacker at MIT's AI Lab, opposed these moves to no avail. He finally quit in 1984 in protest against the restrictions on sharing among computer programmers, which he considered inimical to the hacker culture. He founded the Free Software Foundation (FSF) and with legal consultation created the concept of the "copyleft", proclaimed in the GNU Manifesto (FSF, 1985) and legally enshrined in 1989 in the GPL (FSF, 1991).
Copyleft as expressed by the GPL has had a critical effect on shaping the very existence of open source software virtual communities. Open source software uses copyright law to preserve certain freedoms (hence the name, "free software") regarding the creation, modification, and sharing of software. Specifically, all open source software grants users the following key rights:
1. **The right to full access to the source code:** When a computer programmer sees how a piece of software actually works, as specified in the source code, they can fully understand the inner workings and can intelligently modify the software as they deem appropriate.
2. **The right for anyone to run the program for any purpose without restriction:** There are no restrictions against commercial, military, foreign, or any other use, and discrimination against users for any reason is expressly forbidden.
3. **The right to modify the source code:** This includes absorbing the software, in whole or in part, into other pieces of software created by other developers.
4. **The right to distribute both the original software and the modified software:** A key difference between "free software" and "freeware" is that while freeware generally permits and encourages free distribution of the software, it does not permit sale of the distributed software beyond reasonable distribution costs.
5. **The right to know about their open source rights:** The open source license must be prominently displayed and distributed to users, so that they are aware of their rights (including access to the source code).
The GPL, the first legal document to license open source software, grants users and developers these rights with the intention that developers would modify the software and share it with others with similar liberality. This is a distinct concept beyond simple “open source” that the FSF calls “copyleft”. To guarantee this goal, the GPL grants the privileges mentioned above as long as a key condition is observed: The obligation to distribute derivatives under copyleft. Any software modified under the GPL can be redistributed for sale, but it must be licensed under a copyleft license; that is, modified derivative works must also be made available under an open source license. While it does not have to be licensed under the GPL itself, the chosen license may not restrict any of the five rights listed above.
These copyleft terms are critical to the very existence of OSS virtual communities. When Richard Stallman posted his manifesto and invited software developers to join him in his crusade for free software, there was no lack of sympathetic and willing hackers who wanted a return to the days of free sharing. However, there was a grave concern that corporate interests could easily take these programs, add their proprietary extensions, and withdraw the software from public access. With its copyleft mechanism, the GPL guaranteed that any person or corporation who wanted to benefit from the liberal efforts of computer programmers would be legally bound to share their work in the same spirit of camaraderie. Considering the climate in which the free software movement was founded, it is unlikely that the movement could have gotten off the ground without such a radical clarion call to mobilize devoted followers in the first place.
IMPORTANT OPEN SOURCE SOFTWARE LICENSES, AND THEIR EFFECTS ON OPEN SOURCE SOFTWARE COMMUNITY LIFE
As detailed earlier, the GNU GPL was the first open source software license, and with its strong copyleft provisions, it enabled open source software communities to form. One particularly strong feature of the GPL is its requirement that not only must derivatives of licensed software be copylefted (that is, made available under GPL-like terms), but all software programmatically linked together with GPL-licensed software must also be copylefted. This requirement, inspired by the Free Software Foundation's stated goal of eventually ridding the world of proprietary software, has been widely considered excessive. In fact, no other organization has issued such restrictive open source software licenses. However, in spite of its strictness, the GPL remains the most popular license for open source software.
Based largely on the GPL, open source development communities such as SourceForge.net have flourished, protected by open source licenses that permit free creation and sharing of open source software. The most important addition to the GPL camp was Linux, which provided the long-sought kernel for the operating system being built by the GNU Project and which has now been proven to be powerful, fast, efficient, stable, reliable, and scalable (Edwards, 1998).
Loosening Up: Open Source Becomes More Commercial
In the 1990s, largely resulting from the phenomenal success of Linux, many of the organizations who had gradually commercialized their software in the 1970s and 80s came to appreciate the quality and quantity of work that could be done with their software when released to open source communities under the protection of appropriate licensing structures (West & Dedrick, 2001). However, few of these organizations felt comfortable with according rights as broad-sweeping as the GPL, and so gradually a wide variety of licenses were developed as various large software developers, both commercial and academic, began to experiment with releasing their source code for free development. These licenses avoided imposing the requirement of sharing such software under such rules; that is, they generally permitted developers to make proprietary derivatives from the selected source code they released.
Although the University of California already widely licensed their proprietary version of Unix, the Berkeley Software Distribution (BSD), they re-licensed it with the open source BSD License in the early 1990s ("Berkeley Software Distribution", Wikipedia, 2004). The BSD license gives users the rights to run programs, to view and modify the source, and to distribute their modifications, including for commercial purposes. However, unlike the GPL, the BSD license does not require licensees to release their modifications by copyleft; they are free to make their modifications proprietary. Popular programs that use this license include a number of variations of the BSD Unix operating system, the JGraph graphing tool, and the PostgreSQL database management system.
Similarly, MIT released various programs under the simple MIT license in the same period under terms very similar to the BSD license. Programs that use this license include the X Windows Unix graphical user interface and the BitTorrent file downloading system.
In 1991 the Free Software Foundation released the Library GPL (later renamed the “Lesser GPL”), which retains the requirement of derivatives being copylefted, but without imposing the same restriction on programmatically linked software (FSF, 1999). This permits the distribution of dynamically linked libraries that are attached to large pieces of software; in particular, the Lesser GPL permits proprietary software to use open source software modules without having to be entirely released as open source.
Open Source Becomes as Competitive as Commercial Products
The communities formed using these later licenses are different from those that typically use GNU licenses in a few ways. Generally speaking, they do not espouse the free software philosophy as radically as the FSF. The software development under these licenses is generally carried out in smaller, focused projects, and the resulting products are often eventually made proprietary by the single person or company that started the project. Many of these projects can hardly be called “communities”, and many of them are not primarily virtual—that is, connected via telecommunications.
However, the success of commercialized open source software demonstrated that this model of software development can create valuable products that can then become proprietary, to the benefit of the corporate founder (Ousterhout, 1999). In 1998, Netscape Corporation released the source code of its ailing Communicator Internet client suite for open source development under the Mozilla project. Mozilla was released under the Netscape Public License (NPL) and the Mozilla Public License (MPL) (“Mozilla,” Wikipedia, 2004). These licenses attempted to include a copyleft provision that required modifiers to distribute derivative works under similar licenses, but the copyleft specifications were light in the sense that programmers could use packaging loopholes to distribute proprietary extensions along with NPL/MPL code.
The release of Netscape code was a milestone in that it was the first major attempt for a large corporation to license their core code as free software, with the strategic intention of incorporating the improvements into their commercial products. The development of the Mozilla project was very gradual until 2003 when AOL, who had bought Netscape after its change in strategy in 1998, scaled back their support for the project. However, this led to the formation of the independent Mozilla Foundation that rallied support and steam for the project. In November 2004, Mozilla released the first official version of their Firefox Web browser, widely acclaimed as matching or even superseding the quality of Microsoft’s Internet Explorer, the market dominator at that time.
Apple Computer surprisingly followed this model in 2000 when they released the kernel of their Unix-based operating system to the open source community as Darwin 1.0 (“Apple Darwin”, Wikipedia, 2004). The original Apple Public Source License (APSL) under which it was released was similar to the Netscape Public License in that it reserved proprietary rights for Apple. In 2002 they helped form the OpenDarwin community, which develops Apple Darwin, the kernel of Apple’s flagship Mac OS X. Apple eventually revised the APSL to be fully copyleft, such that it has been approved by the Free Software Foundation, though it is not as strong as the GPL.
The communities formed around these licenses (Netscape/Mozilla and Darwin) are remarkably different from other OSS communities in that they are dedicated to developing products in parallel with commercial products. Netscape Communicator was almost dying, but is being revived by the success of the Mozilla Project. Similarly, the Apple Mac OS X operating system is flourishing as a result of the partnership between commercial sponsors and the supporting OSS communities which has given the community members the opportunity to create high-quality software (Mishra, Prasad, & Raghunathan, 2002; Stamelos, Angelis, Oikonomou, & Bleris, 2002). The different OSS licenses that have been introduced so far and their specific assumptions are presented in Table 1.
The “Open” Concept Extends Beyond Software
There is much more to open source software than just a technical phenomenon and an alternate software development methodology. Open source software projects are virtual communities in which people interact to achieve a common goal (Chengalur-Smith & Sidorova, 2003; Diker & Scholl, 2001; K. Lakhani & von Hippel, 2003; Ljungberg, 2000). Such communities have power structures, community norms, values, and traditions (Bergquist & Ljungberg, 2001). Most open source software communities are somewhat narrow in their scope of contributors, being primarily white males around 30 years old (Ghosh, Glott, Krieger, & Robles, 2002; Hars & Ou, 2001; K. R. Lakhani, Wolf, Bates, & DiBona, 2002).
Table 1. Major open source software licenses and their associated assumptions
<table>
<thead>
<tr>
<th>Name</th>
<th>Creation Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>GNU GPL</td>
<td>1984</td>
</tr>
<tr>
<td>GNU Free Documentation License</td>
<td>1991</td>
</tr>
<tr>
<td>GNU Lesser GPL</td>
<td>1990</td>
</tr>
<tr>
<td>MIT License</td>
<td>early 1990s</td>
</tr>
<tr>
<td>Berkeley Software Distribution License</td>
<td>1998</td>
</tr>
<tr>
<td>Netscape/Mozilla Public License</td>
<td>1999</td>
</tr>
<tr>
<td>Apple Public Source License</td>
<td>2000</td>
</tr>
</tbody>
</table>
This corresponds closely to the primary demographic of skilled computer programmers. Nonetheless, the open source approach has inadvertently provided concepts that are not restricted to software. In 2000, the FSF created the Free Documentation License, which was designed to license the text documentation that accompanies free software under terms similar to the GPL, only adapted for text content. This new type of license created a legal instrument for the existence of the textual counterpart of open source software, sometimes called "open source" or "open content" text. The primary exponent of this model has been Wikipedia, the free encyclopedia ("free" in the same sense as free software—www.wikipedia.org). The encyclopedia is maintained by a community of over 100,000 contributors—far larger than any open source software community—because anyone is permitted to contribute to articles.
A comparable phenomenon is being created by the Creative Commons (CC, www.creativecommons.org), a resource that creates licenses on demand for literary, audio, and video works—the more traditional media for copyright licensing (CC refers people to the GNU GPL for software licensing). Created in 2002, CC lets creators of content choose among several rights patterns they want to give users of their works; the community aspect arises from their hosting a directory of CC-licensed works on the Internet. Even though the CC community is not very cohesive as a virtual community, it does provide a legal vehicle by which non-software virtual communities that create shared content could license their works.
CONCLUSION
With the history outlined here, open source software communities have established themselves as an important type of virtual community, secured by the legal framework of open source licenses. To date, out of thousands of software works licensed under the GPL since 1984, we are aware of only one challenge to its legality. As of December 2004, the SCO Group, who currently owns the copyrights to the Unix operating system, is suing IBM for copyright violations on its distribution of Unix-based operating systems, including Linux (“SCO-Linux Controversies”, Wikipedia, 2004). Among other allegations, SCO claims that IBM’s distribution of Linux violates their copyright because, they claim, the GPL is invalid. This first legal test of the GPL is widely considered frivolous, but as of the writing of this article, this case is still being tried in the United States. If SCO should win and then require Linux distributors to obtain special licenses from them, such a development could seriously hinder the development of this operating system. However, if SCO should lose, as most observers expect, this legal test could serve to boost and incontrovertibly establish the place of this type of virtual community with its important role in the software industry.
REFERENCES
**KEY TERMS**
**Copyleft**: A non-exclusive, publicly-accorded legal license backed by copyright law that permits derivative works from the copyright holder’s licensed works, on the condition that licensees relicense their works to the public under a similarly liberal copyleft.
**Copyright**: The exclusive right given to the creator of an intellectual work of text, audio, video, or software, to restrict and control how their work and its derivatives are distributed or how they are exploited for financial or other benefit.
**Free Software**: An earlier name for open source software, emphasizing the liberties given to end users and developers of derivative works. Particularly used for copylefted open source software. There is no requirement that the software be distributed at no charge; thus, distinct from freeware.
**GNU General Public License**: The first and still the most radical open source software license, created for the GNU Project. Requires that all derivative works be equally free (in the open source sense); that is, all derivative works must provide the full source code and must permit free use, modification, and redistribution.
**GNU Project**: (Stands for "GNU's Not Unix") Established by Richard Stallman in 1983 under the auspices of the Free Software Foundation. Its goal was, and still is, to create an open source Unix-based operating system. This goal was realized in 1991 by Linus Torvalds' creation of Linux.
**Intellectual Property Rights**: Exclusive rights accorded by a state to legal persons based on intangible knowledge, permitting them to control how the knowledge is distributed or exploited for financial or other benefit. Consists of copyrights, patents, trademarks, and trade secrets.
**Linux**: A Unix-based open source operating system designed for Intel-based microcomputers. The kernel was created in 1991 by Linus Torvalds, and it was added on to the GNU Project to form what is properly called the GNU/Linux operating system.
**Mozilla Project**: A project formed in 1998 when Netscape released its Internet tools suite for open source development. Released its flagship Firefox Web browser and Thunderbird e-mail client in late 2004. The Sunbird calendar component is currently under development.
**Open Source Software**: Software whose source code is liberally made available for use, modification, creation of derivative works, and redistribution. Not necessarily copylefted.
The KCachegrind Handbook
Original author of the documentation: Josef Weidendorfer
Updates and corrections: Federico Zenith
## Contents

1 Introduction
  1.1 Profiling
  1.2 Profiling Methods
  1.3 Profiling Tools
  1.4 Visualization
2 Using KCachegrind
  2.1 Generate Data to Visualize
    2.1.1 Callgrind
    2.1.2 OProfile
  2.2 User Interface Basics
3 Basic Concepts
  3.1 The Data Model for Profile Data
    3.1.1 Cost Entities
    3.1.2 Event Types
  3.2 Visualization State
  3.3 Parts of the GUI
    3.3.1 Sidedocks
    3.3.2 View Area
    3.3.3 Areas of a Tab
    3.3.4 Synchronized View with Selected Entity in a Tab
    3.3.5 Synchronization between Tabs
    3.3.6 Layouts
  3.4 Sidedocks
    3.4.1 Flat Profile
    3.4.2 Parts Overview
    3.4.3 Call Stack
  3.5 Views
    3.5.1 Event Type
    3.5.2 Call Lists
    3.5.3 Maps
    3.5.4 Call Graph
    3.5.5 Annotations
Abstract
KCachegrind is a profile data visualization tool, written using KDE Frameworks 5.
Chapter 1
Introduction
KCachegrind is a browser for data produced by profiling tools. This chapter explains what profiling is for, how it is done, and gives some examples of profiling tools available.
1.1 Profiling
When developing a program, one of the last steps often involves performance optimization. Because it is a waste of time to optimize rarely used functions, one needs to know in which parts of a program most of the time is spent.
For sequential code, it is usually enough to collect statistical data about the program's runtime characteristics, such as the time spent in functions and code lines. This is called Profiling. The program is run under the control of a profiling tool, which gives a summary of an execution run at the end. In contrast, performance problems in parallel code are typically caused by one processor waiting for data from another. As this waiting time usually cannot easily be attributed, it is better there to generate timestamped event traces. KCachegrind cannot visualize this kind of data.
After analyzing the produced profile data, it should be easy to see the hot spots and bottlenecks of the code: for example, assumptions about call counts can be checked, and identified code regions can be optimized. Afterwards, the success of the optimization should be verified with another profile run.
1.2 Profiling Methods
To exactly measure the time passed or record the events happening during the execution of a code region (e.g. a function), additional measurement code needs to be inserted before and after the given region. This code reads the time, or a global event count, and calculates differences. Thus, the original code has to be changed before execution. This is called instrumentation. Instrumentation can be done by the programmer, by the compiler, or by the runtime system.
As interesting regions usually are nested, the overhead of measurement always influences the measurement itself. Thus, instrumentation should be done selectively and results have to be interpreted with care. Of course, this makes performance analysis by exact measurement a very complex process.
Exact measurement is possible because of hardware counters (including counters that increment on a time tick) provided in modern processors, which are incremented whenever an event happens. As we want to attribute events to code regions, without these counters we would have to handle every event by incrementing a counter for the current code region ourselves. Doing this in software is, of course, not possible; but, on the assumption that the event distribution over
source code is similar when looking only at every n-th event instead of every event, a measurement method whose overhead is tunable has been developed: it is called Sampling. Time Based Sampling (TBS) uses a timer to regularly look at the program counter to create a histogram over the program code. Event Based Sampling (EBS) exploits the hardware counters of modern processors, and uses a mode where an interrupt handler is called on counter underflow to generate a histogram of the corresponding event distribution: in the handler, the event counter is always reinitialized to the n of the sampling method. The advantage of sampling is that the code does not have to be changed, but it is still a compromise: the above assumption will be more correct if n is small, but the smaller the n, the higher the overhead of the interrupt handler.
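A toy illustration of this sampling assumption (the code regions and their weights below are invented for the example):

```python
import random
from collections import Counter

random.seed(0)
# Invented event stream: each event is attributed to one of three code regions.
events = random.choices(["region_A", "region_B", "region_C"],
                        weights=[70, 25, 5], k=100_000)

exact = Counter(events)         # handling every event (too expensive in practice)
n = 100
sampled = Counter(events[::n])  # looking only at every n-th event

for region in sorted(exact):
    print(region, exact[region], sampled[region] * n)  # sampled counts scaled back up
```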
Another measurement method is to simulate things happening in the computer system when executing a given code, i.e. execution driven simulation. The simulation is always derived from a more or less accurate machine model; however, with very detailed machine models, giving very close approximations to reality, the simulation time can be unacceptably high in practice. The advantage of simulation is that arbitrarily complex measurement/simulation code can be inserted in a given code without perturbing results. Doing this directly before execution (called runtime instrumentation), using the original binary, is very comfortable for the user: no re-compilation is necessary. Simulation becomes usable when simulating only parts of a machine with a simple model; another advantage is that the results produced by simple models are often easier to understand: often, the problem with real hardware is that results include overlapping effects from different parts of the machine.
1.3 Profiling Tools
Most known is the GCC profiling tool gprof: one needs to compile the program with option -pg; running the program generates a file gmon.out, which can be transformed into human-readable form with gprof. One disadvantage is the required re-compilation step to prepare the executable, which has to be statically linked. The method used here is compiler-generated instrumentation, which measures call arcs happening among functions and corresponding call counts, in conjunction with TBS, which gives a histogram of time distribution over the code. Using both pieces of information, it is possible to heuristically calculate inclusive time of functions, i.e. time spent in a function together with all functions called from it.
For exact measurement of events happening, libraries exist with functions able to read out hardware performance counters. Most known here is the PerfCtr patch for Linux®, and the architecture independent libraries PAPI and PCL. Still, exact measurement needs instrumentation of code, as stated above. Either one uses the libraries itself or uses automatic instrumentation systems like ADAPTOR (for FORTRAN source instrumentation) or DynaProf (code injection via DynInst).
OProfile is a system-wide profiling tool for Linux® using Sampling.
In many aspects, a comfortable way of Profiling is using Cachegrind or Callgrind, which are simulators using the runtime instrumentation framework Valgrind. Because there is no need to access hardware counters (often difficult with today’s Linux® installations), and binaries to be profiled can be left unmodified, it is a good alternative to other profiling tools. The disadvantage of simulation - slowdown - can be reduced by doing the simulation on only the interesting program parts, and perhaps only on a few iterations of a loop. Without measurement/simulation instrumentation, Valgrind’s usage only has a slowdown factor in the range of 3 to 5. Also, when only the call graph and call counts are of interest, the cache simulator can be switched off.
Cache simulation is the first step in approximating real times, since on modern systems runtime is very sensitive to the exploitation of so-called caches, small and fast buffers which accelerate repeated accesses to the same main memory cells. Cachegrind does cache simulation by catching memory accesses. The data produced includes the number of instruction/data memory accesses and first- and second-level cache misses, and relates it to source lines and functions of the run program. By combining these miss counts with miss latencies of typical processors, an estimation of the time spent can be given.
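As a rough illustration of such a time estimate (the miss penalties below are illustrative ballpark values, not KCachegrind's exact cost model):

```python
def estimated_cycles(instructions, l1_misses, l2_misses,
                     l1_penalty=10, l2_penalty=100):
    """Estimate spent cycles from simulated event counts: one cycle per
    executed instruction plus a fixed penalty per cache miss."""
    return instructions + l1_penalty * l1_misses + l2_penalty * l2_misses

# Hypothetical counts for one function as reported by a cache simulator.
print(estimated_cycles(instructions=1_000_000, l1_misses=20_000, l2_misses=1_000))
# -> 1300000 estimated cycles
```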
Callgrind is an extension of Cachegrind that builds up the call graph of a program on the fly, i.e. how the functions call each other and how many events happen while running a function. Also, the profile data to be collected can be separated by threads and call-chain contexts. It can provide profiling data at the instruction level to allow annotation of disassembled code.
1.4 Visualization
Profiling tools typically produce a large amount of data. The wish to easily browse down and up the call graph, together with fast switching of the sorting mode of functions and display of different event types, motivates a GUI application to accomplish this task.
KCachegrind is a visualization tool for profile data fulfilling these wishes. Despite being programmed first with browsing the data from Cachegrind and Calltree in mind, there are converters available to be able to display profile data produced by other tools. In the appendix, a description of the Cachegrind/Callgrind file format is given.
Besides a list of functions sorted according to exclusive or inclusive cost metrics, and optionally grouped by source file, shared library, or C++ class, KCachegrind features various views for a selected function, namely:
- a call-graph view, which shows a section of the call graph around the selected function,
- a tree-map view, which allows nested-call relations to be visualized, together with inclusive cost metric for fast visual detection of problematic functions,
- source code and disassembler annotation views, allowing to see details of cost related to source lines and assembler instructions.
Chapter 2
Using KCachegrind
2.1 Generate Data to Visualize
First, one wants to generate performance data by measuring aspects of the runtime characteristics of an application, using a profiling tool. KCachegrind itself does not include any profiling tool, but it works well together with Callgrind and, via a converter, can also be used to visualize data produced with OProfile. Although the scope of this manual is not to document profiling with these tools, the next section provides short quickstart tutorials to get you started.
2.1.1 Callgrind
Callgrind is a part of Valgrind. Note that it previously was called Calltree, but that name was misleading.
The most common use is to prefix the command line to start your application with valgrind --tool=callgrind, as in:
```
valgrind --tool=callgrind myprogram myargs
```
At program termination, a file callgrind.out.<pid> (where <pid> is the process ID) will be generated, which can be loaded into KCachegrind.
More advanced use is to dump out profile data whenever a given function of your application is called. E.g. for Konqueror, to see profile data only for the rendering of a Web page, you could decide to dump the data whenever you select the menu item View → Reload. This corresponds to a call to KonqMainWindow::slotReload. Use:
```
valgrind --tool=callgrind --dump-before=KonqMainWindow::slotReload konqueror
```
This will produce multiple profile data files with an additional sequential number at the end of the filename. A file without such a number at the end (ending only in the process PID) will also be produced; by loading this file into KCachegrind, all the others are loaded too and can be seen in the Parts Overview and the Parts list.
2.1.2 OProfile
OProfile is available from its home page. Follow the installation instructions on the Web site but, before you do, check whether your distribution already provides it as a package (as SUSE does, for example).
System-wide profiling is only permitted to the root user, as all actions on the system can be observed; therefore, the following has to be done as root. First, configure the profiling process, using the GUI `oprof_start` or the command-line tool `opcontrol`. Standard configuration should be timer mode (TBS, see introduction). To start the measurement, run `opcontrol -s`. Then run the application you are interested in and, afterwards, do a `opcontrol -d`. This will write out the measurement results into files under folder `/var/lib/oprofile/samples/`. To be able to visualize the data in KCachegrind, do in an empty directory:
```
opreport -gdf | op2callgrind
```
This will produce a lot of files, one for every program which was running on the system. Each one can be loaded into KCachegrind on its own.
### 2.2 User Interface Basics
When starting KCachegrind with a profile data file as argument, or after loading one with `File → Open`, you will see a navigation panel containing the function list at the left; and, on the right the main part, an area with views for a selected function. This view area can be arbitrarily configured to show multiple views at once.
At first start, this area will be divided into a top and a bottom part, each with different tab-selectable views. To move views, use the tabs’ context menu, and adjust the splitters between views. To switch quickly between different viewing layouts, use `View → Layout → Go to Next` (`Ctrl+→`) and `View → Layout → Go to Previous` (`Ctrl+←`).
The active event type is important for visualization: for Callgrind, this is, for example, cache misses or cycle estimation; for OProfile, this is 'Timer' in the simplest case. You can change the event type via a combobox in the toolbar or in the `Event Type` view. A first overview of the runtime characteristics should be given when you select function `main` in the left list; look then at the call graph view. There, you see the calls occurring in your program. Note that the call graph view only shows functions with high event count. By double-clicking a function in the graph, it will change to show the called functions around the selected one.
To explore the GUI further, in addition to this manual, also have a look at the documentation section on the Web site. Also, every widget in KCachegrind has ‘What’s this’ help.
Chapter 3
Basic Concepts
This chapter explains some concepts of the KCachegrind, and introduces terms used in the interface.
3.1 The Data Model for Profile Data
3.1.1 Cost Entities
Cost counts of event types (like L2 Misses) are attributed to cost entities, which are items with relationship to source code or data structures of a given program. Cost entities not only can be simple code or data positions, but also position tuples. For example, a call has a source and a target, or a data address can have a data type and a code position where its allocation happened.
The cost entities known to KCachegrind are given in the following. Simple Positions:
Instruction
An assembler instruction at a specified address.
Source Line of a Function
All instructions that the compiler (via debug information) maps to a given source line specified by source file name and line number, and which are executed in the context of some function. The latter is needed because a source line inside of an inlined function can appear in the context of multiple functions. Instructions without any mapping to an actual source line are mapped to line number 0 in file ???.
Function
All source lines of a given function make up the function itself. A function is specified by its name and its location in some binary object if available. The latter is needed because binary objects of a single program each can hold functions with the same name (these can be accessed e.g. with dlopen or dlsym; the runtime linker resolves functions in a given search order of binary objects used). If a profiling tool cannot detect the symbol name of a function, e.g. because debug information is not available, either the address of the first executed instruction typically is used, or ???.
Binary Object
All functions whose code is inside the range of a given binary object, either the main executable or a shared library.
Source File
All functions whose first instruction is mapped to a line of the given source file.
Class
Symbol names of functions typically are hierarchically ordered in name spaces, e.g. C++ namespaces, or classes of object-oriented languages; thus, a class can hold functions of the class or embedded classes itself.
Profile Part
Some time section of a profile run, with a given thread ID, process ID, and command line executed.
As can be seen from the list, a set of cost entities often defines another cost entity; thus, there is an inclusion hierarchy of cost entities.
Positions tuples:
- Call from instruction address to target function.
- Call from source line to target function.
- Call from source function to target function.
- (Un)conditional jump from source to target instruction.
- (Un)conditional jump from source to target line.
Jumps between functions are not allowed, as this makes no sense in a call graph; thus, constructs like exception handling and long jumps in C have to be translated to popping the call stack as needed.
3.1.2 Event Types
Arbitrary event types can be specified in the profile data by giving them a name. Their cost related to a cost entity is a 64-bit integer.
Event types whose costs are specified in a profile data file are called real events. Additionally, one can specify formulas for event types calculated from real events, which are called inherited events.
3.2 Visualization State
The visualization state of a KCachegrind window includes:
- the primary and secondary event type chosen for display,
- the function grouping (used in the Function Profile list and entity coloring),
- the profile parts whose costs are to be included in visualization,
- an active cost entity (e.g. a function selected from the function profile sidedock),
- a selected cost entity.
This state influences the views.
Views are always shown for one cost entity, the active one. When a given view is inappropriate for a cost entity, it is disabled: when selecting e.g. an ELF object in the group list, source annotation makes no sense.
For example, for an active function, the callee list shows all the functions called from the active one: one can select one of these functions without making it active. Also, if the call graph is shown beside, it will automatically select the same function.
3.3 Parts of the GUI
3.3.1 Sidedocks
Sidedocks are side windows which can be placed at any border of a KCachegrind window. They always contain a list of cost entities sorted in some way.
- The **Function Profile** is a list of functions showing inclusive and exclusive cost, call count, name and position of functions.
- **Parts Overview**
- **Call Stack**
3.3.2 View Area
The view area, typically the right part of a KCachegrind main window, is made up of one (default) or more tabs, lined up either horizontally or vertically. Each tab holds different views of only one cost entity at a time. The name of this entity is given at the top of the tab. If there are multiple tabs, only one is active. The entity name in the active tab is shown in bold, and determines the active cost entity of the KCachegrind window.
3.3.3 Areas of a Tab
Each tab can hold up to four view areas, namely Top, Right, Left, and Bottom. Each area can hold multiple stacked views. The visible part of an area is selected by a tab bar. The tab bars of the top and right area are at the top; the tab bars of the left and bottom area are at the bottom. You can specify which kind of view should go into which area by using the tabs’ context menus.
3.3.4 Synchronized View with Selected Entity in a Tab
Besides an active entity, each tab has a selected entity. As most view types show multiple entities with the active one somehow centered, you can change the selected item by navigating inside a view (by clicking with the mouse or using the keyboard). Typically, selected items are shown in a highlighted state. By changing the selected entity in one of the views of a tab, all other views highlight the new selected entity accordingly.
3.3.5 Synchronization between Tabs
If there are multiple tabs, a selection change in one tab leads to an activation change in the next tab, be it right of the former or under it. This kind of linkage should, for example, allow for fast browsing in call graphs.
3.3.6 Layouts
The layout of all the tabs of a window can be saved (**View → Layout**). After duplicating the current layout (**View → Layout → Duplicate**) and changing some sizes or moving a view to another area of a tab, you can quickly switch between the old and the new layout via the corresponding next/previous layout shortcuts. The set of layouts will be stored between KCachegrind sessions of the same profiled command. You can make the current set of layouts the default one for new KCachegrind sessions, or restore the default layout set.
3.4 Sidedocks
3.4.1 Flat Profile
The **Flat Profile** contains a group list and a function list. The group list contains all groups where cost is spent in, depending on the chosen group type. The group list is hidden when grouping is switched off.
The function list contains the functions of the selected group (or all functions if grouping is switched off), ordered by some column, e.g. inclusive or self costs spent therein. There is a maximum number of functions shown in the list, configurable in **Settings → Configure KCachegrind**.
3.4.2 Parts Overview
In a profile run, multiple profile data files can be produced, which can be loaded together into KCachegrind. The **Parts Overview** sidedock shows these, ordered horizontally according to creation time; the rectangle sizes are proportional to the cost spent in each part. You can select one or several parts to constrain the costs shown in the other KCachegrind views to these parts only.
The parts can be further subdivided, either in a partitioning mode or in an inclusive cost split mode:
**Partitioning Mode**
The partitioning is shown in groups for a profile data part, according to the group type selected. For example, if ELF object groups are selected, you see colored rectangles for each used ELF object (shared library or executable), sized according to the cost spent therein.
**Diagram Mode**
A rectangle showing the inclusive cost of the current active function in the part is shown. This, again, is split up to show the inclusive costs of its callees.
3.4.3 Call Stack
This is a purely fictional ‘most probable’ call stack. It is built by starting with the current active function and repeatedly adding the caller with the highest cost at the top and the callee with the highest cost at the bottom.
The **Cost** and **Calls** columns show the cost used for all calls from the function in the line above.
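The greedy construction described above can be sketched as follows. This is an assumed reading of the description, not KCachegrind's actual code; the caller/callee costs are invented.

```python
# Minimal sketch of building a 'most probable' call stack greedily: starting
# from the active function, follow the highest-cost caller upwards and the
# highest-cost callee downwards.
callers = {"compute": [("main", 950), ("selftest", 50)]}        # callee -> [(caller, cost)]
callees = {"compute": [("multiply", 700), ("log_progress", 10)],
           "multiply": [("add", 650)]}                          # caller -> [(callee, cost)]

def most_probable_stack(active):
    stack = [active]
    func = active
    while func in callers:                       # walk up via the most expensive caller
        func = max(callers[func], key=lambda c: c[1])[0]
        stack.insert(0, func)
    func = active
    while func in callees:                       # walk down via the most expensive callee
        func = max(callees[func], key=lambda c: c[1])[0]
        stack.append(func)
    return stack

print(most_probable_stack("compute"))  # ['main', 'compute', 'multiply', 'add']
```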
3.5 Views
3.5.1 Event Type
The **Event Type** list shows all cost types available and the corresponding self and inclusive cost of the current active function for that event type.
By choosing an event type from the list, you change the type of costs shown all over KCachegrind to the selected one.
3.5.2 Call Lists
These lists show calls to and from the current active function. **All Callers** and **All Callees** refer to those functions reachable in the caller and callee direction, even when other functions are in between.
Call list views include:
• Direct Callers
• Direct Callees
• All Callers
• All Callees
3.5.3 Maps
A treemap view of the primary event type, up or down the call hierarchy. Each colored rectangle represents a function; its size is approximately proportional to the cost spent therein while the active function is running (however, there are drawing constraints).
For the Caller Map, the graph shows the nested hierarchy of all callers of the currently activated function; for the Callee Map, it shows that of all callees.
Appearance options can be found in the context menu. To get exact size proportions, choose Skip Incorrect Borders. As this mode can be very time-consuming, you may want to limit the maximum drawn nesting level beforehand. Best determines the split direction for children from the aspect ratio of the parent. Always Best decides on the remaining space for each sibling. Ignore Proportions takes space for function name drawing before drawing children. Note that size proportions can then become heavily wrong.
Keyboard navigation is available with the left and right arrow keys for traversing siblings, and the up and down arrow keys to go a nesting level up and down. Enter activates the current item.
3.5.4 Call Graph
This view shows the call graph around the active function. The cost shown is only the cost spent while the active function was actually running; i.e. the cost shown for main() (if it is visible) should be the same as the cost of the active function, as that is the part of the inclusive cost of main() spent while the active function was running.
For cycles, blue call arrows indicate that this is an artificial call, which never actually happened, added for correct drawing.
If the graph is larger than the drawing area, a bird's eye view is shown on one side. There are view options similar to those of the call maps; the selected function is highlighted.
3.5.5 Annotations
The annotated source or assembler lists show the source lines or disassembled instructions of the current active function together with the (self) cost spent executing the code of a source line or instruction. If there was a call, lines with details on the call are inserted into the source: the (inclusive) cost spent inside of the call, the number of calls happening, and the call destination. Select such a call information line to activate the call destination.
Chapter 4
Command Reference
4.1 The main KCachegrind window
4.1.1 The File Menu
File → New (Ctrl-N)
Opens an empty top-level window in which you can load profile data. This action is not really necessary, as File → Open gives you a new top-level window if the current one already shows some data.
File → Open (Ctrl-O)
Pops up the KDE file selector to choose a profile data file to be loaded. If there is some data already shown in the current top-level window, this will open a new window; if you want to open additional profile data in the current window, use File → Add.
The name of profile data files usually ends in .pid.part-threadID, where part and threadID are optional. pid and part are used for multiple profile data files belonging to one application run. By loading a file ending only in pid, any existing data files for this run with additional endings are loaded as well.
If there exist profile data files cachegrind.out.123 and cachegrind.out.123.1, by loading the first, the second will be automatically loaded too.
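A rough sketch of that loading behaviour in Python (an assumption for illustration, not KCachegrind's implementation):

```python
# Given a base profile data file such as cachegrind.out.123, also pick up
# sibling files with additional ".part[-threadID]" endings from the same run.
import glob

def related_profile_files(base_path):
    # the base file itself plus any files that extend its name with further endings
    return sorted(set(glob.glob(base_path) + glob.glob(base_path + ".*")))

print(related_profile_files("cachegrind.out.123"))
```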
File → Add
Adds a profile data file to the current window. Using this, you can force multiple data files to be loaded into the same top-level window even if they are not from the same run, as given by the profile data file naming convention. For example, this can be used for side-by-side comparison.
File → Reload (F5)
Reload the profile data. This is useful when another profile data file was generated for an already loaded application run.
File → Quit (Ctrl-Q)
Quits KCachegrind
Chapter 5
Questions and Answers
1. **What is KCachegrind for? I have no idea.**
KCachegrind is helpful at a late stage of software development, called profiling. If you don't develop applications, you don't need KCachegrind.
2. **What is the difference between Incl. and Self?**
These are cost attributes for functions regarding some event type. As functions can call each other, it makes sense to distinguish the cost of the function itself (‘Self Cost’) and the cost including all called functions (‘Inclusive Cost’). ‘Self’ is sometimes also referred to as ‘Exclusive’ costs.
So, for example, for `main()`, you will always have an inclusive cost of almost 100%, whereas the self cost is negligible when the real work is done in another function.
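A tiny Python sketch of the difference, using made-up costs on a small call tree:

```python
# Self vs. inclusive cost on a small call tree (illustrative numbers):
# the inclusive cost of a function is its self cost plus the inclusive cost
# of everything it calls.
self_cost = {"main": 5, "parse": 40, "compute": 55}
calls = {"main": ["parse", "compute"], "parse": [], "compute": []}

def inclusive(func):
    return self_cost[func] + sum(inclusive(callee) for callee in calls[func])

total = inclusive("main")
print("main self:", self_cost["main"], "inclusive:", total)   # 5 vs 100
print("parse inclusive:", inclusive("parse"))                 # 40
```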
3. **If I double-click on a function down in the Call Graph view, it shows for function `main()` the same cost as the selected function. Isn’t this supposed to be constant at 100%?**
You have activated a function below `main()`, which obviously costs less than `main()` itself. For every function, only the part of its cost that was spent while the activated function was running is shown; that is, the cost shown for any function can never be higher than the cost of the activated function.
Chapter 6
Glossary
Cost Entity
An abstract item related to source code to which event counts can be attributed. Dimensions for cost entities are code location (e.g. source line, function), data location (e.g. accessed data type, data object), execution location (e.g. thread, process), and tuples or triples of the aforementioned positions (e.g. calls, object access from statement, evicted data from cache).
Event Costs
Sum of events of some event type occurring while the execution is related to some cost entity. The cost is attributed to the entity.
Event Type
The kind of event of which costs can be attributed to a cost entity. There are real event types and inherited event types.
Inherited Event Type
A virtual event type only visible in the view, defined by a formula to be calculated from real event types.
Profile Data File
A file containing data measured in a profile experiment, or part of one, or produced by post-processing a trace. Its size is typically linear with the code size of the program.
Profile Data Part
Data from a profile data file.
Profile Experiment
A program run supervised by a profiling tool, producing possibly multiple profile data files from parts or threads of the run.
Profile Project
A configuration for profile experiments used for one program to profile, perhaps in multiple versions. Comparisons of profile data typically only make sense between profile data produced in experiments of one profile project.
Profiling
The process of collecting statistical information about runtime characteristics of program runs.
Real Event Type
An event type that can be measured by a tool. This requires the existence of a sensor for the given event type.
Trace
A sequence of timestamped events that occurred while tracing a program run. Its size is typically linear with the execution time of the program run.
Trace Part
See "Profile Data Part".
Tracing
The process of supervising a program run and storing its events, sorted by a timestamp, in an output file, the trace.
Chapter 7
Credits and License
Thanks to Julian Seward for his excellent Valgrind, and Nicholas Nethercote for the Cachegrind addition. Without these programs, KCachegrind would not exist. Some ideas for this GUI were from them, too.
Thanks for all the bug reports and suggestions from different users.
This documentation is licensed under the terms of the GNU Free Documentation License.
Solr
- Solr in DSpace
- Connecting to Solr
- Bypassing localhost restriction temporarily
- Bypassing localhost restriction permanently
- Instructions specific to Tomcat 7 and newer
- Instructions specific to Tomcat 6 and older
- Accessing Solr
- Solr cores
- Solr admin interface
- Solr queries
- Solr responses
- PHP example
- Examples
- Date of last deposited item
- Top downloaded items by a specific user
- Number of items in a specific community
- Breakdown of submitted items per month
- Statistics breakdown per event type
- Statistics: breakdown of downloads per month
- Statistics: number of downloads (item views), for a specific item per month
- Statistics: number of total downloads in a given time span
- Querying Solr from XMLUI
- Examples
- "AND" search as default
- Deleting Solr index data
- Solr delete query
- Manually delete Solr index files
- Set up Solritis (VelocityResponseWriter)
- Guidepost
Solr in DSpace
DSpace uses Solr as part of Discovery as an index to speed up access to content metadata and to data about access to DSpace (for statistics). It also provides faceting, search results filtering and, in newer versions of DSpace, also hit highlighting and "More like this". If Discovery is enabled, the DSpace search field accepts Solr search syntax.
Discovery is an optional part of DSpace since 1.7 (with big improvements and configuration format changes in 1.8). When enabled, Discovery replaces DSpace Search and Browse and provides Solr-based statistics. Since DSpace 3, it is also the default storage for the DSpace OAI-PMH provider (server) responses.
Do I need to read this page?
To gain the benefits of faceting and filtering in XMLUI, all you need to do is enable Discovery. The rest of this page describes some advanced uses of Solr, useful if you want to query Solr directly for theme customization or read DSpace metadata from outside DSpace.
Please note that to get data from Solr, you don't technically need to enable the Discovery aspect, but you do need to populate the index. The statistics core is populated automatically in DSpace 1.6+. To populate the search core (DSpace 1.7+), you need to run `bin/dspace index-discovery` (you will probably want to schedule it in cron to run periodically, too). In DSpace versions older than 4.x, the command was called `bin/dspace update-discovery-index`. There should be no reason to access the oai core (DSpace 3.0), because it contains the same information as the search core, but if you want to populate it, run `bin/dspace oai import`.
Connecting to Solr
By default, the DSpace Solr server is configured to listen only on localhost, port 8080 (unless you specified another port in Tomcat configuration and the `dspace/config/modules/discovery.cfg` config file). That means that you cannot connect from another machine to the dspace server port 8080 and request a Solr URL - you'll get a HTTP 403 error. This configuration was done for security considerations - Solr index contains some data that is not accessible via public DSpace interfaces and some of the data might be sensitive.
Bypassing localhost restriction temporarily
While you could make Solr publicly accessible by changing this default configuration, this is not recommended, because Solr indexes may contain some data you might consider private. Instead, use one of following simple means to bypass this restriction temporarily. All of them will make Solr accessible only to the machine you're connecting from for as long as the connection is open.
1. **OpenSSH client - port forwarding**
- connect to DSpace server and forward its port 8080 to localhost (machine we're connecting from) port 1234
```sh
ssh -L 1234:127.0.0.1:8080 mydspace.edu
```
makes mydspace.edu:8080 accessible via localhost:1234 (type `http://localhost:1234` in browser address bar); also opens ssh shell
exit ssh to terminate port forwarding
Alternatively:
```sh
ssh -N -f -L 1234:127.0.0.1:8080 mydspace.edu
```
run with `-N` and `-f` flags if you want ssh to go to background
kill the ssh process to terminate port forwarding
2. **PuTTY client - port forwarding**
Local port forwarding:
Connection - SSH - Tunnels
Source port: 1234
Destination: localhost:8080
Local
Auto
Add
Once you're connected in PuTTY, visit `http://localhost:1234/solr/` and you should see Solr's web interface. No browser configuration is necessary.
Dynamic port forwarding/ SOCKS proxy*:
Connection - SSH - Tunnels
Source port: 1234
Dynamic
Auto
Add
Once you're connected in PuTTY, you'll need to configure your browser to use localhost:1234 as a SOCKS proxy (and remove "localhost" and "127.0.0.1" from addresses to bypass this proxy - like in the next step)
3. **OpenSSH client - SOCKS proxy**
connect to DSpace server and run a SOCKS proxy server on localhost port 1234; configure browser to use localhost:1234 as SOCKS proxy and remove "localhost" and "127.0.0.1" from addresses that bypass this proxy
all browser requests now originate from dspace server (source IP is dspace server's IP) - dspace is the proxy server
type `http://localhost:8080` in browser address bar - localhost here is the dspace server
```sh
ssh -D 1234 mydspace.edu
```
*Note about PuTTY as SOCKS proxy - while it can be configured, it raises a security exception when Solr is accessed. If you figure this out, please add this method here.
Bypassing localhost restriction permanently
Instructions specific to Tomcat 7 and newer
Here's how you can:
1. turn off the localhost filter in Tomcat
2. replace it with a RemoteAddrValve and allow an enumerated set of IP addresses or subnets (in the following example the 127.0.0.1, 123.123.123.123 IPs and the 111.222.233.* subnet would be allowed):
Change your server.xml or alternatively your context fragment (i.e. conf/Catalina/localhost/solr.xml) like this:
```xml
<Context path="/solr" reloadable="true">
  <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127\.0\.0\.1|123\.123\.123\.123|111\.222\.233\.\d+"/>
  <Parameter name="LocalHostRestrictionFilter.localhost" value="false" override="false" />
</Context>
```
Do not forget to include localhost (i.e. 127.0.0.1) in the allowed list, otherwise Discovery, OAI 2.0 and other things depending on Solr won't work.
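If you want to check which client addresses such an allow expression accepts before restarting Tomcat, you can test the pattern outside Tomcat, for example with this small Python sketch (the addresses are just examples; the pattern mirrors the one above):

```python
# Quick check of which client addresses the example 'allow' regular expression
# would accept; Tomcat 7+ matches the full remote address against the pattern.
import re

allow = re.compile(r"127\.0\.0\.1|123\.123\.123\.123|111\.222\.233\.\d+")

for ip in ["127.0.0.1", "123.123.123.123", "111.222.233.42", "10.0.0.1"]:
    print(ip, "allowed" if allow.fullmatch(ip) else "denied")
```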
See also:
- Tomcat 7 documentation: Remote Address Filter
- DS-1260 - Getting issue details... STATUS
Instructions specific to Tomcat 6 and older
Please, note that the syntax of the "allow" attribute changed in Tomcat 7 to a single regular expression. In Tomcat 6 and older, it was a comma-separated list of regular expressions, therefore this worked in Tomcat 6, but does not work in Tomcat 7+:
```xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="111.222.233.*, 123.123.123.123, 127.0.0.1"/>
```
See also: Tomcat 6 documentation: Remote Address Filter
Accessing Solr
Solr cores
DSpace contains a so-called multicore installation of Solr. That means that there are multiple Solr indexes and configurations sharing one Solr codebase. If you're familiar with Apache HTTPD, it is analogous to multiple virtual hosts running on one Apache server (separate configuration and webpages), except that individual Solr cores are accessible via different URL (as opposed to virtualhost IP:port).
The two Solr instances in DSpace Discovery are called "search" and "statistics". search contains data about communities, collections, items and bitstreams. statistics contains data about searches, accessing users, IPs etc. The two instances are accessible at the following URLs (relative to the dspace server):
```text
http://localhost:8080/solr/search/
http://localhost:8080/solr/statistics/
```
Solr admin interface
Both Solr cores have separate administration interfaces which let you view their respective schemas, configurations, set up logging and submit queries. The schema browser here is very useful to list fields (and their types) included in each index and even see an overview of most common values of individual fields with their frequency.
Solr queries
The base URL of the default Solr search handler is as follows:
- http://localhost:8080/solr/search/search
Using the knowledge of particular fields from Solr Admin and Solr syntax (SolrQuerySyntax, CommonQueryParameters) you can make your own search requests. You can also read a brief tutorial to learn the query syntax quickly. You can also look at the solr log file (in older dspace versions, this was logged to catalina.out) to see queries generated by XMLUI in real time:
```bash
tail -f /dspace/log/solr.log
```
(depending on your OS, Tomcat installation method and logging settings, the path may be different)
Solr responses
By default, Solr responses are returned in XML format. However, Solr can provide several other output formats including JSON and CSV. Discovery uses the javabin format. The Solr request parameter is wt (e.g. &wt=json). For more information, see Response Writers, QueryResponseWriters. An interesting option is to specify an XSLT stylesheet that can transform the XML response (server-side) to any format you choose, typically HTML. Append &wt=xslt&tr=example.xsl to the Solr request URL. The .xsl files must be provided in the [dspace]/solr/search/conf/xslt/ directory.
For more information, see XsltResponseWriter.
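For scripted access it is often easiest to request JSON. A minimal Python sketch follows, assuming the default localhost URL used throughout this page and that you run it on the Solr host because of the localhost restriction:

```python
# Ask the DSpace 'search' core for a JSON response instead of the default XML.
import json
import urllib.parse
import urllib.request

params = {"q": "*:*", "rows": 0, "wt": "json"}
url = "http://localhost:8080/solr/search/select?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)

print("documents in the search core:", data["response"]["numFound"])
```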
PHP example
```php
$solr_baseurl_dspace = "http://localhost:8080/solr/search/query?";
$solr_query = "test";
$solr_URL_dspace = $solr_baseurl_dspace."wt=phps&q=".urlencode($solr_query." AND withdrawn:false"); // use withdrawn:false with DSpace newer than 1.8
$response_dspace = file_get_contents($solr_URL_dspace, false, stream_context_create(array('http' => array('timeout' => 10))));
$result_dspace = unserialize($response_dspace);
$num_dspace = $result_dspace['response']['numFound'];
echo $num_dspace;
```
Keep in mind that although using the phps writer may be faster, it's not recommended for untrusted user data (see PHP unserialize() notes).
Examples
Date of last deposited item
To get all items (search.resourceType:2) sorted by date accessioned (dc.date.accessioned_dt) in order from newest to oldest (desc; %20 is just an url-encoded space character):
```
http://localhost:8080/solr/search/select?q=search.resourceType:2&sort=dc.date.accessioned_dt%20desc
```
Note:
<table>
<tbody>
<tr><td>search.resourceType:2</td><td>items</td></tr>
<tr><td>search.resourceType:3</td><td>communities</td></tr>
<tr><td>search.resourceType:4</td><td>collections</td></tr>
</tbody>
</table>
To get only the first (newest) item (rows=1) with all but the date accessioned field filtered out (fl=dc.date.accessioned) and without the Solr response header (omitHeader=true):
http://localhost:8080/solr/search/select?q=search.resourcetype:2&sort=dc.date.accessioned_dt%20desc&rows=1&fl=dc.date.accessioned&omitHeader=true
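The same query can be built programmatically; a small Python sketch (field names taken from the example URLs above, everything else illustrative):

```python
# Date of last deposited item, built with urlencode so the sort parameter is
# escaped automatically.
import json
import urllib.parse
import urllib.request

params = {
    "q": "search.resourcetype:2",            # items only
    "sort": "dc.date.accessioned_dt desc",   # newest first
    "rows": 1,
    "fl": "dc.date.accessioned",
    "omitHeader": "true",
    "wt": "json",
}
url = "http://localhost:8080/solr/search/select?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url, timeout=10) as resp:
    doc = json.load(resp)["response"]["docs"][0]

# the field may come back as a list, depending on the schema
print("last deposit:", doc["dc.date.accessioned"])
```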
Top downloaded items by a specific user
http://localhost:8080/solr/statistics/select?indent=on&start=0&rows=10&fl=*%2Cscore&qt=standard&wt=standard&explainOther=&hl.fl=&facet=true&facet.field=epersonid&q=type:0
Note:
- facet.field=epersonid You want to group by epersonid, which is the user id
- type:0 Interested in bitstreams only
Number of items in a specific community
Community here is specified by its "community_id" - the identifier from the "community" table in the database. The result is the "numFound" attribute of the "result" element. This example returns the number of items (search.resourcetype:2) in the community with community_id=85 (location.comm:85):
http://localhost:8080/solr/search/select/?q=location.comm:85+AND+search.resourcetype:2&start=0&rows=0&indent=on
Breakdown of submitted items per month
Show breakdown of items (search.resourcetype:2) submitted (facet.date=dc.date.accessioned_dt) per month (facet.date.gap=+MONTH) in the year 2016 (facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z):
http://localhost:8080/solr/search/select/?q=search.resourcetype:2&start=0&rows=0&indent=on&facet=true&facet.date=dc.date.accessioned_dt&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH
Statistics breakdown per event type
Starting from DSpace 3, there is a statistics_type field in the statistics core that contains the "usage event type". Currently, the available types are search, view, search_result and workflow. Here’s how to get event breakdown by type, excluding robots (isBot:false):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.field=statistics_type&q=isBot:false
Statistics: breakdown of downloads per month
Show breakdown of bitstream (type:0) downloads per month in the year 2016, excluding robots (isBot:false):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.date=time&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH&q=type:0+AND+isBot:false
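To consume the monthly counts in a script, something like the following Python sketch can be used. The JSON layout of date facets differs between Solr versions, so the 'facet_dates' structure assumed here may need adjusting:

```python
# Read the per-month download counts from the query above (wt=json variant).
import json
import urllib.parse
import urllib.request

params = {
    "q": "type:0 AND isBot:false",
    "rows": 0,
    "facet": "true",
    "facet.date": "time",
    "facet.date.start": "2016-01-01T00:00:00Z",
    "facet.date.end": "2017-01-01T00:00:00Z",
    "facet.date.gap": "+1MONTH",
    "wt": "json",
}
url = "http://localhost:8080/solr/statistics/select?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url, timeout=10) as resp:
    facets = json.load(resp)["facet_counts"]["facet_dates"]["time"]

for month, downloads in facets.items():
    if month not in ("gap", "start", "end"):   # skip the facet metadata keys
        print(month, downloads)
```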
Statistics: number of downloads (item views) for a specific item per month
Show bitstream (type:0) downloads per month in the year 2016, excluding robots (isBot:false), for a specific item (2163 in the example):
http://localhost:8080/solr/statistics/select?indent=on&rows=0&facet=true&facet.date=time&facet.date.start=2016-01-01T00:00:00Z&facet.date.end=2017-01-01T00:00:00Z&facet.date.gap=%2B1MONTH&q=type:0+owningItem:2163&fq=-isBot:true&fq=-(bundleName:*+TO+*bundleName:ORIGINAL)&fq=-(statistics_type:*+TO+*statistics_type:view)
Statistics: number of total downloads in a given time span
Show the total repository-wide bitstream (type:0) downloads, excluding robots (isBot:false), for a specific duration (September 1 2017 through September 1 2018). No need for faceting to get a total count:
http://localhost:8080/solr/statistics/select?indent=on&rows=0&q=time:[2017-09-01T00:00:00Z+TO+2018-09-01T00:00:00Z]+AND+type:0+AND+isBot:false
Querying Solr from XMLUI
Since Solr returns its responses in XML, it's possible and easy to call custom Solr queries from XMLUI, process the XML response with XSLT and display the results in human-readable form on the HTML page.
There are two ways to do that - synchronously, or asynchronously using AJAX (JavaScript) after the page is loaded. Solr queries are usually very fast, so only synchronous calls will be shown here.
You can include another XML document to be processed by XSLT using the document() function. The parameter to this function is a string with the path to the XML document to process. This can be either a static .xml file stored on the server filesystem or a URL, which will be fetched at time of processing. For Solr, the later is what we need. Furthermore, we need to distinguish templates for processing this external XML document as opposed to the input XML document. We'll do this using the mode attribute and define a different processing mode for each query.
```xml
<xsl:apply-templates select="document('http://localhost:8080/solr/search/select?q=...')" mode="solr-response"/>
```
Now we need to define a template with the same mode that matches elements contained in the Solr response XML:
```xml
<xsl:template match="/response/result/doc/date" mode="solr-response">
Last item was imported: <xsl:value-of select="text()"/>
</xsl:template>
```
Furthermore, we don't want to hardcode the `http://localhost:8080` Solr URL, because this can be changed in config file and that would break the template. So we'll call a Java function from XSLT to retrieve the configured Solr URL. See the complete example in the next section.
**Examples**
**Date of last deposited item**
For description of the query parameters, see above.
1. Add the confman namespace and “confman” to exclude-result-prefixes. (For explanation, see how to Call Java methods from XSLT (Manakin).)
```xml
<xsl:stylesheet
...
xmlns:confman="org.dspace.core.ConfigurationManager"
exclude-result-prefixes="... confman">
```
2. Add this simple template to process the Solr query result. More complex date formatting can be done easily in XSLT 2.0 (see the XSLT 2.0 spec); however, Cocoon still uses XSLT 1.0 (see DS-995). It is currently also possible to call Java functions to do date formatting.
```xml
<xsl:template match="/response/result/doc/date" mode="lastItem">
Last item was imported: <xsl:value-of select="substring(text(), 1, 10)"/>
</xsl:template>
```
3. Add the following code to the place where you want the resulting text to appear:
```xml
<xsl:variable name="solr-search-url" select="confman:getProperty('discovery', 'search.server')"/>
<xsl:apply-templates select="document(concat($solr-search-url, '/select?q=search.resourcetype:2&amp;sort=dc.date.accessioned_dt%20desc&amp;rows=1&amp;fl=dc.date.accessioned_dt&amp;omitHeader=true'))" mode="lastItem"/>
```
For example, to add it after the list of Recent items in Mirage, override its template like this:
Multicore join queries
Solr supports join queries across multiple cores since Solr 4.0. Thus it's also supported in DSpace 4.0 (which includes Solr 4.4).
example query (not tested)
http://localhost:8080/solr/search/select/?q=*:*&fq={!join from=owningItem to=search.resourceid fromIndex=statistics}title:"Testing title"
"AND" search as default
Up to and including DSpace 5 (see DS-2809), Discovery uses the "OR" operator as default if you don't specify an operator between your query keywords. So searching for "John Doe" will also return entries like "Jane Doe" and "John Connor". If you want to change that, you have to edit the schema.xml file of the Solr search core:
In [dspace]/solr/search/conf/schema.xml, find this line:
```xml
<solrQueryParser defaultOperator="OR"/>
```
and change it to
```xml
<solrQueryParser defaultOperator="AND"/>
```
Then restart your servlet container (Tomcat).
Warning
It’s not officially recommended to change the defaultOperator setting. Some unrelated Discovery features might stop working if you do this. I haven’t noticed anything wrong, but you might. If something breaks, make sure to notify us and we’ll try to fix it or remove this tip.
Deleting Solr index data
If for whatever reason you need to delete the data in your index, here’s how you can do it. Deleting would normally be followed by running [dspace]/bin/dspace index-discovery to rebuild the index (in DSpace versions older than 4.x, the command was called [dspace]/bin/dspace update-discovery-index); alternatively, you can run that command with the -b parameter to reindex everything from scratch.
**Solr delete query**
If Solr is running, you can access the following URL from the server where Solr is installed (remember the default localhost restriction):
```bash
$ curl "http://localhost:8080/solr/search/update?commit=true" -H "Content-Type: text/xml" --data-binary "<update><delete><query>*:*</query></delete></update>"
```
This will delete all documents in the search (Discovery) core.
You can verify the number of documents in the core by running the following query and checking the value of the `numFound` attribute in the output:
```bash
$ curl "http://localhost:8080/solr/search/select/?q=*:*&rows=0"
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">5</int><lst name="params"><str name="rows">0</str><str name="q">*:*</str></lst></lst><result name="response" numFound="0" start="0"/>
</response>
```
The URL listed in the examples is the default Solr URL in DSpace. If you changed it, you can find it in `search.server` in `/dspace/config/modules/discovery.cfg` (DSpace 1.8+) or in `solr.log.server` in `/dspace/config/dspace.cfg` (DSpace 1.7).
Source: [Solr Wiki FAQ: How can I delete all documents from my index?](http://wiki.apache.org/solr/SolrWikiFAQ)
**Manually delete Solr index files**
If your Solr is broken and you can't issue queries, you can still delete the index files manually:
```bash
$ rm -rf [dspace]/solr/search/data/
```
Then restart the servlet container or reload the `solr` webapp.
See also:
- Solr: How can I delete all documents from my index?
- DSpace: deleted wrong directory
**Set up Solritas (VelocityResponseWriter)**
Solritas is a generic search interface on top of a Solr index. It can be useful if you want to explore the contents of a Solr index (core) using facets.
To set it up in DSpace 3.0 (which uses Solr 3.5.0):
- download `apache-solr-3.5.0.tgz` from [http://archive.apache.org/dist/lucene/solr/3.5.0/](http://archive.apache.org/dist/lucene/solr/3.5.0/)
- `tar xzvf apache-solr-3.5.0.tgz`
- `mkdir [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/dist/apache-solr-velocity-3.5.0.jar [dspace]/solr/lib`
- `cp ./apache-solr-3.5.0/contrib/velocity/lib/{commons-beanutils-1.7.0.jar,commons-collections-3.2.1.jar,velocity-1.6.4.jar,velocity-tools-2.0.jar} [dspace]/solr/lib`
- edit `[dspace]/solr/solr.xml` and add the `sharedLib` attribute:
```xml
<solr persistent="false" sharedLib="lib"/>
```
- edit the `solrconfig.xml` file of each core where you want to use Solritas. Example for the "search" core: add the velocity ResponseWriter and `requestHandler` in `[dspace]/solr/search/conf/solrconfig.xml`:
It should also be possible to use it in other versions of DSpace (starting from 1.6), but these use different versions of Solr, so modify the procedure accordingly (and expect other caveats):
<table>
<thead>
<tr>
<th>DSpace</th>
<th>Solr</th>
</tr>
</thead>
<tbody>
<tr>
<td>6</td>
<td>4.10.2</td>
</tr>
<tr>
<td>5</td>
<td>4.10.2</td>
</tr>
<tr>
<td>4</td>
<td>4.4.0</td>
</tr>
<tr>
<td>3</td>
<td>3.5.0</td>
</tr>
<tr>
<td>1.8</td>
<td>3.3.0</td>
</tr>
<tr>
<td>1.7</td>
<td>1.4.1</td>
</tr>
<tr>
<td>1.6</td>
<td>1.3.0</td>
</tr>
</tbody>
</table>
Note: In older versions, you may need to specify the queryResponseWriter class as org.apache.solr.request.VelocityResponseWriter (I haven’t tested it, though)
Resources:
Guidepost
Other pages on this wiki describing Solr and Discovery.
- Discovery (official DSpace 3.x documentation)
- DSpace Discovery (Discovery proposal & purpose, intro video, Discovery 1.8 changes & configuration)
- DSpace Discovery HowTo (Discovery screenshots from before Discovery was included in DSpace; most content obsolete, pre-1.7.0)
See also:
- Solr Tutorial
- ajax-solr, a JavaScript library for creating user interfaces to Solr.
- /var/log/tomcat6/catalina.out
Graphics display for graphics data management systems
T.V. Hromadka II
Boyle Engineering Corporation, PO Box 3030, Newport Beach, California 92658-9020, USA
&
M.J. Braksator
Williamson & Schmid, 15101 Red Hill Avenue, Tustin, California 92680, USA
Recent applications of computer software to water resources and environmental master-planning studies include use of graphical displays in order to disseminate information. In this paper, a simple-to-use graphics display program is presented which enables a user to display graphical slides, stored on disk, to the CRT. Such a program enables a user to link graphics slides to text data by use of a read-only text display routine. Application of the provided program in a Graphics Data Base Management System is considered as a case study. Computer code is provided for the graphics slide display.
Key words: GIS, water resources, modeling, graphics, interactive.
GRAPHICS DATA BASE MANAGEMENT SYSTEM: APPLICATION TO MASTER PLANS OF CITY FLOOD CONTROL SYSTEMS
An integrated hydrology/hydraulics/planning/deficiency-analysis Master Plan of Drainage computer model is considered as a case study for applications of a Graphics Data Base Management System (or GDBMS). The computer modeling approach evaluates each link of the Master Plan of Drainage for deficiencies with respect to several defined street flow criteria, and determines mitigation measures of parallel and replacement systems. Because different hydraulic systems have different flow velocity characteristics, hydrology estimates are recomputed as the master plan is developed. In general, a city master plan typically involves about 2000–4000 links and hydrologic subareas, and generates considerable quantities of data that become manageable in a GDBMS environment.
The entire Master Plan is represented by graphics layers in AutoCAD format, which allows for rapid communication of master plan data and estimates in graphical form. Two applications are developed:
Application 1: Graphical representation of data, and
access to a data base retrieval system, which is noneditable, and which can be published and distributed to the public.
Application 2: Graphical database storage, and editing via an AutoCAD environment, wherein hydrologic, planning, topographic, and geographic data are accessible for processing in AutoCAD, and thence transferable to the Master Plan of Drainage computer model, with access to a data base retrieval system.
In the following, each major element of the GDBMS will be discussed. An application to an example Master Plan of Drainage will be used to demonstrate graphical display opportunities.
COMPUTERIZED MASTER PLAN OF DRAINAGE AND GRAPHICS DATA BASE MANAGEMENT SYSTEM
The total Master Plan of Drainage software package and data base system contains numerous elements and components that span several technical fields, including data base management, geographic information systems (or GIS), hydrologic/hydraulic computer modeling, graphical data base management, flood control engineering and planning, among others. In the following is provided a brief survey of the key elements of the total software package.
Coupled hydrologic modeling technique
Most flood control agencies at city, county, or state level require specific procedures for the calculation of flood flow quantities. Often the procedure may involve the use of two or more estimates, depending upon conditions such as watershed size. In Southern California, several county flood control districts require use of two flood flow estimation techniques dependent upon catchment area, namely, the rational method for areas smaller than about one square mile, and the design storm unit hydrograph method for areas larger than about one square mile. The transition between techniques has been coupled into an integrated computerized Master Plan of Drainage model, enabling the development, for the first time, of an integrated hydrologic computer model with one pass of the analysis, rather than two separate studies. As a result of coupling hydrologic techniques into just one computer model, single-system analysis is available for use in preparing Master Plans of Drainage and upgrading the master plan, thereby greatly reducing the complexity, review process, and cost involved.
The Master Plan of Drainage software contains internal editing and computational elements that involve 152 hydraulic and hydrologic submodels and global modeling commands. The software enables analysis of an integrated open-channel or closed-conduit flood control system on a study-wide basis.
Graphical data base
Several data base layers will be required to complete any hydrologic study. These layers will be created individually; however, they may be viewed simultaneously to show any hydrologic information desired. These layers include:
(1) base map consisting of contours and streets right-of-way;
(2) watershed boundary to define study boundaries;
(3) drainage reservations to define alignments;
(4) existing facilities to define alignments;
(5) street flow to determine existing flows;
(6) alignments defined by layers 3, 4, and 5;
(7) subarea boundaries defined by layers 5 and 6;
(8) overall mapping divides for final report;
(9) land use map;
(10) hydrologic soil group map;
(11) rainfall isohyetal map;
(12) hydrologic nodal points defined by layers 6 and 7;
(13) hydrologic element type to define routing parameters.
Some layers, such as the base map, drainage reservations, existing facilities, land use, hydrologic soil group, and rainfall isohyetal maps may be available in digital form. If these layers are not available in digital form then they can either be digitized or scanned. The layers specifically related to the development of a Master Plan of Drainage can either be digitized from a marked-up hard copy or directly using AutoCAD.
Primary hydrologic parameters used in the Master Plan of Drainage computer model are land use, hydrologic soil group, rainfall, and hydrologic subarea topographic data such as area, length of water course, and elevation. In general, a study is discretized into subareas that are 10–20 acres in size. These subareas require definition as to each of the parameters listed above. Additionally, maps are needed in order to communicate these data. By obtaining in digital form or actually digitizing the land use maps, hydrologic soil group maps, rainfall maps, and subarea maps, not only is a digital/graphical representation available for display, but the data can then be processed by a 'polygon processor' in order to partition the subareas into the intersection of all of the graphical layers. Geographic location is provided by use of street layout layers, right-of-way maps and freeway maps. The graphics data base is used to prepare hard-copy maps for reports, as well as graphical layers for display on the computer monitor. Figures 1 and 2 show hard copies of example graphical layers for a Master Plan of Drainage. Figure 1 depicts the land use map and Figure 2 depicts the hydrologic soil group map.
An acceptable base map is chosen and information such as subarea boundaries, alignments, and node numbers is added. This information is entered on a watershed basis. Once all this information is added, any desired layers are overlaid to create a hydrology map. Since these maps are created using AutoCAD, they can be reproduced at any scale.
From each watershed map, the boundary is taken and overlaid with other watershed boundaries to create quadrant maps. These quadrant maps, along with an index map, are the navigational tools by which the user can locate any specific location in the study area.
Polygon processor
The use of geographic information systems (GIS) has become widespread in many facets of engineering and planning, among other fields. A key element of a GIS is the ability to intersect graphical layers so that the several forms of information are resolved into 'cells' wherein all parameters are constant. Figure 3 depicts the resolution of several graphical layers of information into homogeneous cells.
In the Master Plan of Drainage, each subarea requires definition of land use, hydrologic soil group, and rainfall, and the proportions of each within the subarea. The polygon processor performs this important task, and then develops a data base for use in the Master Plan of Drainage computer model. The subarea data are stored in tabulated formats, on a subarea basis, indexed according to
subarea number. Thus, the retrieval of a specific subarea number will access these several data, automatically developed by the polygon processor.
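The idea can be illustrated with a toy overlay in Python (purely conceptual; the real polygon processor works on GIS polygons, not on a pre-classified cell list, and all values below are made up):

```python
# Conceptual sketch of the polygon processor's output: intersecting the subarea,
# land-use and soil-group layers yields homogeneous cells, which are then
# summarised per subarea.
from collections import defaultdict

# One record per homogeneous cell: (subarea, land_use, soil_group, acres)
cells = [
    (101, "residential", "B", 6.0),
    (101, "residential", "C", 3.0),
    (101, "commercial",  "C", 1.0),
    (102, "commercial",  "B", 12.0),
]

subarea_breakdown = defaultdict(lambda: defaultdict(float))
for subarea, land_use, soil, acres in cells:
    subarea_breakdown[subarea][(land_use, soil)] += acres

for subarea, parts in subarea_breakdown.items():
    total = sum(parts.values())
    for (land_use, soil), acres in parts.items():
        print(subarea, land_use, soil, "%.0f%%" % (100 * acres / total))
```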
Master Plan of Drainage Data Base
The Master Plan of Drainage may be represented, in a data base form, as a collection of nodes (specific points along the catchment flood control system), and subareas (10–20 acres in size). All information computed by the Master Plan of Drainage, such as deficiency system mitigation needs, flow quantities, hydraulic properties, streetflow characteristics, flood control system characteristics, hydrologic parameters, and costs, among others, are stored in agency-designed tabled form in a data base indexed according to node number, link number, and subarea number. Also stored are data entered directly into the data base such as flood control system history, age, and so forth. Once the data base is assembled, the data base may be linked to the graphical data base which displays the digital graphical layers constructed for the polygon processing (i.e. multiple use of a data base form), while allowing easy access to the Master Plan of Drainage data base.
Graphics Data Base Management System
The graphical data base and retrieval software and the
Master Plan of Drainage hydrologic/hydraulic computer software are coupled together to form the Graphics Data Base Management System. Each of the above software packages are developed specifically for this application, and do not require the use of other software packages.
Two applications are developed. The first application, or Application 1, enables a publication of the Master Plan data base for distribution to the public. Using ‘slides’ (i.e. monitor images stored in the graphics data base), the entire study can be resolved into graphics slides of about one-half a square mile in size, showing hydrologic master plan nodes, subareas, links, streets, land use, and hydrologic soil group, among other designed data. Each slide is indexed to successively larger maps so that by selecting quadrants from the monitor, one is able to navigate through the city to a selected point. Additionally, each slide is cross-referenced to a Master Plan of Drainage data base map that stores all the data associated with the slide appearing on the monitor. Figure 4 shows a slide of a data base map which appears on the monitor. Data base operating commands are displayed on the monitor screen, enabling the user to access the slide images. Software, in AutoLISP, for slide displays is contained in the Appendix.
This first application provides significant communication opportunities for the agency to both the public and the technical sectors. The engineering and planning communities can access the data base for other technical needs, and also inspect the Master Plan of Drainage without reviewing the usual report documents (which typically run to several volumes). The public can inspect the Master Plan, and access information that would otherwise be unavailable.
Application 2 is the actual Management System which includes all the features of Application 1, plus the ability to upgrade the Master Plan of Drainage due to changes in system requirements, land use, and hydrologic parameters, among other factors. Because the agency can perform the upgrade, the master plan can be kept current, enabling up-to-date drainage fee assessment to be developed on an on-going basis.
AutoCAD slide display software
An element of the GDBMS is a slide display program which reads stored AutoCAD slide data and displays these data onto a CRT. The Appendix provides a slide display AutoLISP program. The program documentation is contained throughout the source code, and is self-explanatory.
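For readers who only want to recognise slide files rather than render them, the header check performed by verifyheader() in the Appendix can be sketched in a few lines of Python, assuming the "AutoCAD Slide" magic string used by early .SLD files:

```python
# Check whether a file starts with the AutoCAD slide file header
# (assumed magic string for early .SLD releases).
MAGIC = b"AutoCAD Slide\r\n\x1a\x00"

def looks_like_slide(path):
    with open(path, "rb") as f:
        return f.read(len(MAGIC)) == MAGIC

# Example: looks_like_slide("plan.sld")
```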
CONCLUSIONS
A graphics data base management system for computerized Master Plans of Drainage is developed. Two applications are prepared which enable the agency to upgrade the Master Plan in the future, and to publish the Master Plan in computer graphics form for distribution to the public. Because of the ease of communication opportunities afforded by this approach, the utility in Agency public information programs may be significant.
APPENDIX
GENERAL APPROACH SUMMARY
APPLICATIONS 1 & 2
```c
/* vslide - a program to display AutoCAD slide files */
#include <graphics.h>
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <string.h>   /* for strcpy()/strcat() */
#define OFFSETVECTOR 251
#define ENDOFFILE 252
#define SOLIDFILL 253
#define COMMONENDPOINT 254
#define NEWCOLOR 255
void getheader();
int verifyheader();
void setaspect();
int getrecord();
void drawvector();
void drawoffset();
void newcolor();
void drawcommon();
void solidfill();
void initialize(void);
void ScreenViewport(void);
int headerrec[31];
int datarec[38];
int curX=0, curY=0, curColor;
int GraphDriver;
int GraphMode;
double AspectRatio;
int MaxX, MaxY; /*highest x and y dots on the screen*/
int MaxColors;
int ErrorCode;
struct palettetype palette;
struct PTS { int x, y; };
FILE *fp;
int main(int argc, char *argv[])
{
char thefile[20], tempchar;
int stat, i;
if (argc < 2)
{
printf("Usage: vslide <slide filename>\n", argv[0]);
exit(1);
}
initialize();
for (i = 1; i < argc; i++)
{
strcpy(thefile, argv[i]);
strcat(thefile, ".sld");
if ((fp = fopen(thefile, "rb")) == NULL)
{
printf("Can't open %s\n", thefile);
exit(1);
}
}
```
```c
getheader();
if (verifyheader() != 0)
{
    cleardevice();
    closegraph();
    printf("%s is not a slide file\n", thefile);
    exit(1);
}
setaspect();
while (getrecord() != ENDOFFILE)   /* draw slide records until the end-of-file marker */
    ;
getch();                           /* wait for a keypress before leaving graphics mode */
closegraph();
return 0;
}

void getheader()
{
    int i;
    for (i = 0; i < 31; i++)
    {
        headerrec[i] = fgetc(fp);
    }
}
int verifyheader()
{
    int i;
    char f_header[17] = {0x41, 0x75, 0x74, 0x6f, 0x43, 0x41,   /* current values        */
                         0x44, 0x20, 0x53, 0x6c, 0x69, 0x64,   /* for the file header   */
                         0x65, 0x0d, 0x0a, 0x1a, 0x00};        /* in AutoCAD release 1  */
    for (i = 0; i < 17; i++)
    {
        if (headerrec[i] != f_header[i])   /* Bytes in the header should always  */
            return (-1);                   /* be the same. verifyheader() checks */
    }                                      /* the validity of the .SLD file.     */
    return (0);
}
void setaspect()                     /* Function sets ratio that will allow */
{                                    /* for displaying perfect circles      */
float ratio, xasp, yasp;             /* without an elliptical effect.       */
ratio = ((headerrec[23]) + (headerrec[24]) + (headerrec[25]) + (headerrec[26]))/10000000.0;
xasp = ratio;
yasp = 1.0;
setaspectratio(xasp, yasp);
}
int getrecord()
{
int i = 0;                         /* Function reads the file and evaluates the  */
datarec[1] = getc(fp);             /* numbers for their graphical display        */
datarec[0] = getc(fp);             /* significance. Most of the numbers are      */
switch (datarec[0])                /* composed of more than one byte. All        */
{                                  /* numbers are within fixed ranges. For       */
case OFFSETVECTOR:                 /* specific info on the ranges, refer to the  */
                                   /* AutoCAD Reference Manual.                  */
/* this condition evaluates the x,y offsets from the last vector */
/* and draws a new vector using fcn drawoffset()                 */
for (i = 2; i <= 4; i++)
datarec[i] = getc(fp);
drawoffset();
break;
case ENDOFFILE:                    /*<-- end file reading */
break;
case SOLIDFILL:                    /*<-- calls fcn that fills closed      */
                                   /* objects in a current drawing color. */
datarec[1] = getc(fp);
solidfill();
break;
case COMMONENDPOINT:               /*<-- calls fcn that draws vector from */
                                   /* last point used, which is common to */
datarec[2] = getc(fp);
drawcommon();                      /* more than one vector.               */
break;
case NEWCOLOR:                     /*<-- call to set a new drawing color. */
newcolor();
break;
default:                           /*<-- call to draw a vector. */
for (i = 2; i <= 8; i++)
datarec[i] = getc(fp);
drawvector();
break;
}
return datarec[0];
}
void drawvector()
{
int i, temp[4];
temp[0] = datarec[0]*256 + datarec[1];        /* x of the "to" endpoint;       */
for (i = 2; i <= 7; i += 2)                   /* y-to, x-from, y-from follow,  */
{                                             /* each stored low byte first    */
temp[i/2] = datarec[i+1]*256 + datarec[i];
}
line (temp[0], MaxY-temp[1], temp[2], MaxY-temp[3]);
curX = temp[0]; curY = temp[1];               /*Turbo C's 0,0 origin is in     */
                                              /*upper left corner of the scrn. */
                                              /*AutoCAD's 0,0 origin is in     */
                                              /*lower left corner of the scrn. */
                                              /*MaxY-temp.. fixes this problem */
}
void drawoffset()
{
int temp[4];
temp[0] = curX + (char)datarec[1];
temp[1] = curY + (char)datarec[2];
temp[2] = curX + (char)datarec[3];
temp[3] = curY + (char)datarec[4];
line (temp[0], MaxY-temp[1], temp[2], MaxY-temp[3]);
curX = temp[0]; curY = temp[1];
}
void newcolor()
{
int acolors[16] = {BLACK, RED, YELLOW, GREEN, CYAN, BLUE, MAGENTA,
WHITE, DARKGRAY, LIGHTBLUE, LIGHTRED, LIGHTMAGENTA,
LIGHTCYAN, LIGHTGREEN, LIGHTGRAY, BROWN};
setcolor (curColor = acolors[datarec[1]]);
}
void drawcommon()
{
int temp[4];
temp[0] = curX + (char)datarec[1];
temp[1] = curY + (char)datarec[2];
temp[2] = curX;
temp[3] = curY;
line (temp[0], MaxY-temp[1], temp[2], MaxY-temp[3]);
curX = temp[0]; curY = temp[1];
}
void solidfill()
{
struct PTS outs[10];
int ptnum, i, numberofpoints;
setfillstyle( SOLID_FILL, curColor );
numberofpoints = datarec[1];       /* vertex count from the record's low byte */
for (ptnum = 0; ptnum < numberofpoints; ptnum++)
{
for (i = 0; i < 6; i++)
datarec[i]=getc(fp);
outs[ptnum].x = datarec[3]*256+datarec[2];
outs[ptnum].y = MaxY-(datarec[5]*256+datarec[4]);
}
fillpoly(numberofpoints, (int far *)outs);
for (i = 0; i < 6; i++)            /* consume the closing solid-fill record */
datarec[i]=getc(fp);
}
Increasing Efficiency of ISO 26262 Verification and Validation by Combining Fault Injection and Mutation Testing with Model Based Development
Rakesh Rana¹, Miroslaw Staron¹, Christian Berger¹, Jörgen Hansson¹, Martin Nilsson², and Fredrik Törner²
¹ Computer Science & Engineering, Chalmers/ University of Gothenburg, Sweden
² Volvo Car Corporation, Göteborg, Sweden
rakesh.rana@gu.se
Abstract. The rapid growth of software intensive active safety functions in modern cars resulted in adoption of new safety development standards like ISO 26262 by the automotive industry. Hazard analysis, safety assessment and adequate verification and validation methods for software and car electronics require effort but in the long run save lives. We argue that in the face of complex software development set-up with distributed functionality, Model-Based Development (MBD) and safety-criticality of software embedded in modern cars, there is a need for evolving existing methods of MBD and complementing them with methods already used in the development of other systems (Fault Injection and Mutation Testing). Our position is that significant effectiveness and efficiency improvements can be made by applying fault injection techniques combined with mutation testing approach for verification and validation of automotive software at the model level. The improvements include such aspects as identification of safety related defects early in the development process thus providing enough time to remove the defects. The argument is based on our industrial case studies, the studies of ISO 26262 standard and academic experiments with new verification and validation methods applied to models.
Keywords: Fault injection, Mutation testing, ISO 26262, Simulink, Model based development, Automotive domain, Safety critical software
1 Introduction
Nowadays, a typical premium car has up to 70 ECUs, which are connected by several system buses to realize over 2,000 functions [4]. As around 90% of all innovations today are driven by electronics and software, the complexity of cars' embedded software is expected to grow. The growth is fuelled by cars beginning to act more proactively and more assistively towards their drivers, which requires software to interact with hardware more efficiently and to make more decisions automatically (e.g. collision avoidance by braking, brake-by-wire or similar functions). In
total, with about 100 million lines of code (SLOC) [5], premium segment vehicles carry more software than modern fighter jets and airliners [5]. Software for custom functionality in modern cars is usually developed by multiple suppliers, although it is designed by a single OEM (Original Equipment Manufacturer) like Volvo Cars. The distributed development and the use of standards like AUTOSAR aim to facilitate reuse of software and hardware components between different vehicle platforms, OEMs and suppliers [8]. However, testing of such systems is more complex, and today testing of software generally accounts for almost 50% of overall development costs [2].
ISO 26262 poses stringent requirements for the development of safety critical automotive applications and, in particular, on the testing processes for this software. These requirements are intended to increase the safety of modern cars, although they also increase the cost of modern cars with complex software functions influencing the safety of car passengers.
The position for which we argue in this paper is that efficient verification and validation of safety functions requires combining Model Based Development (MBD) with fault injection into models and with mutation testing. This position is based on studies of the ISO 26262 standard (mainly chapter 6, which describes requirements on software development, but also chapter 4, which poses requirements on product development [12]). It is also based on previous case studies of the impact of late defects on software development practices in the automotive sector [16].
The requirements of the ISO 26262 standard on using fault injection techniques are challenging since they relate to the development of complete functions rather than components or sub-components of software. The current situation in the automotive sector is that fault injection is used, but at the level of one electronic component (ECU) or one software system, and rarely at the function level [9][19].
The current state-of-the-art testing is not enough for detecting safety defects early in the automotive software development process, since fault injection is done late in the development (when ECUs are being developed), which usually makes the detection of specification-related defects difficult and costly. The evidence from the literature on successful use of fault injection shows that the technique is indeed efficient in finding dependability problems of hardware and software systems [10]. Finally, to increase the effectiveness of fault injection strategies and to identify whether faults should be injected at the model, software or ECU level, mutation testing should be applied to verify the adequacy of the test cases; the combination of these approaches, applied at the model level, will enhance the detection of safety defects right at the design stage.
In this paper, we provide a roadmap, which shows how to introduce fault injection and mutation testing to modelling of automotive software in order to avoid costly defects and increase the safety of modern and future cars.
The remainder of the paper is structured as follows: in the next section (2) we provide an overview of software development in the automotive domain and associated concepts. This is followed by a brief discussion of related work in Section 3, and our position is presented and discussed in Section 4. Section 5 concludes our work.
2 BACKGROUND
In this section we give a brief overview of the current state of the automotive software development process and environment, of why safety is important in safety critical applications, and of the theoretical background on fault injection techniques and mutation testing.
2.1 Automotive Software Development & ISO 26262
Various software functions/applications developed within the automotive industry today are classed as safety critical; for example, Volvo's City Safety consists of components that are safety critical.
Fig. 1. Volvo Cars city safety function, image provided by Volvo Car Corporation.
Broy [4] gives examples of recently developed functions/areas within the automotive domain, which include crash prevention, crash safety, advanced energy management, adaptable man-machine interfaces, advanced driver assistance, the programmable car, car networking, etc. Much of this falls within safety critical
functionality and demands high quality and reliability. A number of on-going projects are also directed towards the goal of self-driving cars.
Software development in the automotive sector in general follows the V process, where OEMs take responsibility for requirement specification, system design, and integration/acceptance testing. This is followed by the supplier, which develops the actual code that runs on the ECUs. Although the code is tested at the supplier level (mainly unit testing), the OEMs are responsible for the final integration, system and acceptance testing to ensure that the given implementation of the software (SW) meets its intended functional and safety goals/demands.

In this model of software/product development (see Figure 2), testing is usually concentrated in the late stages of development, which also implies that most of the defects are discovered late in the development process. A recent study using real defect data from an industrial automotive software project [16] showed that late detection of defects is still a relevant problem and a challenge yet to be overcome. The defect inflow profile presented in that study is reproduced in Figure 3 for reference; it exhibits a clear peak in the number of open defects in the late stages of function development/testing.
Testing the software is an important tool for ensuring correct functionality and reliability of systems, but it is also a very resource intensive activity, accounting for up to 50% of total software development costs [14] and even more for safety/mission critical software systems. Thus, having a good testing strategy is critical for any industry with high software development costs. It has also been shown that most of the defects detected during testing do not depend on the actual implementation of the code: about 50% of the defects detected during testing in the study by Megen and Meyerhoff [15] were found during test preparation, an activity independent of the executable code. Since the automotive sector has already widely adopted MBD for the software development of embedded systems, a high potential exists for using the behavioural models developed at the early stages of software development to perform some of the effort spent on
V&V (Verification & Validation). Early V&V, by helping to detect defects early, will potentially save a significant amount of cost for the projects.
2.2 ISO 26262
ISO 26262 is a standard describing safety requirements. It is applied to safety-related systems that include one or more electrical and/or electronic (E/E) systems. An overview of the safety case and argumentation is represented in Figure 4.
Written specifically for automotive, the ISO 26262 standard is adapted to the V-model of product development, corresponding to the current practice in the industry. Guidelines are laid out for system design, hardware and software design and development, and the integration of components to realize the full product. ISO 26262 includes specifications for MBD and provides recommendations for using fault injection techniques for hardware integration and testing, software unit testing, software integration testing, hardware-software integration testing, system integration testing and vehicle integration testing. Although the functional safety standard clearly specifies recommendations for using fault injection during various stages of testing, it does not recommend anything with respect to using mutation testing. This also reflects the current standard practice
within the automotive industry, where mutation testing is not widely adopted yet.
### 2.3 Fault Injection
Fault injection techniques are widely used for experimental dependability evaluation. Although these techniques have been used more widely for assessing hardware/prototypes, they are now starting to be applied to behavioural models of software systems [20], thus enabling early verification of the intended functionality as well as enhancing communication between different stakeholders. Fault injection techniques applied at the model level offer distinct advantages, especially in an industry using MBD, but the use of these techniques at the model level in the automotive industry is currently in its infancy. Figure 5 shows a mind map of a classification of fault injection techniques based on how the technique is implemented; some of the tools developed for each approach are also listed for reference. For a good overview of fault injection techniques readers are referred to [10] [22].
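To make the basic idea of software-implemented fault injection concrete, the following is a minimal, illustrative C sketch (not taken from the paper or from any of the tools in Figure 5): a single-bit fault is injected into a sensor signal before it reaches the function under test, and the decision of the fault-free run is compared against the faulty one. All names and values are hypothetical.

```c
/* Minimal sketch of software-implemented fault injection (hypothetical names). */
#include <stdio.h>
#include <stdint.h>

/* hypothetical function under test: returns 1 if braking should be requested */
static int brake_request(int16_t distance_cm)
{
    return distance_cm >= 0 && distance_cm < 150;
}

/* inject a single-bit fault into the input signal */
static int16_t inject_bit_flip(int16_t value, int bit)
{
    return (int16_t)(value ^ (1 << bit));
}

int main(void)
{
    int16_t golden = 120;                    /* fault-free input            */
    int bit;
    for (bit = 0; bit < 15; bit++)           /* flip each payload bit once  */
    {
        int16_t faulty = inject_bit_flip(golden, bit);
        if (brake_request(faulty) != brake_request(golden))
            printf("bit %2d: fault changes the braking decision (%d -> %d)\n",
                   bit, golden, faulty);
    }
    return 0;
}
```

The same comparison of a golden run against a faulted run is what the model-level tools described above automate on behavioural models instead of compiled code.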
[Figure 5 (mind map): fault injection techniques grouped into software-based, hybrid, simulation-based and hardware-based approaches; for each group the map lists fault types, trigger mechanisms, assumptions, advantages, disadvantages and representative tools such as XCEPTION, DOCTOR, EXFI, FIAT, FTAPE, GOOFI, MEFISTO-C, VFIT, RIFLE, MESSALINE, FIST, AFIT, FOCUS and MARS.]
**Fig. 5.** Common classification of fault injection techniques and implementation tools, description available in [10] [22].
2.4 Mutation Testing
Mutation testing is a technique for assessing the adequacy of a given test suite/set of test cases. Mutation testing involves the systematic, repeatable seeding of faults in large numbers, thus generating a number of copies of the original software artefact, each with an artificial fault (called a mutant). The percentage of these mutants that are detected by the given test cases/suite yields a metric (called the mutation adequacy score [13]) which can be used to measure the effectiveness of the given test suite. Faults for the mutation testing approach can be either hand written or auto-generated variants of the original code. The effectiveness of this approach in mimicking real faults has also been established [1], i.e., mutants do reflect characteristics of real faults. Mutation theory is based on two fundamental hypotheses, namely the Competent Programmer Hypothesis (CPH) and the Coupling Effect, both introduced by DeMillo et al. [6]. CPH at its core reflects the assumption that programmers are competent in their job and thus develop programs close to the correct version, while the coupling effect hypothesis, according to Offutt, states that complex mutants are coupled to simple mutants in such a way that a test data set that detects all simple faults in a program will detect a high percentage of the complex defects [17].
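The following is an illustrative C sketch (not from the paper) of the mutation-testing idea in its simplest form: one hand-written mutant of a small function and a check of whether a given test suite kills it. The function, the mutation operator and the test values are hypothetical.

```c
/* One mutant, one test suite, and the resulting (single-mutant) adequacy score. */
#include <stdio.h>

static int speed_ok(int speed)        /* original: limit is inclusive          */
{
    return speed <= 130;
}

static int speed_ok_mutant(int speed) /* mutant: "<=" replaced by "<"          */
{
    return speed < 130;
}

int main(void)
{
    int tests[] = {0, 50, 129, 131};  /* test suite without the boundary value */
    int i, killed = 0;
    for (i = 0; i < 4; i++)
        if (speed_ok(tests[i]) != speed_ok_mutant(tests[i]))
            killed = 1;               /* some test distinguishes the mutant    */
    /* mutation adequacy score over this single mutant: killed / total         */
    printf("mutant %s; adequacy score = %d/1\n",
           killed ? "killed" : "survived", killed);
    return 0;
}
```

Because the suite lacks the boundary value 130, the mutant survives; adding that value would kill it. Exposing exactly this kind of gap in a test suite is what the mutation adequacy score is for.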
3 RELATED WORK
A number of European Union sponsored projects within the area of embedded software development and safety critical systems have looked at and developed techniques to effectively use fault injection for safe and reliable software development. Examples include ESACS [7] (Enhanced Safety Assessment for Complex Systems) and ISAAC [11] (Improvement of Safety Activities on Aeronautical Complex systems). These projects have used the SCADE (Safety-Critical Application Development Environment) modelling environment to simulate hardware failure scenarios and to identify fault combinations that lead to safety case violations.
A model-implemented fault injection plug-in to SCADE called FISCAD is introduced in [21] which utilizes an approach similar to mutation based testing and replaces the original model operators by equivalent fault injection nodes. The derived models are then used to inject the fault during execution and log the results which are analysed later. Dependability evaluation of automotive functions using model based software implemented fault injection techniques have also been studied in [18].
A generic tool capable of injecting various types of faults on the behavioural or functional Simulink models is also developed and introduced [20]. The tool called MODIFI (or MODel-Implemented Fault Injection tool) can be used to inject single or multiple point faults on behavioural models, which can be used to study the effectiveness/properties of fault tolerant system and identify the faults leading to failure by studying the fault propagation properties of the models.
Another work [3] with its root in the European CESAR (Cost-efficient methods and processes for safety relevant embedded systems) project provides a good
theoretical overview of how fault- and mutation-based test coverage can be used for automated test case generation for Simulink models. We provide a practical framework for how fault injection combined with mutation testing within an MBD environment can be used in industry, and for how this practice enhances the verification and validation of the software under development: its functional validation generates the statistics needed for an effective argumentation of ISO 26262 compliance.
4 ROAD MAP FOR EARLY DEFECT DETECTION
We contend that fault injection can be effectively used at the model level to verify and validate the attainment or violation of safety goals (SGs). By applying the mutation testing approach at the model level, enough statistical evidence can be provided for the coverage needed for the argumentation of the fulfilment of safety goals as per the ISO 26262 safety standard requirements.
A major challenge in successful argumentation of ISO 26262 compliance is to provide statistical evidence that SGs would not be violated during operation, and to do so within reasonable testing effort.
If we are able to differentiate early between defects that will or not cause the violation of SGs, the amount of testing required will be manageable. With MBD the testing for functionality under these defect conditions could be modelled using fault injection techniques, while the possibility of implementation bugs in the actual code can be checked using the mutation testing approach. The framework on how this could be achieved in practice is as follows:
Fig. 6. MBD based representation of a general system with inputs, outputs and dependencies.
As illustrated in Figure 6, a given system/function generally has the following common features (in the context of model based development): firstly, it will have
\( x \) inputs \( (i_1, i_2, \dots, i_x) \); it will have dependencies on \( y \) other components/functions \( (d_1, d_2, \dots, d_y) \); it will have \( z \) outputs \( (o_1, o_2, \dots, o_z) \); and it will have a number of sub-units/modules within it that implement the intended functionality. Let us assume that this part contains \( n \) basic blocks in the modelling environment, corresponding to \( n \) statements of hand written code. To verify and validate the correct functionality and ISO 26262 compliance of this generic function using the fault and mutation testing approach, we can follow these steps:
- (a) Assign or define the Functional Safety Requirements (FSRs) and Technical Safety Requirements (TSRs) for the \( z \) outputs of the given system/function in accordance with ISO 26262.
- (b) Use fault injection techniques to inject commonly occurring defects and other theoretically possible fault conditions at the \( x \) inputs.
- (c) By studying the propagation of the different faults injected at the inputs and their effect on the outputs, note the individual faults and the combinations of faults that violate the FSRs for the given system.
- (d) Steps (b) and (c) should also be carried out to test and validate the given system/function's dependencies on other functions/components.
- (e) The mutation approach is then used to inject faults (or cause mutations) in the \( n \) basic blocks of the given functional model and to assess the detection effectiveness of the test suite/cases for possible implementation bugs.
- (f) The mutants which are not killed by the given set of test cases/suites are examined for their effect on the given function's FSRs; if a given mutation violates the SGs/FSRs, then a suitable test case is created to detect/kill such mutants, i.e., to detect such bugs in the actual code.
By following the above mentioned steps we not only ensure that the given function works as intended and does not violate the SGs and TSRs under faulty inputs and/or due to dependencies on other functions, but we can also identify possible implementation defects using the mutation approach and ensure that test cases are ready to catch faults that could potentially violate the SGs/TSRs even before the code is implemented/generated.
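As a hedged illustration of how steps (b)-(f) fit together, the C sketch below (not the authors' tooling, and not a Simulink model) injects a stuck-at input fault into a toy "model", evaluates one mutant of the model body, and checks a functional safety requirement on the output. The FSR threshold, the model and all names are hypothetical.

```c
/* Toy combination of input fault injection and mutation against an FSR check. */
#include <stdio.h>

#define FSR_MAX_TORQUE 100            /* hypothetical functional safety requirement */

static int model(int request)         /* original behavioural model                 */
{
    return (request > FSR_MAX_TORQUE) ? FSR_MAX_TORQUE : request;   /* saturate     */
}

static int model_mutant(int request)  /* mutant: the saturation check is removed    */
{
    return request;
}

int main(void)
{
    int stuck_at_input = 255;         /* step (b): inject a stuck-at-255 input fault */
    int out_orig = model(stuck_at_input);
    int out_mut  = model_mutant(stuck_at_input);

    /* step (c): does the injected fault violate the FSR on the original model?      */
    printf("original model: output %d, FSR %s\n", out_orig,
           out_orig <= FSR_MAX_TORQUE ? "met" : "violated");

    /* steps (e)-(f): a mutant that violates the FSR signals that a test case is     */
    /* needed to catch the corresponding implementation bug                          */
    printf("mutant model:   output %d, FSR %s\n", out_mut,
           out_mut <= FSR_MAX_TORQUE ? "met" : "violated");
    return 0;
}
```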
Further, to make this framework/approach more effective in industrial practice, we identify some best practices that will have a positive impact on detecting defects early in the development process and thus on effective V&V according to ISO 26262:
- Model evolution corresponding to the different levels of software/product development.
- Specification and testing of SGs, FSRs and TSRs on the behavioural models.
- Identification of the different types of defects/faults and of the stage at which they can be modelled/injected into the models, to ensure that models are built robust right from the start instead of adding fault tolerance in later stages of development.
5 CONCLUSIONS
In this paper we have examined the growing importance of software in the automotive domain. The development of software in automotive and other similar industries has widely adopted the paradigm of model based development, and by the nature of the applications much of the functionality developed and implemented in these sectors is safety critical. Safety critical software/application development requires stringent quality assessment and adherence to functional safety standards such as ISO 26262 in the automotive industry and DO-178 in the aerospace industry.
Development of behavioural models in MBD offers a significant opportunity to do functional testing early in the development process. Fault injection and mutation testing in combination can be used to effectively verify and validate the functional properties of a software system/function. The approach also provides the required statistics for the argumentation of safety standards compliance. In this paper the need for such validation, and a framework for how it could be achieved in practice, is discussed. More research and tool support are needed to bring this approach into wider industrial adoption.
By detecting defects early and being able to do much of verification and validation of intended functionality, robustness and compliance to safety standards on the models the quality and reliability of software in automotive domain will be significantly enhanced. More effective approaches and tools support will also reduce the V&V costs and lead to shorter development times. High quality, reliable and dependable software in automobiles brings innovative functionality sooner, keeps product costs lower and most importantly ensures that automobiles are safer than ever before.
ACKNOWLEDGEMENTS
The work has been funded by Vinnova and Volvo Cars jointly under the FFI programme (VISEE, Project No: DIARIENR: 2011-04438).
References
11. ISAAC. Improvement of safety activities on aeronautical complex systems. FP6-AEROSPACE project reference 501848, 2007.
ABSTRACT
Programming difficulty is a key challenge to the adoption of FPGAs as a general high-performance computing platform. In this paper we present CMOST, an open-source automated compilation flow that maps C code to FPGAs for acceleration. CMOST establishes a unified framework for the integration of various system-level optimizations and for different hardware platforms. We also present several novel techniques integrated in CMOST, including task-level dependence analysis, block-based data streaming, and automated SDF generation. Experimental results show that CMOST-generated FPGA accelerators can achieve over 8x speedup and 120x energy gain on average compared to the multi-core CPU results from similar input C programs. CMOST results are comparable to those obtained after extensive manual source-code transformations followed by high-level synthesis.
Categories and Subject Descriptors
B.5.2 [Hardware]: Design Aids – automatic synthesis
General Terms
Algorithms, Design, Experimentation
Keywords
System-Level Optimization, High-Level Synthesis, FPGA
1. INTRODUCTION
The performance improvement from traditional frequency and multi-core scaling has significantly slowed down due to power consumption issues. FPGAs provide the opportunity to exploit customization and specialization for energy-efficient computing. However, the adoption of FPGAs as a computing platform is currently limited by design productivity issues, such as the exploration of a large design space and a time-consuming and error-prone design environment. There is an urgent need for design automation tools that tackle these issues to enable customized computing.
High-level synthesis (HLS) tools, such as [1], establish an automated design path from C to RTL, and this enables the design of FPGA hardware using a high-level programming language for module-level designs. But there is little support for system-level design automation, which requires many microarchitecture considerations, e.g., proper memory and communication architectures to connect various RTL modules, and the integration of the hardware and software modules into the entire system.
State-of-the-art FPGA devices are large enough to support applications with many hardware kernels and embedded processors. The design complexity and design space at the system level require FPGA design flows to follow the platform-based design paradigm [6]. Early research on platform-based methodologies at the electronic system level (ESL) is summarized in [7], where automated or manually guided design space exploration (DSE) is the main approach to finding good designs. Recently, response surface model (RSM) [8] and machine learning [9] approaches have been proposed to address the scalability problem. However, these general DSE-based flows do not have prior knowledge of the analytic models of the microarchitecture optimizations, and hence suffer from scalability problems for larger applications.
Microarchitecture optimizations play a vital role in the results of FPGA designs. For example, better data reuse with available on-chip buffers can significantly reduce off-chip memory access [10]; loop transformations are applied to improve data locality in order to exploit parallelism in execution or reduce memory footprint [11]; intelligent system resources allocation among different modules can greatly improve system performance [12]. Whether to apply these optimizations and how to balance the tradeoffs between different optimizations become a significant challenge in automating the compilation process. The polyhedral model provides a unified framework for the scheduling of the repeated task instances. Some combined optimizations have been proposed based on the polyhedral model [10, 13, 14]. However, it is still a big challenge to integrate and combine all these optimization options into a fully automated implementation framework.
By tackling these challenges, our system CMOST aims to enable software developers to work on FPGAs with a fully automated compilation flow, providing not only push-button implementation but also intelligent optimizations. The contributions of this paper are:
* This work was mainly done at Computer Science Department in UCLA.
1. The first push-button compilation flow mapping general C programs into full system designs on different FPGA platforms.
2. A unified abstraction model for combinations of different microarchitecture optimization schemes using customization, mapping, scheduling and transformations.
3. Several novel techniques integrated into CMOST, including task-level dependency analysis, block-based data streaming, and automated SDF generation.
The rest of this paper is organized as follows: Section 2 describes the overall structure of CMOST. Section 3 introduces a unified model for the integration and combination of different system-level optimization schemes. Section 4 presents several novel techniques used in CMOST, followed by the experimental results and conclusion in Sections 5 and 6.
2. CMOST DESIGN FLOW
2.1 Overall design flow
CMOST provides a push-button design flow to generate an executable system on FPGAs from user programs, as shown in Figure 1. Programmers only need to mark the regions of the program (called tasks) to be accelerated with pragmas, as in Figure 2 (a). Data accesses between SW and HW modules are coded directly using array references, and arbitrary loop structures are supported, including imperfectly nested loops. The system-level information for the tasks, such as the iteration domain and data access patterns, is extracted statically and automatically, as in Figure 2 (b). Moreover, the optimizations are performed automatically based on the extracted high-level information, as in Figure 2 (c).
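The following C sketch illustrates what marking a task region looks like under this description. The pragma spelling "cmost task" is hypothetical (the excerpt does not give CMOST's exact pragma syntax); the point is that data is passed via plain array references and the loop nest may be imperfectly nested.

```c
/* Illustrative task marking; the pragma name is an assumption, not CMOST syntax. */
#define N 256

void denoise(float in[N][N], float out[N][N])
{
    int i, j;
    #pragma cmost task                       /* hypothetical task marker */
    for (i = 1; i < N - 1; i++) {
        float row_sum = 0.0f;                /* imperfectly nested part  */
        for (j = 1; j < N - 1; j++) {
            out[i][j] = 0.25f * (in[i-1][j] + in[i+1][j] +
                                 in[i][j-1] + in[i][j+1]);
            row_sum += out[i][j];
        }
        out[i][0] = row_sum / (N - 2);       /* uses the per-row result  */
    }
}
```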
2.2 Platform virtualization
At this point, CMOST adopts a bus-based architecture template (Figure 3(a)) to abstract away the details of the hardware platform and to provide portability across platforms. The standard bus interface of the HW cores makes it easy to integrate the platform peripherals from different vendors. The template supports two acceleration scenarios: (i) servers connected to the FPGA via PCIe and (ii) processor(s) embedded in the FPGA. In the automation flow (Figure 3(b)), the platform-dependent and platform-independent parts are separated to maximize design reusability between platforms. CMOST generates the implementation files in the OpenCL format with standard HW/SW interfaces. The OpenCL host program is totally platform independent. The OpenCL APIs invoked in the host program are implemented by driver wrappers in CMOST, so the effort to support different platform drivers is minimized.
3. ABSTRACTION FRAMEWORK
FPGA provides the opportunities to exploit high performance and energy efficiency by customization and specialization of the accelerators. The large design space results in design complexity in all aspects of computation, communication and storage subsystems. In typical designs, a sequence of optimization schemes is applied for different objectives, and the system bottleneck may switch from one aspect to another during the process. For example, in the stencil application shown in Figure 4, data reuse is first applied to solve the off-chip bandwidth bottleneck by allocating local reuse buffers; data blocking (loop tiling) reduces the reuse buffer sizes; data prefetching overlaps communication with the computation to increase performance of one module; then dataflow streaming enables data-dependent modules to execute simultaneously in a pipeline fashion; and finally, module selection and parallelization optimize the area/performance trade-offs among multiple modules in the streaming system. As a result, a unified modeling is required to integrate all these steps into an automated flow, and to boost the research on how to order/combine these steps efficiently.
3.1 Task-level application model
The feasibility and profitability of system-level optimizations are determined by system-level features of the applications. A unified application model is required to support various optimizations. The polyhedral model is used to represent the application as a set of repeatedly executed statements, a set of data arrays that the statements produce or consume, and a set of necessary constraints on the execution order of the statements to keep the semantics of the input program [14]. This abstract representation provides the opportunity for the compiler to find the proper scheduling of the statement instances for the specific optimizations instead of the original order in the sequential program.
The traditional polyhedral model used in compiler optimizations is at either statement level or loop level, and only applicable to static control programs [15], which require the for-loop bounds and access indexes to be affine. CMOST proposes a task-level polyhedral model, where the basic unit is a task that may contain a segment of code in the program. For example, in Figure 5 task t0 contains a for-loop (n) inside, which is not modeled in the iteration...
domain; and the access function does not map iterators to a data element, but to a set of data elements accessed in the task body. The benefits of the proposed model are twofold: 1) the complexity of the model becomes flexible according to the granularity of the tasks; and 2) the program inside the task is not required to be affine, as in the traditional polyhedral model. Only the loops within the graph scope but outside the task scope need to be affine, and loop transformation are applied on them to perform task scheduling.
### 3.2 Unified optimization model
By analyzing the similarities and differences of the optimization schemes, we group the schemes into four basic dimensions: Customization, Mapping, Scheduling and Transformation. With the target Optimization in the center, we therefore arrive at CMOST as the name for our framework. Customization models the design space at the application level using parameterized source code. Mapping and scheduling determine the spatial resource allocation and temporal execution for each component in the application model. Microarchitecture optimizations are represented as a set of semantics-preserving transformations of the application model. Table 1 shows how the different system optimizations are projected onto the four basic dimensions.
**Table 1. Generalization of the microarchitecture optimizations**
<table>
<thead>
<tr>
<th>Optimization</th>
<th>Customization</th>
<th>Mapping</th>
<th>Scheduling</th>
<th>Transformation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Data reuse</td>
<td>-</td>
<td>allocate SRAM for buffer</td>
<td>-</td>
<td>create local buffer and fetcher</td>
</tr>
<tr>
<td>Data blocking</td>
<td>-</td>
<td>-</td>
<td>be in the order of blocks</td>
<td>-</td>
</tr>
<tr>
<td>Prefetching</td>
<td>-</td>
<td>allocate SRAM for buffer</td>
<td>-</td>
<td>create local buffer and fetcher</td>
</tr>
<tr>
<td>Streaming</td>
<td>-</td>
<td>allocate SRAM for buffer</td>
<td>-</td>
<td>create local buffer and fetcher</td>
</tr>
<tr>
<td>Module selection</td>
<td>module design space</td>
<td>select the options to map</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Parallelization</td>
<td>-</td>
<td>determine type of duplication and execution</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
**Customization** models the application-level design space using the parameterized programs written by users. This is inspired by Genesis2 [16], which used a model-based methodology where design templates and their configurations are separated, so that exploration of the detailed module implementations can be done at the system level. While Genesis2 supports only Verilog/SystemVerilog, CMOST extends the template representation to support C/C++, Tcl, Perl and any textual source code. This creates a unified mechanism for separating different design considerations in the whole design flow, such as platform-dependent vs. independent, and application-dependent vs. independent constraints. Another improvement over Genesis2 is support for describing the design space, i.e. the ranges of the parameters, in the templates. Automated design exploration can benefit from this because a joint exploration of architecture parameters and task parameters can be performed. The design space of task $t$ can be modeled as a set of Pareto-optimal points in the design metrics space:
$$\text{Metrics}_t = \{(res_{t,i},\ lat_{t,i},\ thrpt_{t,i}) \mid 0 \leq i < S_t\} \quad (1)$$
where $res_{t,i}$, $lat_{t,i}$, and $thrpt_{t,i}$ are the resource utilization, latency and throughput of the $i$-th option of task $t$, and $S_t$ is the number of design options for task $t$.
**Mapping** determines the resource allocation of the tasks and data in the application model. Binary selection variables $b_{t,i}$ indicate whether task $t$ is implemented as its design option $i$:
$$res_t = \sum_i res_{t,i} \cdot b_{t,i} \quad (2)$$
$$\sum_i b_{t,i} = 1 \quad (3)$$
where $res_t$ is the resource utilization of the selected option for task $t$. An integer duplication factor $d_t$ indicates the number of parallel hardware units allocated for task $t$:
$$res'_t = res_t \cdot d_t; \quad lat'_t = lat_t; \quad thrpt'_t = thrpt_t \cdot d_t \quad (4)$$
To model data reuse and streaming buffers, binary variables $r_{a,j}$ and $s_{a,j}$ indicate whether the reuse or the prefetching scheme is applied to access reference $j$ of array $a$:
$$res_{a,j}[\mathrm{SRAM}] = res_{ram} \cdot r_{a,j}; \quad res_{a,j}[\mathrm{BW}] = (1 - r_{a,j}) \cdot BW_{a,j} \quad (5)$$
The mapping constraints bound the total resources of each type, e.g. LUT, FF, DSP, SRAM and BW (off-chip bandwidth):
$$\sum_t res_t[\mathrm{LUT}] \leq total_{\mathrm{LUT}}; \;\; \ldots; \;\; \sum_{a,j} res_{a,j}[\mathrm{BW}] \leq total_{\mathrm{BW}} \quad (6)$$
**Scheduling** determines the execution start time of each task instance. In the polyhedral model, scheduling functions are used to specify the execution order via an affine mapping from the task iteration domain to the space of order vectors. To simplify the discussion, we only address 1-D order/scheduling vectors:
$$\theta(\vec{x}) = T \cdot \vec{x} + c \quad (7)$$
where $\vec{x}$ is the task instance index and $\theta(\vec{x})$ is the order vector. However, the polyhedral model is originally used for loop transformation, where only the relative order of the statement instances is of interest. Task scheduling in FPGA optimizations needs an extended model to support the execution of pipelined and parallel task instances. In CMOST, the data dependency constraints, which need to be preserved for the program semantics, take the execution latency of the task instances into account:
$$\phi_s(\vec{x}) + lat_s \leq \phi_t(\vec{y}) \quad \forall\, s[\vec{x}] \rightarrow t[\vec{y}] \quad (8)$$
where $\phi_s(\vec{x})$ is the start time of task instance $s[\vec{x}]$ in the time domain, and $s[\vec{x}] \rightarrow t[\vec{y}]$ means that task instance $t[\vec{y}]$ is dependent on $s[\vec{x}]$. FPGA hardware modules typically run in a pipelined way, where the initiation interval is the reciprocal of the throughput:
$$\phi_t(\vec{x}) + 1/thrpt_t \leq \phi_t(\vec{y}) \quad \forall\, t,\ \vec{x} + d_t \leq \vec{y} \quad (9)$$
Duplicated hardware units of the same task allow multiple task instances to start simultaneously:
$$\phi_t(\vec{x}) = (T \cdot \vec{x} + c) / d_t \quad (10)$$
For task-level pipelining, additional constraints for the streaming buffers are required (in the case of double buffering):
$$\phi_s(\vec{x}) + lat_s \leq \phi_t(\vec{x}) \;\wedge\; \phi_t(\vec{x}) + lat_t \leq \phi_s(\vec{x}+2) \quad \forall\, s \Rightarrow t,\ \vec{x} \in D_t \quad (11)$$
where $D_t$ is the iteration domain of task $t$, and $s \Rightarrow t$ means there is a stream from task $s$ to task $t$. Finally, the system performance can be expressed as:
$$sys_{lat} = \max_{\vec{x}} \{\phi_t(\vec{x}) + lat_t\}, \quad sys_{thrpt} = \frac{1}{\max_{\vec{x}}(\phi_t(\vec{x}+1) - \phi_t(\vec{x}))} \quad (12)$$
where $t$ is the output task we use to measure performance. Figure 6 provides some examples of the scheduling modeling.
---
1 We assume all modules continue running, so the total BW is the sum of the module BWs. More complex cases are beyond the scope of this paper.
4.1 Task-level dependence analysis
Task-level optimizations such as data reuse require the underlying dependence analysis tool to be extended as well. The dependency is calculated between data regions instead of data elements. The work in [17] proposed a formulation to calculate the reusable data regions by intersecting the polytopes of the data elements that successive loop iterations access, but the reuse across two loop iterations is not considered in that formulation. Thus, it is necessary to design a general dependence-distance calculation pass for the task-level polyhedral model representation in order to integrate data reuse into the system-level automation.
Access functions in the task-level polyhedral model can be expressed as \( F_t(\vec{y}) = \{ f(\vec{y}, \vec{a}_t) \mid \vec{y} \in D_t,\ \vec{a}_t \in D_{a_t} \} \), where \( \vec{y} \) is the instance index of task \( t \), \( \vec{a}_t \) is the iterator variable inside the task body for the array reference, and \( f \) is a linear combination of the components of \( \vec{y} \) and \( \vec{a}_t \).
We define two task instances as dependent if any data element produced by one instance is used by the other instance. A dependency polytope can be used to represent the set of index pairs of the dependent task instances.
\[
P_{st} = \{ (\vec{x}, \vec{y}) \mid F_s(\vec{x}) \cap F_t(\vec{y}) \neq \emptyset,\ \vec{x} \in D_s,\ \vec{y} \in D_t \} \quad (13)
\]
This appears to be different from the basic polyhedral model [14] in terms of the mathematical form, but if we substitute the access functions into Equation (13), $P_{st}$ is actually in a perfectly linear form in terms of the iterator variables:
\[
P_{st} = \{ (\vec{x}, \vec{y}) \mid f_s(\vec{x}, \vec{a}_s) = f_t(\vec{y}, \vec{a}_t),\ \vec{x} \in D_s,\ \vec{y} \in D_t,\ \vec{a}_s \in D_{a_s},\ \vec{a}_t \in D_{a_t} \} \quad (14)
\]
Hence all the general polyhedral analysis methods can be applied to this task-level model. For example, the reuse buffer size is determined by the reuse distance. The reuse distance is the difference of the task instance indexes between the source access reference and the reused access reference, and it can be conservatively calculated as
\[
\vec{d} = \operatorname{lexmax}(\vec{y} - \vec{x}) \quad \text{s.t.} \quad (\vec{x}, \vec{y}) \in P_{st}
\]
where $\operatorname{lexmax}$ computes the lexicographically maximum vector. This optimization problem can be solved by integer linear programming. The reuse distance we obtain can then be used to calculate the reuse buffer size using the approach in [10].
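To make the reuse-distance definition concrete, the following C sketch brute-forces the maximum for a tiny 1-D case instead of using integer linear programming (which is what the flow above relies on). The access pattern and domain size are hypothetical: task instance x reads elements a[x] and a[x+3], so instance y = x+3 reuses data first touched by instance x, giving a reuse distance of 3.

```c
/* Brute-force illustration of the reuse distance for a toy 1-D access pattern. */
#include <stdio.h>

#define DOMAIN 16

int main(void)
{
    int x, y, best = -1;
    for (x = 0; x < DOMAIN; x++)
        for (y = x + 1; y < DOMAIN; y++)
            /* dependent pair: instance y's first reference (index y) hits
             * instance x's second reference (index x + 3)                  */
            if (y == x + 3 && y - x > best)
                best = y - x;
    printf("maximum reuse distance = %d\n", best);   /* prints 3 */
    return 0;
}
```

The reuse buffer then only needs to hold the data touched within that distance, which is how the distance translates into a buffer size.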
4.2 Block-based data streaming
In the task level streaming, tasks in different pipeline stages communicate via FIFO so that the synchronization between the tasks is minimized, and read and write accesses can be performed in parallel. Automated flows such as [18] have been established to generate the FIFO-based streaming from high-level programs. However, limited by the channel type, the data communicated between the streaming stages are required to be in the same order at producer and consumer sides. So when the orders do not match, additional memory is needed at either side to perform the reordering operation.
We propose an extension of the traditional streaming framework by introducing block FIFOs. A block FIFO consists of several memory blocks where data access within one block can be accessed in random address, and the order of the blocks accessed by both sides should be the same. For example, in the DCT case in Figure 9(a), data are produced in row order in the first stage and consumed in column order in the second stage. Figure 9(d) shows the overall structure of the block FIFO, which is similar to a traditional FIFO where each data element in the FIFO is replaced by a memory block in block FIFOs. Control signals like read, write, empty and full are all at block level, and they are used to switch the FIFO pointers of blocks at two sides in a cyclic way like basic FIFO. Data accesses are only allowed within the block that FIFO pointers are pointing to, and address signals are used for random access. This locally-random-globally-ordered mechanism of block FIFOs fits well in the CMOST’s task-centric application model where a block of data is accessed by each task instance.
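A minimal C sketch of the block-FIFO idea just described follows; it is not CMOST source, and the sizes and field names are hypothetical. It shows the two levels of the mechanism: block-level full/empty handshaking cycled like a basic FIFO, and random addressing only inside the block each side currently owns.

```c
/* Illustrative block FIFO: globally ordered blocks, locally random addressing. */
#include <string.h>

#define NUM_BLOCKS  4
#define BLOCK_WORDS 64

typedef struct {
    int data[NUM_BLOCKS][BLOCK_WORDS];
    int wr_blk, rd_blk;       /* block pointers cycled like a basic FIFO */
    int count;                /* number of blocks currently filled       */
} block_fifo;

static void bf_init(block_fifo *f)        { memset(f, 0, sizeof *f); }
static int  bf_full(const block_fifo *f)  { return f->count == NUM_BLOCKS; }
static int  bf_empty(const block_fifo *f) { return f->count == 0; }

/* producer: random-address write inside the current write block */
static void bf_write(block_fifo *f, int addr, int value) { f->data[f->wr_blk][addr] = value; }
static void bf_commit(block_fifo *f)      /* producer finished one block */
{
    f->wr_blk = (f->wr_blk + 1) % NUM_BLOCKS;
    f->count++;
}

/* consumer: random-address read inside the current read block */
static int  bf_read(const block_fifo *f, int addr) { return f->data[f->rd_blk][addr]; }
static void bf_release(block_fifo *f)     /* consumer finished one block */
{
    f->rd_blk = (f->rd_blk + 1) % NUM_BLOCKS;
    f->count--;
}

int main(void)
{
    block_fifo f;
    int v;
    bf_init(&f);
    if (!bf_full(&f)) {                        /* producer fills one block in row order  */
        bf_write(&f, 10, 42);
        bf_commit(&f);
    }
    v = bf_empty(&f) ? -1 : bf_read(&f, 10);   /* consumer reads it back in another order */
    bf_release(&f);
    return v == 42 ? 0 : 1;
}
```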
To fully automatically generate the block FIFO-based design, the size of the buffer and the address mapping should be determined. We first merge the references to be mapped to the block FIFO into one union data set. Then the address mapping problem can be generalized as: Given a parameterized set \( \{f(x, \alpha) | \alpha \in D_\alpha \} \) representing the virtual address in the program, find a one-to-one mapping from this set to a non-negative integer set representing the block FIFO addresses. The maximum value in the new set should be minimized to save memory space. Research has been conducted on address mapping for memory size reduction [19], which is quite complex. We propose a novel and simple method to generate the addresses for block FIFO.
For example, let the input set be \( \{ 256i + 8j + 5 \mid 0 \leq i \leq 7, 0 \leq j \leq 7 \} \). This set does not start from zero, so we first subtract a constant from the expression and get \( 256i + 8j \). The points are scattered in the integer space, so we can divide the expression by the GCD of all the coefficients and get \( 32i + j \). In addition, since the variable \( j \) has a small range of 8, the coefficient 32 can be reduced to 8 while a one-to-one mapping is still satisfied. Then the mapped local address in the block FIFO is \( 8i + j \). As a result, the original scattered points are mapped into a dense range from 0 to 63.
The detailed algorithm for address mapping can be summarized as Algorithm 1. The buffer size is also obtained in the algorithm.
### Algorithm 1 Address mapping for block FIFO
1. **input** \( C \); // list of the coefficients for the iterators \( \alpha \)
2. **input** \( R \); // list of the ranges of the iterators \( \alpha \)
3. **integer** \( p \); // current size for the mapped set
4. **sort** \( C \) and \( R \) according the coefficient values (smaller first)
5. divide all the values in \( C \) by the GCD of them; set \( p = 1 \)
6. for all the coefficients in \( C \) (indexed by \( i \)) do
7. if \( C[i] > p \) then \( C[i] = p \) and \( p = C[i] \times R[i] \); // coefficient reduced
8. **else** \( p += C[i] \times R[i] \); // already dense
9. end for
10. **insert** the constant into \( C \) to shift the starting address to zero
11. **return** the updated \( C \) for local address and \( p \) for buffer size
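The following C sketch applies the address-mapping procedure of Algorithm 1 to the worked example above. The exact size-update rule in lines 7-8 is read here as "p += C[i] * (R[i] - 1)", which reproduces the example's result (local address 8i + j, buffer size 64); that reading is an assumption, not taken verbatim from the listing.

```c
/* Sketch of Algorithm 1 (address mapping for block FIFO); update rule assumed. */
#include <stdio.h>

static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

/* coefficients C[] and iterator ranges R[] must be sorted by coefficient,
 * smallest first; returns the buffer size and rewrites C[] in place          */
static int map_address(int C[], int R[], int n)
{
    int i, g = C[0], p = 1;
    for (i = 1; i < n; i++) g = gcd(g, C[i]);
    for (i = 0; i < n; i++) C[i] /= g;          /* step 5: divide by the GCD  */
    for (i = 0; i < n; i++) {
        if (C[i] > p) C[i] = p;                 /* step 7: close the gap      */
        p += C[i] * (R[i] - 1);                 /* assumed size update        */
    }
    return p;
}

int main(void)
{
    int C[2] = {8, 256};                        /* coefficients of j and i    */
    int R[2] = {8, 8};                          /* ranges of j and i          */
    int size = map_address(C, R, 2);
    printf("local address = %d*i + %d*j, buffer size = %d\n", C[1], C[0], size);
    return 0;   /* prints: local address = 8*i + 1*j, buffer size = 64 */
}
```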
## 4.3 Automated SDF generation
System optimizations for streaming applications have been well studied in recent decades. To leverage this work, the current CMOST framework adopts the method in [12] for the scheduling and mapping optimizations. However, previous works rarely explored the issue of creating the synchronous dataflow (SDF) model from a sequential C program. Automated SDF generation is developed in CMOST to establish a fully automated design flow.
As shown in Figure 10(b), the SDF graph contains computation actors and communication edges. Data streams between actors via FIFOs in the edges. The numbers annotated on both sides of the edge represent the data rates, i.e., the number of units produced or consumed by each firing of the actor. If there are not enough data units on the input edge, the actor will not be fired.
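As an illustration of the SDF notion just described, the C sketch below (not CMOST source; names are hypothetical) models an edge with production and consumption rates and a token count, and checks the firing rule that an actor may fire only when every input edge holds enough tokens.

```c
/* Illustrative SDF edge and firing-rule check. */
#include <stdio.h>

typedef struct {
    int prod_rate;     /* tokens produced per firing of the source actor */
    int cons_rate;     /* tokens consumed per firing of the sink actor   */
    int tokens;        /* tokens currently buffered in the edge's FIFO   */
} sdf_edge;

/* an actor can fire only when all of its input edges have enough tokens */
static int can_fire(sdf_edge *inputs[], int n_inputs)
{
    int i;
    for (i = 0; i < n_inputs; i++)
        if (inputs[i]->tokens < inputs[i]->cons_rate)
            return 0;
    return 1;
}

int main(void)
{
    sdf_edge e = {8, 64, 8};            /* e.g. a row producer feeding a   */
    sdf_edge *in[] = {&e};              /* consumer that needs 64 tokens   */
    printf("consumer %s fire yet\n", can_fire(in, 1) ? "can" : "cannot");
    return 0;
}
```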
designs is relatively small, the results are comparable to those obtained with extensive manual source-code transformations. For the last two cases, dynamic scheduling is applied manually, which is not supported in the current flow. Overall, we can achieve over 8x speedup and 120x energy gain on average.
Table III. Comparison of programming efforts
<table>
<thead>
<tr>
<th>Design</th>
<th># of tasks in CMOST</th>
<th>original code line</th>
<th>OpenMP changes</th>
<th>CMOST changes</th>
<th>Manual changes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Medical Imaging</td>
<td>6</td>
<td>700+</td>
<td>6</td>
<td>28</td>
<td>800+</td>
</tr>
<tr>
<td>MPEG</td>
<td>3</td>
<td>6000+</td>
<td>10</td>
<td>35</td>
<td>1500+</td>
</tr>
</tbody>
</table>
Table III shows that CMOST achieves the speedup with a small number of source code changes (mainly adding a few pragmas to mark the hardware task regions). Compared to the manual HLS design, a great deal of effort is saved. In addition, once a design is ready for one FPGA platform, only a one-line change in the directive file is needed to switch platforms or change the working frequency.
8. CONCLUSION
We present an open-source C-to-FPGA automation flow, which can achieve over 8x speedup and 120x energy gain on average compared to multi-core CPU results using a similar input program. A unified optimization framework is proposed for the combination of various microarchitecture optimizations. Several novel techniques are introduced for the integration of the fully automated design flow. Further work will include 1) automating the task marking process to further minimize the design efforts and enable optimization of task partitioning; 2) improving the design results by introducing more advanced microarchitecture optimizations such as dynamic scheduling; and 3) providing automation for system evaluation and debugging. CMOST is available for download at http://vast.cs.ucla.edu/software/cmost-system-level-fpga-synthesis.
9. ACKNOWLEDGEMENTS
The authors would like to thank Young-Kyu Choi, Hassan Kianinejad, Jie Lei, Peng Li, Jie Wang, and Yuxin Wang for the efforts in CMOST development and design case study. This research is partially supported by the NSF Expeditions in Computing Award CCF-0926127.
RAYGO: Reserve As You GO
©2021 IEEE. Author's Accepted Manuscript. DOI: 10.1109/DASC-PICom-CBDCom-CyberSciTech52372.2021.00055
Abstract—The capability to predict the precise resource requirements of a microservice-based application is a very important problem for cloud services. In fact, the allocation of abundant resources guarantees an excellent quality of experience (QoE) for the hosted services, but it can translate into unnecessary costs for the cloud customer due to the reserved (but unused) resources. On the other side, poor resource provisioning may turn out in scarce performance when experiencing an unexpected peak of demand. This paper proposes RAYGO, a novel approach for dynamic resource provisioning to microservices in Kubernetes that (i) relieves the customers from the definition of appropriate execution boundaries, (ii) ensures the right amount of resources at any time, according to the past and the predicted usage, and (iii) operates at the application level, acknowledging the dependency between multiple correlated microservices.
Keywords—Resource management, Cloud, container, autoscaling, vertical scaling
I. INTRODUCTION
Many companies massively leverage cloud computing services for their IT businesses, possibly benefiting from the pay-as-you-go paradigm (also referred to as the serverless approach), using and paying for only the resources needed at any given time. This can bring significant cost savings to customers and allow them to concentrate on their higher-level business logic, leaving infrastructure-level tasks (e.g., hardware provisioning and maintenance) to the cloud provider.
However, in order to share the infrastructure among the above customers, the cloud provider usually requires each tenant to define precise resource boundaries for their workloads. According to the Kubernetes terminology, this involves two parameters: Requests and Limits. A Request is what the microservice is guaranteed to get (e.g., 0.5 virtual CPUs), while the Limit determines the maximum amount of resources that can be consumed (e.g., 1 virtual CPU), which can be provided in a best-effort fashion. In any case, the workload is never allowed to consume resources beyond limits. Finally, Requests and Limits could also be used by the cloud provider to charge the customer, either directly or through cluster autoscaling; the customer therefore has to carefully balance the trade-off between resource abundance and costs.
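For concreteness, this is how such boundaries can be expressed with the official Kubernetes Python client; the names and values below are purely illustrative.

```python
from kubernetes import client

# Illustrative container spec: the Request is the guaranteed share,
# the Limit is the hard cap that is only provided best-effort above the Request.
resources = client.V1ResourceRequirements(
    requests={"cpu": "500m", "memory": "256Mi"},
    limits={"cpu": "1", "memory": "512Mi"},
)
container = client.V1Container(name="web", image="example/web:1.0", resources=resources)
```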
Still, developers may not know in advance how to properly quantify these execution boundaries, ending up in two common errors: (i) over-commitment, asking for an amount of resources that is way higher than the actual needs of the application, in order to guarantee the best QoE in every circumstance; (ii) under-commitment, defining boundaries lower than the actual needs, due to imprecise estimations or unexpected spikes of requests. In the first case (over-commitment), the customer will also be charged for unused resources, resulting in unnecessary expenses. In the second case (under-commitment), the customer will experience poor QoE for the deployed application. Interestingly, the first scenario is the most likely: Google estimates that applications actually use no more than 50% of their requested resources [1].
These issues trace back to the immutability of the resources assigned to an application throughout its execution, with no possibility to update them at run-time. Yet, it is hard to assess in advance how many resources a job needs to run optimally. Load tests can help find an initial estimate, but these recommendations soon become stale as workload needs change over time. Indeed, many end-user serving jobs have daily or weekly load patterns, and traffic changes across longer time scales as a service becomes more or less popular.
A common solution to the above problem is horizontal autoscaling, which is largely available in public cloud providers as well as in vanilla Kubernetes. This technique (e.g., Kubernetes Horizontal Pod Autoscaler — HPA [2]) leverages the dynamicity of the microservice paradigm by adding or removing replicas in response to changes in the metric under observation, such as the end-user traffic, the average CPU utilization, and more. However, despite its popularity, horizontal autoscaling may be difficult to configure properly and in many cases its effectiveness is subordinated to a significant waste of resources. In fact, the creation of new replicas implies a step increase of all reservations; furthermore this technique is sub-optimal in case only one type of resource (e.g. CPU) should be adjusted to address the current demands, as horizontal autoscaling implies adjusting resources of all types (e.g., CPU and RAM).
A less frequent approach involves vertical autoscaling (e.g., Kubernetes Vertical Pod Autoscaler — VPA [3]) to tune at run-time the amount of resources available to each replica by configuring the Requests of each container based on its
usage. Although apparently less popular, vertical autoscaling is adopted as a valuable solution for container management in Google data centers [1], providing a more accurate and fine-grained control of resource provisioning. It ensures better performance especially for those applications that rely on specific network protocols for inter-microservice communication, such as gRPC, which suffer from poor load-balancing if the number of back-ends varies dynamically. In fact, gRPC massively relies on long-lived TCP persistent connections, which can hardly be split or redirected to other replicas, hence undermining the basic assumption behind horizontal autoscaling. However, classical vertical autoscaling operates at the single microservice level, hence possibly neglecting the correlations between the multiple components of a single application. Additionally, it was designed with slowly-variable workloads in mind, hence failing to achieve good performance if the load changes abruptly.
The main contribution of this paper is RAYGO, a prediction algorithm for vertical resource provisioning in Kubernetes. RAYGO handles the execution of a microservice-based application through two components. First, a proactive engine, which estimates the future microservice resource demands based on its past history. Second, a reactive engine, which dynamically refines the profiling decision according to the overall behavior of the entire application, ensuring fast reactions and preventing performance drops during load spikes.
The rest of the paper is organized as follows. Section II details the RAYGO algorithm, as well as its experimental evaluation in Section III. Finally, Section IV summarizes the related work and Section V concludes the paper.
II. RAYGO
Given the complexity and the heterogeneity of microservices behavior, we designed RAYGO as composed of two different components. Specifically, (i) a proactive engine (Historical Data Profiler — HDP), which aims to predict the future behavior of a microservice based on its past executions and (ii) a reactive engine (Execution Data Predictor — EDP), which constantly monitors the entire application execution (i.e., the combination of multiple related microservices) and refines the profiling decision based on the information about its current behavior.
The combination of the above two components is required since a purely proactive approach may not be suitable for heavily variable workloads characterized by sudden spikes of requests and unexpected load changes. On the other hand, a purely reactive one, following strictly the current needs of the jobs, may be able to identify load changes much more quickly, but it can result in continuous updates of the resources assigned to each microservice. Yet, as of today and in the context of Kubernetes clusters, this is a highly disruptive process, given that it requires restarting the application. Although compliant with the stateless approach, too frequent updates could result in poor QoE, as well as overall higher resource consumption during the transient. Conversely, a purely reactive approach might be feasible if the resources assigned could be varied at run-time, with no service disruption. Overall, the role of the proactive component is to mitigate the update rate of the reactive one, hence limiting the application downtime.
A. Historical Data Profiler (HDP)
One of the results emerging from major cloud provider reports [4] is that many hosted applications experience quite stable resource usage patterns in relatively small periods of time. Hence, we can expect the future behavior of a given microservice to be similar to the one observed in the past. Specifically, this component (i) collects the information about the previous executions of a given microservice, (ii) processes the historical data to extract the relevant key features and (iii) leverages the outcome to compute the final profiling value.
Focusing on the feature extraction phase, let us consider as input, at a specific instant in time $t_0$, the set of the $n$ past measurements $\xi_{\mu,x}$ referring to a given resource quantity $x$ (e.g., CPU usage) of microservice $\mu$, collected with sampling period $\delta$:
$$\Xi_{\mu,x}(n) = \{\xi_{\mu,x}[t_0-\delta i] : i \in [0,n]\}. \tag{1}$$
First, the measurements are weighted by the function $w[i] = 2^{-\delta i/\tau}$, to smooth the response to load spikes and give increased relevance to the samples closer in time:
$$\hat{\Xi}_{\mu,x}(n) = \Xi_{\mu,x}(n) \cdot w[i] = \{\xi_{\mu,x}[t_0-\delta i] \cdot 2^{-\delta i/\tau} : i \in [0,n]\}, \tag{2}$$
where $\tau$ is defined as the half life, that is the time after which the weight drops by half. In other words, the larger $\tau$, the slower the system reacts, due to the increased relevance associated with the older samples. Conversely, a small $\tau$ value corresponds to more rapid reactions, quickly neglecting past measurements.
At this point, the final HDP prediction $\Phi_{\mu,x}^{HDP}$ is computed as the $r^{th}$ percentile ($P_r$) of the last $n$ weighted samples:
$$\Phi_{\mu,x}^{HDP} = P_r(\hat{\Xi}_{\mu,x}(n)), \quad i \in [0,n]. \tag{3}$$
The selection of the appropriate $r$ value needs to necessarily take into account the specific characteristics of the monitored feature (e.g., its volatility), as well as the orchestrator behavior. For instance, an under-sized CPU limit could simply lead to reduced performance, as an orchestrator can enforce throttling periods to satisfy the CPU limits. This does not raise any concern for sufficiently short intervals as computations can complete correctly, just slightly slower. Instead, an under-sized RAM limit could lead to service disruption, as a microservice exceeding its limits causes the orchestrator to react with an out-of-memory (OOM) event, effectively terminating the microservice and causing the restart of the application. The following details the approach selected for CPU and RAM predictions, although similar considerations
apply in case other resource types (e.g. ephemeral storage) are considered.
**CPU usage:** Given the possible high volatility of this metric due to the alternations between short-term load spikes and idle periods, it is fundamental to balance between QoE and excessive resource demands triggered by load peaks. For this reason, based on our experience, we suggest to select \( r \in [90, 100] \), with lower values resulting in more aggressive resource estimations, and higher ones being more conservative, reducing the probability of throttling periods:
$$\Phi^{\mathrm{HDP}}_{\mu,\mathrm{CPU}} = P_r\big(\hat{\Xi}_{\mu,\mathrm{CPU}}(n)\big), \quad i \in [0, n),\; r \in [90, 100]. \tag{4}$$
**RAM usage:** Given the service disruption possibly experienced with a poor RAM estimation and considering that the RAM usage typically varies slowly over time due to allocation and caching policies, we conservatively suggest to select \( r = 100 \). Hence, the HDP module computes \( \Phi^\text{HDP}_{\mu,\text{RAM}} \) as the maximum of the last \( n \) weighted samples:
$$\Phi^{\mathrm{HDP}}_{\mu,\mathrm{RAM}} = \max_{i \in [0, n)} \hat{\Xi}_{\mu,\mathrm{RAM}}(n). \tag{5}$$
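A minimal Python sketch of the HDP engine may help; it is our own illustration (sample values are made up) of the weighting of Eq. (2) followed by the percentile of Eq. (3), instantiated for CPU and RAM as in Eqs. (4) and (5).

```python
import numpy as np

def hdp_prediction(samples, delta, tau, r):
    """samples[0] is the most recent measurement, taken every `delta` seconds;
    `tau` is the half life and `r` the percentile (Eqs. (2)-(3))."""
    x = np.asarray(samples, dtype=float)
    i = np.arange(len(x))
    weighted = x * 2.0 ** (-delta * i / tau)   # older samples weigh less
    return np.percentile(weighted, r)

cpu_samples = [0.62, 0.55, 0.58, 0.41, 0.39]   # cores, newest first (illustrative)
ram_samples = [310, 305, 298, 300, 296]        # MiB, newest first (illustrative)
cpu_hdp = hdp_prediction(cpu_samples, delta=1, tau=15 * 60, r=97)   # Eq. (4)
ram_hdp = hdp_prediction(ram_samples, delta=1, tau=15 * 60, r=100)  # Eq. (5): maximum
```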
**B. Execution Data Predictor (EDP)**
The second concept which emerges from public reports [4] regards the impossibility to correctly infer the future resource consumption based on historical data only for certain categories of microservices. In these cases, it is clear that the proactive approach adopted by the HDP, alone, is not sufficient.
For this reason, we introduce the Execution Data Predictor (EDP) engine, which continuously adapts the HDP prediction based on the current application demands. Specifically, it (i) constantly monitors a different set of metrics (as detailed in the following) with respect to the HDP, to characterize the current microservice execution; (ii) uses the gathered information to identify load spikes; (iii) generates a corrective factor to adapt \( \Phi^\text{HDP}_{\mu,x} \) based on the specific resource demands. The EDP operates considering the complete set of microservices composing a single application, rather than assessing each one independently. Indeed, preliminary evaluations have shown a reduction in the number of application resource updates with such an approach and consequently significant improvements in application performance (thanks to the downtime reduction). In fact, without a contextualized view of the application, RAYGO struggles to settle on its resource predictions, alternately updating the microservices as the workload pressure constantly moves from the front-end to the back-end, and vice versa.
As for the metrics considered, the EDP focuses specifically on the indicators highlighting that a given set of microservices is struggling to achieve good performance. In detail, as for CPU usage, we consider the amount of throttling periods imposed by the orchestrator, which points out the presence of too strict limits with respect to the current workload demands. Similarly, considering RAM usage, the EDP evaluates the number (and type) of memory failure events (ranging from page faults to OOM) sent to the microservice, as we experimentally observed they tend to grow when reaching the configured boundaries.
Let \( \Xi^t_{\mu,x}(m) \) represent again the set of the past \( m \) measurements, considering in this case the number of throttling periods (CPU prediction) and memory failure events (RAM prediction) as metrics of interest. The future value \( \xi^t_{\mu,x}[t_p] \), where \( t_p = t_0 + p\delta \) and \( p > 0 \), can be estimated from the previous \( m \) observations as:
$$\xi^t_{\mu,x}[t_p] = \frac{2\left(\alpha \sum_{i=0}^{m-1} \xi^t_{\mu,x}[t_0 - \delta i] \;-\; \beta \sum_{i=0}^{m-1} i\,\xi^t_{\mu,x}[t_0 - \delta i]\right)}{m(m^2 - 1)}, \tag{6}$$
where:
$$\alpha = 2m^2 - 3m + 3mp - 3p + 1 \quad \text{and} \quad \beta = 3(m + 2p - 1). \tag{7}$$
Next, for each metric considered, we derive the application-wide baseline value \( \Upsilon_{A,x} \) to capture the behavior of the entire set of related microservices \( \mu \in A \). Specifically, \( \Upsilon_{A,x} \) is computed as the average of the last \( m \) measurements, over all microservices composing the application of interest:
$$\Upsilon_{A,x} = \operatorname*{avg}_{\mu,\,i} \Xi^t_{\mu,x}(m), \quad i \in [0, m),\; \mu \in A. \tag{8}$$
Given the outcome of the prediction \( \xi^t_{\mu,x}[t_p] \) and the baseline \( \Upsilon_{A,x} \), we derive, for each microservice \( \mu \), an intermediate factor \( \Psi_{\mu,x} \) obtained by computing the ratio between the two values, oriented so that the result is always a number \( \geq 1 \):
$$\Psi_{\mu,x} = \begin{cases} \dfrac{\xi^t_{\mu,x}[t_p]}{\Upsilon_{A,x}}, & \text{if } \xi^t_{\mu,x}[t_p] \geq \Upsilon_{A,x} \\[2mm] \dfrac{\Upsilon_{A,x}}{\xi^t_{\mu,x}[t_p]}, & \text{if } \xi^t_{\mu,x}[t_p] < \Upsilon_{A,x} \end{cases} \tag{9}$$
In the end, the outcome of the EDP engine (\( \Phi^\text{EDP}_{\mu,x} \)) is:
$$\Phi^{\mathrm{EDP}}_{\mu,x} = \begin{cases} +\lambda \sigma \Psi_{\mu,x}, & \text{if } \xi^t_{\mu,x}[t_p] \geq \Upsilon_{A,x} \\ -\lambda \sigma \Psi_{\mu,x}, & \text{if } \xi^t_{\mu,x}[t_p] < \Upsilon_{A,x} \end{cases} \tag{10}$$
where \( \lambda \) and \( \sigma \) are two positive scaling constants. The sign of \( \Phi^\text{EDP}_{\mu,x} \) reflects the predicted additional demands of the current microservice compared to the entire application. Intuitively, focusing on CPU usage, above-average throttling periods indicate a struggling microservice (i.e., demanding for more resources), while below-average ones typically follow the end of a load spike, thus allowing for stricter quotas.
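The following Python sketch (our own illustration; only the \( p \), \( \lambda \) and \( \sigma \) values come from Table I) puts Eqs. (6)-(10) together to compute the EDP corrective factor for every microservice of an application from its recent throttling or memory-failure counts.

```python
import numpy as np

def edp_forecast(samples, p):
    """Least-squares linear extrapolation p steps ahead (Eqs. (6)-(7));
    samples[0] is the most recent of the m observations."""
    m = len(samples)
    x = np.asarray(samples, dtype=float)
    i = np.arange(m)
    alpha = 2 * m**2 - 3 * m + 3 * m * p - 3 * p + 1
    beta = 3 * (m + 2 * p - 1)
    return 2 * (alpha * x.sum() - beta * (i * x).sum()) / (m * (m**2 - 1))

def edp_correction(series_by_microservice, p=15, lam=0.5, sigma=0.1):
    """series_by_microservice: dict name -> last m samples (newest first) of the
    metric of interest (e.g. throttling periods). Returns Phi_EDP per microservice."""
    all_samples = np.concatenate([np.asarray(s, float) for s in series_by_microservice.values()])
    baseline = all_samples.mean()                     # Eq. (8), assumed positive here
    correction = {}
    for name, series in series_by_microservice.items():
        forecast = edp_forecast(series, p)            # Eqs. (6)-(7)
        psi = forecast / baseline if forecast >= baseline else baseline / forecast  # Eq. (9)
        sign = 1.0 if forecast >= baseline else -1.0
        correction[name] = sign * lam * sigma * psi   # Eq. (10)
    return correction
```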
**C. Final Resource Prediction**
Given \( \Phi^\text{HDP}_{\mu,x} \) and \( \Phi^\text{EDP}_{\mu,x} \), the final resource prediction \( \Phi_{\mu,x} \) for the microservice \( \mu \) and resource quantity \( x \) is derived as:
$$\Phi_{\mu,x} = \Phi^{\mathrm{HDP}}_{\mu,x} \cdot \left(1 + \Phi^{\mathrm{EDP}}_{\mu,x}\right). \tag{11}$$
Overall, the final result is composed of two parts: a baseline, represented by the $\Phi^{\mathrm{HDP}}_{\mu,x}$ value, and a corrective factor (either positive or negative) predicted by the EDP engine.
The entire process is repeated every $\Delta$, hence periodically recomputing new $\Phi_{\mu,x}$ prediction values based on the updated information and possibly varying the microservices configuration depending on the outcome. Hence, a known microservice is started with the latest $\Phi^{\mathrm{HDP}}_{\mu,x}$ values, while its resource quota is periodically updated according to the most recent (application-wide) predictions in order to match the actual necessities of the application.
### III. Experimental validation
This section validates the above approach through a prototype implementation of RAYGO, publicly available at [5]. It manages the microservices execution within a Kubernetes cluster by dynamically adjusting the amount of resources assigned to each single workload according to the outcome of (11). Specifically, Requests are configured to the $\Phi_{\mu,x}$ value, possibly incremented by a small, user-configurable safety margin $\rho \geq 0$ (i.e., $R_{\mu,x} = \Phi_{\mu,x} \cdot (1 + \rho)$). Then, Limits are obtained by enlarging $R_{\mu,x}$ by a configurable factor $\sigma \geq 0$ (i.e., $L_{\mu,x} = R_{\mu,x} \cdot (1 + \sigma)$), to account for sudden and temporary load spikes without waiting for the reaction of RAYGO, which may be slower (and possibly unnecessary). Table I details the complete set of RAYGO parameter values adopted for the evaluation.
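As a compact illustration (our own sketch, reusing the \( \rho \) and \( \sigma \) values of Table I), the final prediction of Eq. (11) translates into Requests and Limits as follows:

```python
def raygo_quota(phi_hdp, phi_edp, rho=0.1, sigma=0.7):
    """Combine the HDP baseline and the EDP correction (Eq. (11)) and derive
    the Request (with safety margin rho) and the Limit (with headroom sigma)."""
    phi = phi_hdp * (1.0 + phi_edp)
    request = phi * (1.0 + rho)
    limit = request * (1.0 + sigma)
    return request, limit

# e.g. an HDP estimate of 0.5 cores with a +5% EDP correction
print(raygo_quota(0.5, 0.05))   # -> (0.5775, 0.98175) cores
```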
In Kubernetes, one of the most adopted abstractions to execute microservices is the Deployment. Deployments define the template of the microservice, including the Docker image, its associated execution environment and, most importantly in this context, the amount of resources (in terms of Requests and Limits) that are enforced by the orchestrator. Additionally, they ensure the desired number of replicas is correctly in execution, and transparently manage rolling updates (i.e., starting a new parallel instance of the microservice and tearing down the old one only once the former is correctly running) to limit the application downtime in case the template is varied.
Our implementation leverages standard Kubernetes labels to identify the set of microservices composing a single application, and relies on Prometheus\(^1\) to gather the execution metrics used by the algorithm. This prototype could be easily extended to properly manage other Kubernetes abstractions (e.g. StatefulSets), as well as to directly gather the metrics from the metrics-server\(^2\) if Prometheus were not available.
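A rough sketch of the two Kubernetes interactions involved, written with the official Python client; the namespace, label, container name and resource values are hypothetical. Listing Deployments by a shared label groups the microservices of one application, and patching the Pod template's resources triggers the rolling update discussed above.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Group the microservices of one application via a (hypothetical) shared label.
deployments = apps.list_namespaced_deployment(
    namespace="boutique",
    label_selector="app.kubernetes.io/part-of=online-boutique",
)

# Apply a new resource quota to one microservice: changing the Pod template
# makes the Deployment perform a rolling update.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "server",  # hypothetical container name
                    "resources": {
                        "requests": {"cpu": "578m"},
                        "limits": {"cpu": "982m"},
                    },
                }]
            }
        }
    }
}
apps.patch_namespaced_deployment(name="frontend", namespace="boutique", body=patch)
```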
#### A. Testbed setup
In the validation process, we leveraged the cloud-native demo application Online Boutique\(^3\), which consists of ten microservices (graphically represented in Fig. 1) interacting with each other through gRPC interfaces. The application is a web-based e-commerce application allowing users to browse and purchase items, while featuring recommendations and currency exchange. Locust\(^4\), an open-source load testing tool, was used to replicate end-user interactions with the platform. It allows defining a custom application workload, in terms of number of fake clients and target endpoints, and then generates the suitable requests according to the configuration.
For the sake of comparison, we first evaluated the performance of the application running unsupervised, without restrictions in resource usage. Then, we assessed the outcome when managed by the Vertical Pod Autoscaler (VPA), v0.9.2, as well as the Horizontal Pod Autoscaler (HPA), as of Kubernetes 1.19, configured to increase the number of replicas when the CPU load reaches 80%. Finally, we tested RAYGO, considering the case with only HDP and the combination of the two engines. In the latter case, we evaluated both a degraded version of the EDP module, assessing each microservice independently (HDP+EDP single), as well as the full, application-aware one (HDP+EDP app). Across all tests, the amount of resources available on the nodes was more than enough to accommodate all the microservices, hence posing no constraints on the different approaches.
---
\(^1\)https://prometheus.io/
\(^2\)https://github.com/kubernetes-sigs/metrics-server
\(^3\)https://github.com/GoogleCloudPlatform/microservices-demo
\(^4\)https://locust.io/
---
**Table I.** RAYGO parameter values adopted for the evaluation

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr><td>Prediction evaluation period ($\Delta$)</td><td>2 min</td></tr>
<tr><td>Measurements sampling period ($\delta$)</td><td>1 s</td></tr>
<tr><td>Resource Requests increase factor ($\rho$)</td><td>0.1</td></tr>
<tr><td>Resource Limits increase factor ($\sigma$)</td><td>0.7</td></tr>
<tr><td>HDP measurements rolling window size ($n$)</td><td>900</td></tr>
<tr><td>HDP measurements rolling window duration ($n\delta$)</td><td>15 min</td></tr>
<tr><td>HDP measurements half life ($\tau$)</td><td>15 min</td></tr>
<tr><td>HDP CPU prediction percentile ($r$)</td><td>97th</td></tr>
<tr><td>HDP RAM prediction percentile ($r$)</td><td>100th</td></tr>
<tr><td>EDP measurements rolling window size ($m$)</td><td>120</td></tr>
<tr><td>EDP measurements rolling window duration ($m\delta$)</td><td>2 min</td></tr>
<tr><td>EDP forward prediction samples ($p$)</td><td>15</td></tr>
<tr><td>EDP scaling constant ($\lambda$)</td><td>0.5</td></tr>
<tr><td>EDP scaling constant ($\sigma$)</td><td>0.1</td></tr>
</tbody>
</table>
---
Figure 1. A representation of the microservices composing the Online Boutique, and their interconnections.
B. Workload Pattern
Using Locust, we generated a custom workload characterized by two main phases. In a first warm-up phase, the number of simulated users is gradually increased up to 1000, and then kept constant. This allows all solutions to adapt the amount of resources (and possibly replicas) based on the application demands. At that point, the actual test phase starts, which simulates an unforeseen spike of users to assess how the different application supervision approaches behave in this scenario. Specifically, it is characterized by four intervals: first, the number of simulated users grows linearly (one new user every second) up to 2000 users, and then is maintained constant. In the third interval, the number of users starts decreasing (one user removed every second) until reaching again 1000, followed once more by a stationary phase. This allows us to assess the behaviour, in terms of QoE and resources, both during and after a load spike.
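A Locust sketch of such a two-phase pattern could look as follows (our own illustration: the endpoint, stage durations and spawn rates are invented, only the 1000- and 2000-user targets follow the text).

```python
from locust import HttpUser, LoadTestShape, task, between

class BoutiqueUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def browse(self):
        self.client.get("/")            # hypothetical endpoint of the demo shop

class SpikeShape(LoadTestShape):
    """Warm-up to 1000 users, spike to 2000 (one user per second), back to 1000."""
    stages = [
        (600, 1000, 10),    # (end time [s], target users, spawn rate)
        (1200, 1000, 10),   # warm-up plateau
        (2200, 2000, 1),    # linear growth, +1 user/s
        (2800, 2000, 1),    # spike plateau
        (3800, 1000, 1),    # linear decrease, -1 user/s
        (4400, 1000, 1),    # final plateau
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end, users, rate in self.stages:
            if run_time < end:
                return users, rate
        return None                      # stop the test after the last stage
```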
Fig. 2 shows the adaptation process performed by RAYGO during the warm-up phase, regarding CPU and RAM usage (represented as the total across all microservices), with the predictions closely tracking the actual values while the number of users (and hence the load) increases. Since the given application is CPU intensive, the following evaluation focuses specifically on this metric. However, similar results have been obtained for memory usage.
C. Resource usage
Figs. 3 and 4 show respectively the sum of the CPU requests and limits assigned to the set of microservices by the different approaches. Results are aggregated by test phase, with the bar graph showing the average, and the vertical segment representing the minimum and maximum values. Overall, RAYGO achieved significant resource savings compared to the other approaches. Indeed, the HPA suffered from the suboptimal step increases caused by the creation of new replicas, while the VPA struggled to precisely track the application load, keeping the resource demands high even after the spike of requests terminated.
In this context, the additional usage of the application-aware EDP engine, thanks to its ability to detect unexpected load changes, resulted in more resources assigned to the microservices during the workload growing phase, and in slight resource savings when the number of generated users decreased, ensuring better performance for the application, as detailed in the following.
D. Quality of Experience
Previous results could raise the question whether the smallest limits requested by RAYGO are enough to sustain the application workload. In this section we evaluate the QoE provided by the application when supervised by the different solutions. First, we assessed the number of requests correctly processed (i.e., with a successful 2XX response), and the outcome is presented in Fig. 5. The unconstrained values represent a reference measured with no resource constraints, hence reflecting the maximum achievable in the specific test conditions. Given the behavior of Locust, with each client issuing a new request only after the previous response is received (or a timeout expired), a lower result can derive from both errors (e.g. a microservice is being restarted), as well as increased application response times.
During the entire evaluation, RAYGO executed with the combination of HDP and EDP proved to be the solution ensuring the performance closest to the reference, despite its overall lower resource demands (cf. Fig. 3). Furthermore, the contextualized decisions of the EDP, working with the complete set of microservices, have shown significant improvements in the QoE. Indeed, during the initial phases, the VPA, RAYGO (HDP only) and RAYGO (HDP+EDP single) struggled to adapt the configuration to the workload demands, recovering only when the number of users decreased. While requiring the highest amount of resources, the application supervised by the HPA displayed a 5%–10% gap behind the reference. Although unexpected, this result is caused by the usage of the gRPC protocol for the interaction between microservices.
[Figures omitted. Panel titles: Successful responses; Requests per core; Success Rate (%); Response time.]
Indeed, leveraging persistent connections, it fails to properly load balance the requests when new back-end replicas are created at run-time by the HPA. This problem could be addressed by service mesh techniques, though at the cost of increased complexity and resource consumption.
Additionally, Fig. 6 depicts the success rate of the application managed by the three different solutions, that is, the number of 2XX responses out of the total received ones. Errors may be returned either when one of the microservices was temporarily unavailable (e.g., while being restarted) or when the request exceeded the 5 s timeout, hence preventing the completion of the transaction. In a nutshell, the vertical autoscaling-based approaches suffered the most, due to the microservice restarts required to adapt the resources. Still, RAYGO strongly limited the disruption, thanks to its application-aware approach. Conversely, a much lower success rate was displayed by the VPA, due to its uncoordinated updates.
To assess the overall trade-off between QoE and resource demands, Fig. 7 presents the ratio between the number of successful responses achieved by each solution and the corresponding resource requests it configured.



E. Response time
Finally, we analyzed the responsiveness of the application when responding to user requests. Fig. 8 presents through a box-plot the distribution of the 95th percentile (P95) of the response times, considering one-second long intervals. In other words, every 1 s the P95 value is extracted from all responses received, and the resulting samples, grouped by test phase, constitute the box-plot. In this case, the HPA is the solution suffering the most, once more due to the inefficient load-balancing of gRPC traffic. Indeed, new requests continued to be issued to the overloaded replicas, although new available instances had been created by the HPA. In contrast, both VPA and RAYGO (HDP+EDP app) were able to guarantee significantly faster response times (i.e. $\ll 1$ s for RAYGO), with the median value of the P95 distribution never exceeding 100 ms in case of RAYGO. Furthermore, the application-aware EDP engine showed its effectiveness in reducing the overall response time, especially during load spikes.
IV. RELATED WORK
Many papers and reports already analyzed the most common workloads in public data centers, aiming at identifying recurrent patterns in resource usage [4], [6], [7] and enabling researchers to explore how scheduling works in large-scale production compute clusters on a long time scale. In this scenario, the problem of resource autoscaling for microservices is definitely one of the most relevant topics: indeed, the most adopted solutions for container orchestration (i.e., Kubernetes, Borg [8] and YARN [9]) require users to specify the amount of resources assigned to each job. Once these values are properly defined, despite the multi-tenancy of the cloud environment and the underlying shared infrastructure, the orchestrator can guarantee each application its reserved slice of resources.
Besides widespread open-source solutions such as the Kubernetes HPA and VPA already mentioned in Section I,
autoscaling is a well-developed research area. Many papers addressed the problem of horizontal scaling of resources: [10] introduces a modularized platform for resource provisioning, while [11] defines probabilistic performance models of horizontal autoscalers both in AWS and Azure and [12] exploits an absolute CPU utilization correlation model to accurately predict the number of replicas. Moving to vertical autoscaling, a key enabler is represented by resource usage prediction models. To this end, different approaches have been proposed, including time-series forecasting based on a second order autoregressive moving average method (ARMA) [13], the computation of the median of resource usage observations [14], neural networks [15] and reinforcement learning techniques [16]. Yet, in many scenarios, the computational overhead, as well as the additional time required in training the reinforcement learning network and the extreme variability of microservices behaviour, make those solutions not completely appropriate.
V. CONCLUSIONS AND FUTURE WORK
The automatic prediction of the resource requirements of microservice-based applications is of fundamental importance to achieve the best trade-off between the amount of resources reserved (and hence charged) for their execution and the offered QoE. RAYGO is an algorithm tracking the resource demands of a set of applications and dynamically adapting their configuration according to the foreseen requirements. It combines a proactive approach, predicting future resource demands based on past executions, and a reactive component, which continuously refines the profiling decisions based on the current behavior of the entire application. We compared a RAYGO prototype with two common autoscaling mechanisms leveraged in Kubernetes, namely HPA and VPA, for the management of a realistic ten-tier microservices application. Overall, the results are promising, with RAYGO achieving both stricter resource boundaries and better QoE, in terms of successful responses and response time.
As future work, RAYGO could consider bandwidth demands in addition to CPU and RAM, hence recognising the communication patterns between closely-related microservices. Additionally, we can explore its integration with a job scheduler optimised for distributed edge scenarios.
ACKNOWLEDGMENT
The authors warmly thank prof. Guido Marchetto for his precious help in modelling the system described in this paper.
REFERENCES
Isolating malicious code in Android malware in the wild
Valérie Viet Triem Tong, Cédric Herzog, Tomás Concepción Miranda, Pierre Graux, Jean-François Lalande, Pierre Wilke
CentraleSupélec, Inria, Univ Rennes, CNRS, IRISA, Rennes, France
firstname.lastname@inria.fr
Abstract
A malicious Android application often consists of a benign part which is the body of the application, and a malicious part that is added later, by repackaging. Fast and efficient analysis of Android malware depends on the analyst’s ability to quickly locate malicious code and have a clear representation of it. To do this, the analysis tools must allow the suspicious code to be quickly located and isolated from the rest of the application. In this article, we propose in a first part to synthesize recent works from the literature and to refresh older research works in order to highlight the discriminating characteristics of malicious code. Then, we propose a heuristic to reveal the suspicious methods of an Android application by static analysis. Finally, we discuss an algorithm to recover the malicious graft. This graft should contain the methods considered suspicious as well as the code calling these suspicious methods.
1. Introduction
The code of an Android application consists of multiple packages, classes, Java or native methods. When such an application is malicious, statically understanding the attack requires to first accurately locate the malicious methods. Few applications contain only code produced by the attacker. Other malware are formed from healthy applications to which malicious code has been added: malware authors can simply decompress a benign application, then add their malicious code to it before finally repackaging it: these repackaged applications have been named piggybacked apps by Li et al. [12]. In the Android context, a classic assumption is that most malware are repackaged applications.
Malicious code localization can first be done manually. For example, the first datasets of malware were manually reversed: one of the pioneering projects is the Android malware Genome Project [19] presented in 2012. This dataset initially contained 1,200 malware samples, covering most of the existing Android malware families, collected between August 2010 and October 2011. This dataset was mainly maintained thanks to student efforts in charge of the reverse and classification of the malware. However, when handling larger-scale malware datasets, the manual reverse engineering does not scale anymore. Thus, we need an automatic method to locate suspicious code.
Our long term goal is to explore different methods to quickly locate malicious code. More precisely, we would like to distinguish the code that implements the malicious intent from the benign code that supports the application. To achieve this goal, we have first compiled and completed various investigations on malware and goodware to highlight the features specific to Android malware. Our experiments were conducted over one malware dataset and one goodware dataset, each containing 5000 unique applications published between 2015 and 2018. These datasets (named GM19) have been carefully constructed to avoid statistical biases. Secondly we propose a heuristic resulting from this study. This heuristic guides a static analysis by highlighting in the application control flow graph the methods considered suspicious because they are more used by malware than by goodware. Finally, we identify the malicious graft (the malicious code written by the attacker) in an application by identifying the code handling data acquired by these methods considered suspicious.
2. Android applications background
An Android application is an archive (an .apk file) that usually includes a collection of resources and the code of the application compiled in the DEX file format. In this article, all the classes.dex are decompiled into Jimple and we compute the interprocedural control flow graph with implicit flows from this representation. A control flow graph is an oriented graph where nodes are Jimple statements and an oriented edge from a node A to a node B indicates that statement B can be executed immediately after the statement A. Such a graph can be easily recovered for each method in the bytecode using Soot [3]. The inter-procedural
control flow graph is constructed by connecting all the method graphs, *i.e.* by adding edges representing inter-procedural calls. Explicit inter-procedural calls connect the graphs of a method A and a method B when a node in the graph for A explicitly calls the entry point of the graph for B (by an *invoke* statement). Implicit calls are (sequences of) calls that start in the application space, continue in the Android framework and end in the application space. Formally, a method *f* implicitly calls a method *g* if *f* calls a method of the runtime *h* (*e.g.* *Thread.start()*) which itself calls the method *g*. The interprocedural control flow graph with implicit flows can be recovered using GPFinder [9]. In the following, we refer to the interprocedural control flow graph with implicit flows of an application simply as the control flow graph, or $G$.
We rely in the following on a quantitative study on Android applications. To make this study as objective as possible and avoid statistical biases, we constructed two datasets
MAL and GOOD, named GM19, with the following features: MAL and GOOD each contain the same number (5000) of elements. MAL is composed of malware from VirusShare [15]; and GOOD of benign applications from AndroZoo [2], where we keep only those confirmed as non-malicious by VirusTotal [8]. We discard random samples in order to ensure a uniform temporal distribution between 2015 and 2018, and thus avoid biases in the characteristics of APKs due to date differences (for example, API methods found only in newer versions of the SDK in goodware vs. old API methods in malware, avoiding concept drift [14]). Table 1 details the number of packages, classes and invoke statements found in these applications. Note that a high dead code rate is observed because APKs usually include android.* and com.google.* packages without fully using them.
### Table 2. Evaluation of some obfuscation techniques used by goodware and malware
<table>
<thead>
<tr>
<th>Obfuscation technique</th>
<th>Datasets</th>
<th>Ratio of obfuscated applications</th>
</tr>
</thead>
<tbody>
<tr>
<td>Identifier Renaming (*)</td>
<td>Google Play / third-party / malware</td>
<td>43% / 73% / 63.5%</td>
</tr>
<tr>
<td>String Encryption (*)</td>
<td>Google Play / third-party / malware</td>
<td>0% / 1% / 3.5%</td>
</tr>
<tr>
<td>Java Reflection (*)</td>
<td>Google Play / third-party / malware</td>
<td>48.3% / 49.7% / 51%</td>
</tr>
<tr>
<td>Native method usage (**)</td>
<td>GOOD / MAL</td>
<td>25.8% / 62.5%</td>
</tr>
<tr>
<td>Packer usage (**)</td>
<td>GOOD / MAL</td>
<td>0.06% / 10.88%</td>
</tr>
</tbody>
</table>
(*) Experiments conducted in [5]
(**) Our own findings
---
### 3. Discriminant Features
One of the major challenges of automatic malware analysis is to differentiate between malicious and benign code. In general, malicious code is code whose result will cause damage to whoever executes it. This code is very similar to benign code and we think that the only characteristics that can differentiate malicious code from benign code are:
1. The result of the execution of malicious code goes against the user. It can attempt to contact a remote control server, encrypt user data, access sensitive data (geolocation, contacts, IMEI, etc.), make calls or send messages to premium rate numbers, take control of the device. For all this, malicious code can use some libraries (crypto, TelephonyManager, …) more often than benign code would.
2. The attacker’s gain increases as long as his code is not analyzable and detectable by common anti-virus software. Therefore, the attacker tries to protect his code against (a) static analysis and (b) dynamic analysis. To do this, he obfuscates his code, and delays the execution of its payload to trigger the attack only on a real device when not under analysis.
3. On Android, some malware are distributed directly as entire applications. Many other malware are distributed by hiding in popular third-party applications, encouraging users to install it. These fake applications are referred to as piggybacked applications and are simply repackagings of benign applications where some malicious code has been grafted.
We believe that characteristics (1) refer to the content of the code while characteristic (2) refers to the form (is it obfuscated or not) (a) and structure (is the payload accessible...
directly from an entry point) (b) of the malicious code and (3) impacts the internal structure of the whole application code. We now detail how these features can be exploited (or have been exploited) in Android malware analysis.
Content of the malicious code (1) In 2013, Aafer et al. described malware through their usage of API functions, packages, and parameter level information [1]. Relying on this description, they proposed a detection method that distinguishes malware from benign applications. In particular, their work has highlighted a list of APIs that reveal the presence of potentially malicious code.
This seminal study was very important and has been used by many approaches: mostly as a basis [7, 16–18], fewer for comparison [6]. This work was conducted in 2013 and we conducted a similar study on malware from 2019 in order to update these results.
We have listed all the API methods invoked by the samples of MAL and GOOD datasets. Among these methods, it appears that at least 30 methods are invoked by samples in the MAL dataset and are never invoked by samples in the GOOD dataset, see Figure 1. We also computed the top 30 methods with the highest difference between malware and benign apps. Our results are presented in Figure 2. Comparing to the methods highlighted by Aafer et al. in 2013, we notice that the preferred method for malware is still getSubscriberId in the TelephonyManager API. But the rest of the top 30 has changed: nowadays malware get more information about the device they are running on (about the network, the wifi), rather than manipulating services, SMS messages, and timers. These operations can obviously be used by malware, for example to become persistent or to disable some applications that would analyze them.
Form of the malicious code (2a) The malware developer implements malicious code protections to prevent analysis and therefore detection. These protections are of two types, depending on whether they are protective against static analysis or dynamic analysis. Common obfuscation techniques that protect the code against static analysis are variable renaming, string encryption, reflection, packing (encryption of all or part of the bytecode) and usage of native code. Bacci et al. [4] proposed to automatically identify whether a sample under analysis has been modified by means of obfuscation techniques including disassembling followed by reassembling, repackaging, renaming packages, using call indirections, inserting junk code, renaming identifiers, encoding data, reordering code. Dong et al. [5] investigated how obfuscation techniques are really used by malware in the wild. They have evaluated how three obfuscation techniques (identifier renaming, string encryption, and Java reflection) are really used by Android applications of three typical datasets (Google Play, third-party markets, and malware). Their study has revealed that the percentage of malware using identifier renaming is 63.5%, which is more than for applications available on Google Play (43%), but slightly less than for third-party apps (73%). String encryption is not used by benign applications and only by 5.3% of malware. The proportions of reflection deployment in benign apps and malware are similar (around 50%). To complete this study, we have explored how native code obfuscation is used by malware and goodware. To detect if an application resorts to native code obfuscation we have checked the presence of methods declared as native in the DEX file of APKs from GOOD and MAL dataset. According to our investigation, we have found that malware use way more native methods than goodware (62.5% vs. 25.8%). This can be explained by the necessity of malware to obfuscate their code and, thus, to use native code. We have also quantified the usage of known packers by running APKiD [13] over the GOOD and MAL datasets. We have observed that malware use more packers than goodware (10.88% vs. 0.06%). This can also explain the higher usage of native methods in DEX files: packers rely on native methods to load the packaged code. The results from Dong et al. and our own findings are gathered in Table 2.
Structure of the malicious code (2b) The protection of malware against dynamic analysis is of a different nature. A code is protected against dynamic analysis when it is not executed immediately after the application is launched. From the malware code point of view, this means that the
payload can only be reached from an application entry point by passing through one or more conditional statements which are triggering conditions. These conditions ensure that code is only executed when the environment context appears to be suitable for malicious code, outside an analysis platform. These conditions are various: they can delay the execution in time, check the presence of emulators, check that user actions are performed. Leslous et al. [10] explored execution paths towards any piece of code considered as suspicious in Android applications. First, their study revealed that the malicious payload is regularly hidden behind implicit control flow calls (i.e. flows occurring when the Android framework calls a method implemented in the application space) making usual static analyzers believe that the malicious code is unreachable. Their study has also revealed an average of 12.34 conditions per execution path leading to suspicious code locations. These conditions are a mix of necessary checks for the app to work, and of triggering conditions that protect the malicious behavior in order to run only under certain circumstances.
Internal structure of application hosting malicious code
As mentioned above, malicious code is often hosted by a benign application, and the resulting application is called a piggybacked application. Such applications have been investigated by Li et al. [11], who built a large dataset of pairs of piggybacked and benign applications. This dataset was obtained by searching for pairs of applications with highly similar code. To measure the similarity between two applications, each method of each application is abstracted into a string encoding the different types of statements of the method; the similarity between the two applications is then reduced to the similarity between two sets of strings. On this dataset, Li et al. described how piggybacked applications differ from benign ones: what actions are performed, what payloads are inserted, and so on. Among several insights, they claimed that piggybacking is done with little sophistication, in many cases automatically, and often via library code.
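The similarity computation can be illustrated with a small sketch: each method is abstracted into a string of statement kinds, and two applications are compared through the Jaccard similarity of their sets of abstracted methods. The abstraction and the toy inputs below are simplified assumptions, not the exact encoding used by Li et al.

```python
# Minimal sketch of the similarity idea: abstract methods into strings of statement
# kinds and compare two apps via Jaccard similarity. Inputs are illustrative.

def abstract_method(statements):
    """Encode a method as a compact string of statement kinds (e.g. 'invoke', 'if', 'return')."""
    return "-".join(kind for kind, _ in statements)


def app_signature(app_methods):
    """Abstract every method of an app into a set of strings."""
    return {abstract_method(m) for m in app_methods}


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0


# Toy example: an original app and a piggybacked variant sharing most methods.
original = [[("invoke", "log"), ("return", None)],
            [("if", "x"), ("invoke", "draw")]]
piggybacked = original + [[("invoke", "getDeviceId"),
                           ("invoke", "sendTextMessage"), ("return", None)]]
print(round(jaccard(app_signature(original), app_signature(piggybacked)), 2))  # -> 0.67
```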
Our conclusions To conclude, we believe the studies mentioned above give a good indication of the characteristics of Android applications hosting malicious code. First, these applications use more libraries than others (a result of Aafer et al. from 2013, updated here). Second, malicious code can be protected against security analysis; the protection methods that differentiate malware from goodware are mainly string encryption and native-code-based obfuscation. Lastly, the malicious code may have been added to an initially benign application, in which case it forms an independent part grafted onto the original code.
In the remainder of this article, we use the first two conclusions to decide whether an Android application is suspicious or not (Section 4). We propose to evaluate the use of suspicious APIs by Android applications and to assess their potential threat level. This type of study helps us distinguish malware from goodware, but it is not enough to quickly locate the malicious part within the whole code of an application: it finds neither all the parts of the code written by the attacker nor the structure of the malicious code. For this reason, we propose in Section 5 to isolate the malicious graft from the healthy code using the data dependency graph.
4. Highlighting suspicious methods
Section 3 quantifies method invocations in the MAL and GOOD datasets, allowing us to highlight which classes and methods are statistically more used by malware than by goodware. We now propose to build a heuristic that static analysis can use to profile an application according to its use of APIs preferred by malware over goodware. A heuristic file lists methods that are expected to be used more by malware than by goodware. Here, our heuristic files are filled using the study presented in the previous section. Our problem is therefore to select enough methods so as not to wrongly classify too much goodware while not wrongly dismissing too much malware, i.e. to be neither too selective nor too permissive.
To tune this heuristic we split MAL into two subsets: a training set of 4,000 samples and a test set of 1,000 samples. We then build a heuristic parametrized by a distance \( d \) and a threshold \( t \). A distance \( d \) means that the methods listed in the heuristic were, in the previous study, invoked at least \( d\% \) more by malware than by goodware. Choosing a distance of 0% means the heuristic includes methods that are present as much in the GOOD dataset as in the MAL dataset; \( H_0 \) is therefore very non-discriminating. On the contrary, a distance of 100% means the heuristic only includes methods exclusively present in the MAL dataset. We computed 11 heuristics with a distance \( d \) going from \( H_0 \) to \( H_{100} \) by steps of 5%.
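A minimal sketch of how such a heuristic \( H_d \) could be assembled from the usage statistics of Section 3 is shown below; the per-method usage rates are illustrative, not the real figures.

```python
# Minimal sketch: build a heuristic H_d by keeping every API method whose usage
# rate in MAL exceeds its usage rate in GOOD by at least d percentage points.
# The rates below are illustrative.

def build_heuristic(mal_rate: dict, good_rate: dict, d: float) -> set:
    """Return the set of methods invoked at least d% more often by malware than goodware."""
    return {
        method for method, rate in mal_rate.items()
        if rate - good_rate.get(method, 0.0) >= d
    }


usage_mal = {"SmsManager.sendTextMessage": 40.0,
             "TelephonyManager.getDeviceId": 70.0,
             "Log.d": 90.0}
usage_good = {"TelephonyManager.getDeviceId": 20.0, "Log.d": 85.0}

h35 = build_heuristic(usage_mal, usage_good, d=35.0)
print(sorted(h35))  # -> ['SmsManager.sendTextMessage', 'TelephonyManager.getDeviceId']
```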
We measured the accuracy and relevance of these heuristics on the remaining test set of 1,000 samples from MAL and a similar test set drawn from GOOD. For each heuristic from \( H_0 \) to \( H_{100} \), we count, per application, the number of invocations of methods listed in the heuristic. We then define a detection threshold \( t \): an application is considered malicious if it uses more than \( t \) methods occurring in a heuristic \( H_d \). Figure 3 evaluates the impact, in terms of true positive rate (TPR) and false positive rate (FPR), of threshold values from 0 to 10,000.
Finally, from these results, we draw the ROC curves for all heuristics, as shown in Figure 4. By maximizing the true positive rate while minimizing the false positive rate (the point closest to the upper left of the ROC curve), we found that the best parameters are a distance of 35% and a threshold of 900 suspicious invocations, i.e. \( H_{35} \) with \( t = 900 \) invocations of suspicious methods, above which an application is considered malicious.
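The parameter search can be sketched as follows: for a given heuristic, count the suspicious invocations per sample, sweep the threshold \( t \), and keep the point closest to the top-left corner of the ROC plane. The per-app counts below are invented for illustration.

```python
# Minimal sketch of the threshold selection: sweep t and keep the (t, TPR, FPR)
# point closest to the ideal corner (FPR=0, TPR=1). Counts are illustrative.
import math


def best_threshold(mal_counts, good_counts, thresholds):
    """Pick the threshold minimizing the distance to the ideal point (FPR=0, TPR=1)."""
    best = None
    for t in thresholds:
        tpr = sum(c > t for c in mal_counts) / len(mal_counts)
        fpr = sum(c > t for c in good_counts) / len(good_counts)
        dist = math.hypot(fpr, 1.0 - tpr)
        if best is None or dist < best[0]:
            best = (dist, t, tpr, fpr)
    return best[1:]


# Hypothetical per-app counts of invocations of heuristic methods.
mal = [1500, 950, 2200, 400, 1200]
good = [100, 300, 50, 800, 120]
print(best_threshold(mal, good, thresholds=range(0, 10001, 100)))
```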
5. Isolation of suspicious code
We conclude this article by focusing on the control flow graph and the data dependency graph of an application. In the control flow graph, we can highlight methods that seem suspicious because one or more of their instructions invoke a suspicious API function according to the previously described heuristics. This methodology leads to the identification of methods in the bytecode. The highlighted code
can be grouped or scattered in the graph. This first step can be used to decide whether or not an application is malicious, as proposed in the previous section. However, it does not allow us to understand the structure of the malicious code, because it only reports a set of methods without any link between them. We now propose to separate malicious code from healthy code by assuming that the malicious code contains the suspicious methods and manipulates data contaminated by instructions considered suspicious.
Suspicious instructions and suspicious methods An instruction is suspicious if it invokes an external suspicious Android API or if it depends on data generated by other suspicious instructions. A method is considered suspicious if it contains at least one suspicious instruction. The set of suspicious methods is computed recursively from the data dependency graph of the application. This data dependency graph is computed using Soot [3] and GroddDroid [10]. It represents the data dependency between a bytecode instruction $i$ and the set of previous instructions that modify the registers impacting $i$.
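The recursive marking can be sketched as a fixpoint over the data dependency graph, as below; the instruction identifiers, dependencies, and method names are illustrative assumptions rather than output of Soot or GroddDroid.

```python
# Minimal sketch of the recursive marking: an instruction becomes suspicious if it
# invokes a suspicious API or depends on data produced by an already suspicious
# instruction; a method is suspicious if it holds at least one such instruction.

def propagate_suspicion(instructions, data_deps, seed_suspicious):
    """Fixpoint over the data dependency graph.

    instructions:     {instr_id: method_name}
    data_deps:        {instr_id: set of instr_ids it reads data from}
    seed_suspicious:  instr_ids that directly invoke a suspicious API
    """
    suspicious = set(seed_suspicious)
    changed = True
    while changed:
        changed = False
        for instr, deps in data_deps.items():
            if instr not in suspicious and deps & suspicious:
                suspicious.add(instr)
                changed = True
    return {instructions[i] for i in suspicious}  # names of suspicious methods


instructions = {1: "A.getImei", 2: "B.buildMsg", 3: "C.send", 4: "D.drawUi"}
data_deps = {2: {1}, 3: {2}, 4: set()}
print(propagate_suspicion(instructions, data_deps, seed_suspicious={1}))
# suspicious methods: A.getImei, B.buildMsg, C.send (set order may vary)
```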
Figure 5 depicts the control flow graph of a sample of Airpush issued from the AMD dataset\(^2\). This sample has 830 methods, among which 23 (2.8\%) invoke an external suspicious Android API (3 invoke a telephony API, 3 invoke a system API and 17 invoke a network API). The suspicious methods are colored in black in Figure 5. Computing the methods that depend on at least one suspicious instruction leads to the identification of 330 (39.8\%) suspicious methods in the following packages:
- com.flurry*: 20/44 methods
- com.bugsense*: 27/60
- com.mobclix.android*: 101/378
- com.ZGisNcvn*: 177/310
- com.boa.whis*: 5/8
Here again, the precision depends above all on the chosen heuristic: if the heuristic is too broad, the error is further amplified by the search for methods manipulating data that are incorrectly labeled. In a random sample\(^3\) from MAL with 6,127 methods, we found a graph composed of 11 connected components for a total of 2,289 methods with the heuristic $H_5$, 2,248 methods (a loss of 41 methods) with the heuristic $H_{35}$, and a total of 1,876 methods (the loss of an entire package) with the heuristic $H_{50}$. The different grafts for $H_{35}$ are depicted in gray in Figure 6.
For each of these heuristics, we found the following number of components depicted here by their size:
- $H_5$: 49 components
(1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
- $H_{35}$: 6 components
(1, 1, 1, 10, 1, 2372)
- $H_{50}$: 2 components
(1, 2355)
Malicious connected component and grafting point When an attacker grafts malicious code onto an application, he ensures that the malicious code can be executed through the execution of the original application. To do this, he can either add a new entry point to the application (what we call a "coarse" graft), or modify one or more methods of the application so that at least one normal execution path is diverted towards the malicious code (what we call a "fine" graft). The malicious code itself can be gathered in a newly added library.
From the graph point of view, a grafting point corresponds to an articulation point (\(i.e.\) a method whose removal would increase the number of connected components) that maximizes the number of suspicious methods contained in a single component. To expose this suspicious graft, we extract the connected components of the control flow graph that contain only the suspicious methods obtained in the previous step: these nodes are suspicious either because they invoke a suspicious API or because they manipulate data acquired from suspicious APIs.
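A possible way to compute these components and candidate grafting points, assuming the call graph is available as an undirected networkx graph, is sketched below; the toy graph and method names are illustrative.

```python
# Minimal sketch: isolate suspicious connected components and look for candidate
# grafting points as articulation points adjacent to the suspicious part of the
# (undirected) call graph. Graph and labels are illustrative assumptions.
import networkx as nx


def suspicious_components(call_graph: nx.Graph, suspicious: set):
    """Connected components of the subgraph induced by suspicious methods."""
    sub = call_graph.subgraph(suspicious)
    return [set(c) for c in nx.connected_components(sub)]


def candidate_grafting_points(call_graph: nx.Graph, suspicious: set):
    """Non-suspicious articulation points adjacent to at least one suspicious method."""
    return [v for v in nx.articulation_points(call_graph)
            if v not in suspicious
            and any(n in suspicious for n in call_graph.neighbors(v))]


g = nx.Graph()
g.add_edges_from([("main", "ui"), ("main", "update"),            # original app
                  ("update", "collect"), ("collect", "exfil")])  # grafted part
suspicious = {"collect", "exfil"}
print(suspicious_components(g, suspicious))      # -> one component with collect and exfil
print(candidate_grafting_points(g, suspicious))  # -> ['update'], the likely graft point
```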
\(^2\)SHA256: 84cdef6a088b3303dfc4db7e254a3a82094d89441b68f396d2e8c2b70ce963fc
\(^3\)SHA256: d0faed5d5a230685ac027f7e1136015dc5f6a5ef7ba12344b757cd90b57141e25
Figure 5. Method in the CFG depending on a suspicious instruction in AirPush
Figure 6. Estimation of a malicious graft with \( H_{35} \)
6. Conclusion
In this article we have addressed the difficult problem of accurately locating malicious code within the entire code of an Android application. First, we conducted a broad study of the different characteristics that can help identify this malicious code. We updated the list of classes and packages preferentially used by malware rather than goodware. This first part was done by randomly selecting goodware and malware sets in the wild, with a uniform distribution of the number of samples between 2015 and 2018 and a size distribution of the goodware similar to that of the malware. This choice of input datasets limits the bias that datasets representing only certain families can introduce. We then deduced a heuristic that can be used to detect whether an application is malware or not. This heuristic relies on the classes and methods used by the application. We have shown that using this heuristic in conjunction with the data dependency graph allows us to locate malicious code grafts. The hash values and heuristic files used here are available on demand.
References
Socially-Constructed Metrics for Agile Quality: An Action-Research Study
Sharon Coyle
Business Information Systems
University of Sydney Business School
Australia, NSW 2006
Email: sharon.coyle@sydney.edu.au
João Barata
CISUC, ISMT, and ESTGOH
University of Coimbra
Pólo II, Pinhal de Marrocos, 3030-290 Coimbra, Portugal
Email: barata@dei.uc.pt
Abstract
We present a method to develop socially-constructed metrics for ascertaining agile software development quality. Canonical action research (CAR) is our mode of inquiry, conducted in a key European player of healthcare information systems. The result is a set of meaningful metrics that are built according to three interrelated dimensions: (1) evidence from practice; (2) stakeholders' expectations; and (3) stakeholders' evaluation. Our contribution suggests simple artifacts to create socially-constructed metrics and the main guidelines for applying them. Agile teams struggle with quality measurement, often supported by a plethora of metrics that do not adhere to rapidly changing project environments. We argue that socially-constructed metrics can address this problem, offering a contextualized perspective of quality that can improve tacit knowledge transfer; critical reflection about quality; and effective support in daily meetings, retrospectives, and audits. Moreover, it suggests a participative approach for continuous improvement in agile software development.
Keywords Socially-constructed metrics, agile development, quality management, action-research.
1 Introduction
The Agile Manifesto was introduced in 2001 with a goal of “uncovering better ways to develop software” (Beck et al. 2001), endorsing an iterative process involving intense stakeholder interaction throughout, so as to develop a product of high quality that meets customers’ expectations. Metrics have been established in order to determine the meaning of ‘quality’ and are a popular research topic in agile software development (Agarwal et al. 2014; Hayes et al. 2014; Kupiainen et al. 2015). There are studies focusing on product or software-related metrics (Kupiainen et al. 2015; Mishra et al. 2012), tests and quality control (Agarwal et al. 2014; Janus et al. 2012), software defects (di Bella et al. 2013), stakeholder expectations (Boerman et al. 2015) and the role of auditing (Scharff 2011). The literature provides guidance about metrics used in practice; however, there are difficulties in adopting these quality metrics in dynamic project environments (such as agile), which are significantly different from their traditional (‘waterfall’) counterparts. Traditional assessments of quality focus primarily on outcome-related indicators such as product or overall project quality. Kupiainen et al. (2015) found that even in the application of identified metrics, almost 40% of these were customized. They conclude that the majority of existing metrics are non-inclusive of people. This creates significant challenges given the nature of agile projects, which are inherently people-focused. Other metrics focus on the development process (Gruschwitz and Schlosser 2012), yet solutions are lacking that integrate different types of metrics in a single method that can be practically applied in agile projects. Moreover, existing methods do not promote stakeholder engagement in metric construction, yet individuals and their interactions are a key principle of agile methodologies (Beck et al. 2001).
For the purpose of our research, a metric is socially-constructed (Berger and Luckmann 1991) when users have the capacity to adjust its dimensions and critically evaluate the results. In this context, the metric is not a mere observation of a fact because stakeholders’ opinions are intrinsic to metric construction. Unlike traditional approaches, which compare against predefined goals, stakeholders are not just included at the end of measurement analysis; they become involved in constructing relevant metrics for their processes, project, and product(s). The difficulties of including quality assessment in agile development teams with the vision of socially-constructed metrics inspired our first research question (RQ1): How can agile teams develop socially-constructed metrics during their deployment of agile methodologies? Moreover, it is essential to test those metrics in practice, which we aim to address with RQ2: What are the advantages and disadvantages of using socially-constructed metrics in agile software development (ASD) teams?
The next section outlines the research background including quality management in agile and a review of different approaches for quality assessment and improvement including their importance and limitations. Next we present the selected research approach that is action research in its canonical form (Susman and Evered 1978). We subsequently detail a complete canonical action research cycle conducted in a leading IT supplier of healthcare information systems. The lessons learned and the results are presented afterwards, concluding with our study’s limitations and future research.
2 Background
2.1 Quality Management in Agile
In information systems (IS), quality management is multidimensional, including social and technical aspects. According to Stylianou and Kumar (2000), holistic enterprise quality is a combination of IS quality and the quality of business processes. A strong quality culture encompasses customer orientation, continuous improvement, utilisation of data (and analysis) to support decisions, and the involvement of people in quality problems (ISO 2015). This aligns closely with agile principles and practices. For example, the notion of continuous improvement is embedded in the practice of retrospective meetings in agile projects (Babb et al. 2014; McHugh et al. 2012). Therefore, the pattern of common quality principles determined by ISO 9001, which certified companies learn and internalize in their daily practices, can be aligned with agile values (Stålhane and Hanssen 2008). In highly regulated development projects, however, there are reported difficulties in adopting quality standards and improvement frameworks (including ISO 9001, ITIL, COBIT, and CMMI) together with agile. Stålhane and Hanssen (2008), for example, discuss difficulties in documentation requirements when combining ISO 9001 with agile approaches. However, a technical report by Hayes et al. (2014) identifies different moments in agile projects where it is possible to get customer feedback to assess satisfaction. These are illustrated in Figure 1.
Figure 1: Quality touch-points in agile development (Hayes et al. 2014)
Figure 1 shows that quality requires a continuous effort during the entire project. The evaluation of results within specific meetings (e.g. the retrospective) can be important in promoting discussion and conducting critical reflections about quality. Retrospectives allow reflection about previous iterations to identify subsequent actions, and authors such as Péraire and Sedano (2014) conclude that artifacts and guiding steps for retrospective meetings can provide distinct advantages. Nevertheless, as Baxter and Sommerville (2011) put it, “the agile approach of involving end-users as ‘owners’ of requirements is a good one but needs to be extended to take into account a broader set of system stakeholders”. There are threats to quality management in agile due to the constant pressures that can make reflection and analysis difficult in practice (Babb et al. 2014; McHugh et al. 2011). Moreover, ASD projects present different challenges when compared to traditional approaches, namely, “the traditional approach of tracking progress against a pre-made plan and measurable goals conflicts with the Agile value of embracing the change [...] rather comprehensive set of metrics, which does not align well with the Agile principle of simplicity” (Kupiainen et al. 2015). Agile methods go beyond traditional views of quality such as measuring defects or functionality problems (Hayes et al. 2014). Quality concerns appear in the early stages of agile projects, proceed through the complete documentation of user stories, and “can be supplemented with a more direct measure of customer-perceived value—using customer satisfaction feedback” (Hayes et al. 2014).
2.2 Approaches for Quality Measurement and Improvement in Agile
Several approaches have been proposed for establishing quality in the context of agile practices, for example the 3C approach proposed by Janus et al. (2012), which combines software metrics and continuous integration, concluding that interpretation of results is necessary to promote continuous improvement actions. The model proposed by Hongying and Cheng (2011) includes 20 key areas for agile software quality assurance; these authors suggest best practices for each area and a maturity model approach for evaluation and improvement. An earlier approach proposed by Sidky et al. (2007) to adopt agile quality principles consists of two components. The first component includes an agile adoption index for the principles of “Embrace change to deliver customer value”, “Plan and deliver software frequently”, “Human centric”, “Technical excellence”, and “Customer collaboration”. The second component is a four-stage process for agile adoption, guiding companies to (1) identify discontinuing factors that can prevent agile success, (2) conduct project-level assessment, (3) conduct an organizational readiness assessment, and (4) perform reconciliation to ensure that the organization implements the practices required for the project. Sidky et al. (2007) present one of the few examples that include guidance for assessment and improvement according to the goals established for agile practices. A distinct hierarchical model was developed by Bansiya and Davis (2002) to assess the object-oriented design quality of software products and obtain a total quality index. The first level of the model includes product-related attributes such as functionality, effectiveness, understandability, extendibility, reusability, and flexibility. The second level details properties that can affect the attributes (such as complexity), defining weights for each property and its positive or negative influence on each attribute (e.g., complexity has a negative influence on understandability). The model has two more specific levels, namely (3) design metrics (such as ‘number of methods’), and (4) design components that are needed for the metrics.
Social aspects, process, and outcome are deeply intertwined in iterative agile development projects. Recent studies to assess agility in enterprises (e.g. Tseng and Lin 2011) include social aspects such as personal skills, technology awareness, trust-based relations with customers, collaboration, empowerment, and motivation. Gren et al. (2015) identify different social approaches for assessing agility in teams, for example, using interviews or maturity models to guide the adoption of agile techniques but stress that “more work is needed to reach the point where a maturity model with quantitative data can be said to validly measure agility, and even then, such a measurement still needs to include some deeper analysis with cultural and contextual items”. This research aims to help
address this gap. Existing models do not incorporate people or their interactions into metric construction. According to Ghobadi and Mathiassen (2016) in order “to bridge communication gaps and create shared understanding in software teams, it is critical to take the revealed concerns of different roles into account”. To date, a model that integrates different views of people, process, and outcome is absent in literature. In addition, the perspectives outlined above are applied singularly. They are also usually applied ‘after the fact’ and therefore are very difficult to apply in iterative, dynamic ASD environments.
3 Research Approach
According to Baskerville (1999), from a socio-organizational viewpoint it is essential to study new techniques in practitioner environments. Action-research is well suited for this purpose in the field of IS (Baskerville 1999) as it is performed “collaboratively in an immediate situation using data feedback in a cyclical process” (Hult and Lennung 1980). Action-research encourages the interaction between the researcher and external clients, consequently contributing to some current challenges encountered in IS research (Gill and Bhattacherjee 2009). Amongst the multiple forms of action-research, we selected canonical action research (CAR) as one of the most popular and well documented (Davison et al. 2004). CAR cycles are conducted according to five phases (Lindgren et al. 2004; Susman and Evered 1978):
1. Diagnosing, identifying, or defining the problematic situation, as a shared task by the researcher and practitioner. The actors holistically interpret the phenomenon and formulate working hypothesis to be used in the subsequent phases of the cycle;
2. Action planning, specifying possible courses of action to improve the problematic situation;
3. Action taking, referring to the implementation of the course of action, causing change to occur and trying to create improvements to the situation;
4. Evaluating, assessing the consequences of the actions, involving a critical analysis of the results;
5. Specifying learning, identifying the findings, documenting and defining the outcomes that will add to the body of knowledge. Although appearing last, this phase is a permanent activity (Baskerville and Wood-Harper 1996; Cunha and Figueiredo 2002).
The perception of CAR as “context-bound” creates problems in generalizing the findings (Avison and Wood-Harper 2003); however, there are different views regarding the degree to which generalization is required (Gregor 2006), and the action researcher should look for transferable results. For example, Eden and Huxham (1996) assert that (1) there must be implications beyond those required for action in the specific project context, allowing it to inform other contexts; (2) there is a need to produce theory that is significant to others; (3) in the case of designing tools, techniques, models, and methods, their basis must be clear and linked to theory; (4) theory emerges from action and previous knowledge; and (5) theory building is incremental in action research, moving gradually from the particular to the universal. To ensure rigor and validity we evaluated our research according to the principles suggested by Davison et al. (2004) specifically for CAR: the Principle of the Researcher–Client Agreement; the Principle of the Cyclical Process Model; the Principle of Theory; the Principle of Change through Action; and the Principle of Learning through Reflection. In the next section we describe the complete CAR cycle (Susman and Evered 1978).
4 Data Collection
4.1 Client-system Infrastructure
Our client is a European software provider of healthcare information systems for hospitals and clinics. Founded 25 years ago, they are present on four continents, serving over 120,000 users and 25 million clinical processes. The company has migrated its quality management system to the recently revised ISO 9001:2015. Their regulatory space includes other specific standards for innovation management and healthcare standards that apply to their software product lines and operating context (such as data quality and record privacy). Quality management is essential to remain competitive and to compete in different regions, as with the high-growth American market where the company achieved important contract agreements in recent years. Their global presence increases the pressure for short development cycles and immediate feedback to their customers and national partners, which is conducive to an agile approach.
4.2 Diagnosing
The diagnosis included interviews with the quality manager and IT infrastructure manager. Simultaneously, we conducted a literature review to identify best practices for ascertaining quality and the role of metrics in quality assessment and improvement (Section 2). Metrics in the health sector are plentiful however, as stated by the quality manager, the company “has numerous indicators but only a few are valid for agile quality”. The reasons vary because in some cases “the numbers are highly dependent on the context and must be carefully interpreted”. In other cases “[she] does not think it is fair to establish goals, for example regarding number of defects or features implemented; these type of metrics depend on multiple factors”. Agile quality is problematic to them because “40% of our major customers [representing 80% of income] require quality indicators and evidence for each iteration, due to the critical nature of healthcare IT”. We confirmed the importance of retrospectives for quality in agile because in this case a lack of adequate implementation of retrospectives contributed to “difficulties in creating improvement on our project and without appropriate communications we are not sharing knowledge which is a critical aspect of our business due to the complexity of product lines”. This research participant also talked about the importance of being able to change metrics for each project or team, in an “agile way” that coincides with agile principles. According to our interviewee, quality metrics provide interesting dashboards “but what we need is to assess and improve quality; it cannot be done with ceremonial conformity or high level metrics that do not have correspondence with practice”. Even worse, “template” metrics “and unrealistic goals can reduce the team commitment to quality during agile projects”. When we asked how user intervention might assist in constructing metrics she stated how “this would be a very useful, inclusive approach [and that] it has the potential to address our main issues of (1) knowledge sharing, (2) obtaining quality evidence for our team and external audits, (3) re-invigorating our retrospectives, (4) providing support for weekly meetings and customer requests, (5) and ‘provide meaning’ to our agile numbers!”
4.3 Action planning
Our action plan for this CAR cycle included four main activities:
- Establish a model to create metrics. The model should assist project participants in the identification of the types of metrics and how to calculate them;
- Define the indicators that should be included for each metric type (people, process, outcome);
- Establish the structure of each indicator (how it is calculated) according to three possible dimensions: (1) evidence from practice; (2) stakeholders' expectations; and (3) stakeholders' evaluation;
- Develop a tool to manage metrics that can be useful for daily meetings, retrospective, and quality audits.
The plan, agreed by researchers and practitioners, aimed at solving a practical problem while contributing to research in the form of a new method for using metrics that adhere to the principles of agile, in particular people and interaction. Moreover, we wanted to provide practical tools in the form of tables accessible to agile teams and not dependent on specific technologies. The CAR cycle started in March 2016 and ended in July 2016. The next section presents the results of action taking.
4.4 Action taking
First, we agreed on a reference model to guide the construction of metrics, presented in Figure 2.
Figure 2: Reference model to guide the construction of socially-constructed metrics
According to our review, a comprehensive assessment of agile quality requires three main types of metrics, represented on the left of Figure 2: (1) people-related, (2) process-related, and (3) outcome-related, pertaining to the specific project and the product. Moreover, our proposal of socially-constructed metrics allows users to create composite metrics, as presented on the right of Figure 2. The resulting indicator includes a comparison with past results to identify whether improvement occurred (evidence); a comparison with the expected result according to the stakeholders’ initial plan (expectations); and finally, a critical analysis performed by the agile team (evaluation). The final result of each indicator is in fact a weighted average of its dimensions E1, E2, and E3. As a reference to weight the dimensions of each selected indicator, we used the suggestions included in Table 1, while Table 2 describes guidelines to evaluate each indicator according to the selected dimensions.
<table>
<thead>
<tr>
<th>Dimension</th>
<th>Definition</th>
<th>Potential ways to consider weightings</th>
</tr>
</thead>
<tbody>
<tr>
<td>Evidence</td>
<td>Quality is based on facts. Evidence represents the effective improvement of the indicator comparing it with the backlog.</td>
<td>If the indicator is not significantly affected by uncontrolled aspects, the weight can be higher.</td>
</tr>
<tr>
<td>Expectations</td>
<td>There are goals to achieve in agile development. There are technical goals (e.g., reduce defects), social goals (e.g. improve motivation), or other.</td>
<td>If the indicator is mostly influenced by stakeholders’ decisions, the weight can be higher.</td>
</tr>
<tr>
<td>Evaluation</td>
<td>Agile quality requires reflection and debate (e.g. about the meaning of the data) and to identify lessons learnt.</td>
<td>If the indicator is not consensual or it is highly variable according to external factors, the weight can be higher.</td>
</tr>
</tbody>
</table>
Table 1. How to weight each dimension of (1) evidence, (2) expectations, and (3) evaluation
<table>
<thead>
<tr>
<th>Dimension</th>
<th>0 (regression)</th>
<th>50 (no improvement)</th>
<th>100 (clear improvement)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Evidence</td>
<td>Worse comparing to last measurement</td>
<td>Similar to last result</td>
<td>Better than last measurement</td>
</tr>
<tr>
<td>Expectations</td>
<td>Below expectations</td>
<td>Within expectations</td>
<td>Better than expected</td>
</tr>
<tr>
<td>Evaluation</td>
<td>Negative opinion</td>
<td>Neutral opinion</td>
<td>Positive opinion</td>
</tr>
</tbody>
</table>
Table 2. How to calculate / value dimensions of each indicator
The second activity was to establish the indicators. We faced several difficulties at this stage because the company had dozens of indicators but did not have the practice of using them as an improvement tool. We decided to use specific indicators for each metric type and establish the rule that each type should have at least one indicator. Thirdly, for each indicator, the team decided the weights to apply for the dimensions of evidence, expectation, and evaluation. Figure 3 presents the tool that was developed for using socially-constructed metrics in practice, which constitutes the fourth activity in the CAR action phase.
Figure 3: Tool for managing socially-constructed metrics (people-related indicators)
Figures 3-5 include the tables we used to assess (1) people-related, (2) process-related, and (3) outcome-related metrics. We selected three indicators for people (the columns were provided by the team: customer satisfaction, team satisfaction improvement, and suggestions (internal)); four indicators for process (Figure 4); and another four concerning outcome (Figure 5). The aggregated result of each socially-constructed metric (the Total line, ranging from 0 to 100, that exists in each figure) is a weighted average. For example, for “customer satisfaction” in Figure 3 (column 1), evidence is weighted 0.5; expectation 0.2; and evaluation 0.3. In order to simplify grading, in this instance each dimension can take a value of 100 (clear improvement), 50 (no improvement), or 0 (regression); however, deployment of a continuous scale is also an option. Below each figure, the project stakeholders can provide comments about the interpretation of results and propose actions, which remain in the table as long as they are active. Our model does not define or prescribe metrics; therefore, each team can select metrics according to their project or client priorities.
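To make the aggregation concrete, the sketch below (in Python, for illustration only) computes the Total line as the weighted average of the three dimensions; the weights follow the customer satisfaction example above, while the individual dimension scores are hypothetical values chosen to produce a total of 60.

```python
# Minimal sketch of the composite indicator: each dimension is graded on the
# 0/50/100 scale of Table 2 and combined with the weights chosen by the team
# (Table 1). The dimension scores below are hypothetical.

def indicator_total(scores: dict, weights: dict) -> float:
    """Weighted average of the evidence / expectations / evaluation dimensions."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[dim] * scores[dim]
               for dim in ("evidence", "expectations", "evaluation"))


# Example inspired by the 'customer satisfaction' column of Figure 3.
weights = {"evidence": 0.5, "expectations": 0.2, "evaluation": 0.3}
scores = {"evidence": 50, "expectations": 100, "evaluation": 50}  # hypothetical grading
print(round(indicator_total(scores, weights), 1))  # -> 60.0
```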
4.5 Evaluating
To ensure rigor and relevance we adopted the following principles (Davison et al. 2004):
- **Principle of the Researcher–Client Agreement**
Researchers and practitioner agreed that CAR was an appropriate approach to study socially constructed metrics in practice. The practitioner made an explicit commitment to the project and to adopt our proposed solutions within their teams. Their main objective is to improve quality assessment and improvement, making use of meaningful metrics that they can apply simply to their project. Data collection included interviews, observation, and document collection, safeguarding confidentiality.
- **Principle of the Cyclical Process Model**
Our research followed the five stages of CAR according to Susman and Evered (1978). We created our frame of reference for CAR with a literature review and semi-structured interviews (Barata and Coyle 2016). Then, we made a diagnosis of the situation in the selected company. During action taking, researchers and the quality manager developed an action plan and conducted a continuous evaluation according to the principles suggested by Davison et al. (2004). To minimize threats to validity two researchers proceeded in parallel, constantly contrasting data sources and challenging the results. Due to time constraints, we considered that one CAR cycle was appropriate, however, we identified opportunities for future research (presented later).
• **Principle of Theory**
Theory guided our research providing a theoretical frame of reference via the literature review. We were guided by existing theory in agile metrics and models to improve agile quality. We then proposed a new solution to share within the scientific community. Our proposal can support agility by (1) introducing flexibility in indicators’ selection and weightings, (2) promoting continuous interaction and (3) critical evaluation and debate to accommodate variable factors of project environments.
• **Principle of Change through Action**
Change occurred in a number of situations. First, we created a new way of using and calculating metrics in the practitioner organization, including self-evaluation within a metric structure. We have created artifacts and promoted new routines (Pentland and Feldman 2008) to guide the development team and the quality department. The situation of this IT organization and its context was evaluated before, during, and after the intervention, ensuring that change was analysed and properly documented.
• **Principle of Learning through Reflection**
Progress reports were provided to the client. Learning through reflection occurred as a joint activity by researchers and practitioners at different stages of CAR. There was a joint reflection to ensure that our results would be relevant for science and help to improve the client's situation and ensure project results. We learned about the benefits of the method but also about the challenges emerging from critical analysis and composite metrics that require explanation. The feedback was positive, but new questions emerged, for example: “should we create rules to enforce corrective actions below value X? Should we enforce explanations if the evaluation differs from the other two dimensions (e.g., evaluation 0 when the other dimensions receive 100)?” These are questions we plan to tackle in our next research cycles.
5 Discussion
This research included self-evaluation by development team members. Future research cycles will include customer assessments to cross-check different perspectives and promote the quality debate. As this is one of the first studies aimed at unravelling socially-constructed metrics for agile quality we encountered some challenges. Firstly, questions emerged regarding the selection of indicators for people, process, and outcome. Our option was to look across literature and within the organisation for existing indicators. This minimized the overhead in deploying the tool in practice and we reduced the number of indicators to a maximum of four for each type of metric. We selected indicators that were directly relevant to the project and company’s priorities at that time. However, we have constructed the model so that in future, the set of indicators can adapt. How to allocate weight to each dimension of the indicator and its grading was also cause for debate. The weights were selected by the managers in this cycle but we intend to provide a workshop in future cycles to define the indicators and weights, according to the guidelines presented in Tables 1 and 2. Allocating values to each indicator proved to be an incredibly insightful process. It opened up discussions as to what constitutes agile quality, the prescribed or extended practices, organizational goals and so on. In Figure 3, the weighted value of the indicator (60) is the least important part when compared to the [process] debate that included the search for solutions and opening communication between team members and management.
On analysing the “process” metric in Figure 4, consider the indicators “Open Incidents” and “% Incidents - expired due date”: initially, there was a decrease in both indicators over the last week (‘100’ for evidence) and both were clearly below their established target (‘100’ for expectations), but the team highlighted that customer holidays are usually a period with fewer incidents, so the numbers are not justifiably comparable with other periods. They considered this ‘normal’ but not excellent, the latter of which would be the interpretation if we only looked at the value compared to a pre-determined target. Outcome-related metrics (Figure 5) are also insightful: for example, (1) on initial inspection, “failed features” present worrying results, but the reason attributed to this was external to the team (problems in information completeness); (2) “critical defects sent by customers” clearly improved compared to the target (expectation) and past values (evidence), but the team attributed this to a reduction in system updates; and (3) “% of improvement features” increased compared to previous periods (‘100’ in evidence), while still not on target (‘50’ in expectations). The main reason attributed to this improvement was that, being a percentage, it increased because the total number of features decreased, making the number of improvements more significant in an artificial (rather than meaningful) way.
Corrective actions and improvement actions are important to this process (for readability purposes in this paper, we only include an example in Figure 3, for people, to “contest ideas”). According to Oza and Korkala (2012), “it is not sufficient to merely collect all possible metrics but driving the culture of continuous measurement is imperative”. The practitioners in this study consider this model an improvement for agile metrics that adheres to agile principles, particularly those associated with interaction. There are also difficulties inherent to our use of composite metrics, namely (1) it is always necessary to see the values of the three dimensions to understand the result, (2) it is a contextualized evaluation and cannot be used to compare different companies – although it may be used to compare different in-house projects, and (3) it includes a subjective part of evaluation that makes the value representative of the team’s reality. The same difficulties can simultaneously provide potential improvements for quality in teams because they (1) require teams to specify their own metrics, (2) provide ongoing adherence to practice, and (3) promote debate and critical reflection, which is an intrinsic part of our method and complements agile techniques.
6 Conclusion
This action research project was set up to develop socially-constructed metrics for agile quality. We conducted a diagnosis at our practitioner organization and a literature review to establish the theoretical frame of reference. Then, we designed and implemented our action plan to (1) propose a model to create socially-constructed metrics for agile quality, (2) define indicators, (3) establish their structure, and (4) create simple tools to assist participants in using the metrics. The findings suggest that socially-constructed metrics can provide a new way of assessing and improving agile quality, adhering to the most crucial values of agile: the focus on people and their involvement; simplified support processes for quality management; stakeholder collaboration; and accepting change as a part of the development process (Beck et al. 2001).
We concluded that it is necessary to consider three main types of socially-constructed metrics: (1) people-related, (2) process-related, and (3) outcome-related. We suggest that a small set of indicators should be used but, in line with adaptive project management, companies should allow these to change over time according to project requirements. Moreover, we suggest that socially-constructed metrics should include three interrelated dimensions: (1) evidence from practice; (2) stakeholder expectations; and (3) stakeholder evaluation. The metrics dashboard includes indicators that are easy to obtain and allocates rules to promote improvement and critical analysis. The limitations of this study act as a starting point for planning future research. First, this is the first CAR cycle of our research and, although it was developed in a highly demanding context (healthcare development), it is necessary to test our model in different settings. Secondly, the benefits of our method are only assessed by the researchers and the organizational team, omitting auditors, partners, other teams, and customers. The next cycle may include other stakeholders and explore the contrast of viewpoints within and among agile teams. Thirdly, we also identified difficulties during method execution (such as defining which indicators to use) that could benefit from a taxonomy of metrics for the three types. Fourth, there is an opportunity to achieve a rich agile quality index that organizations can use to (self-)evaluate their improvement efforts and the efficacy of improvement actions by comparing how the indicators change over time. The index can be the result of a weighted average of all the indicators in the company, opening opportunities for agile quality dashboards. Finally, due to our focus on developing metrics and tools, we could not fully explore the social changes (e.g., knowledge transfer, team motivation) involved in the systematic debate using metrics in daily meetings, retrospectives, and audits. Future research can help address these challenges and contribute to understanding the effect of using the artifact in organizations. We see potential for socially-constructed metrics to inspire other researchers to use, improve, change, and extend metrics in other fields, for example in other software development approaches, fostering participative assessment and improvement of quality, or in business process management as a participative form of evaluating and improving business processes.
7 References
Copyright: © 2016 authors. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 Australia License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and ACIS are credited.
Leveraging the Ubiquitous Web as a Secure Context-aware Platform for Adaptive Applications
Heiko Desruelle, Frank Gielen
Dept. of Information Technology – IBCN
Ghent University – IBBT
Ghent, Belgium
{heiko.desruelle, frank.gielen}@intec.ugent.be
John Lyle
Dept. of Computer Science
University of Oxford
Oxford, UK
john.lyle@cs.ox.ac.uk
Abstract—The availability as well as diversity of connected devices has turned the Internet into a ubiquitous concept. In addition to desktop and laptop PCs, the Internet also connects numerous mobile devices, home entertainment systems, and even in-car units. Various new types of software applications arise, trying to make optimal use of this trend. However, as the fragmentation of devices and platforms grows, application developers are increasingly facing the need to cover a wider variety of target devices. Maintaining a viable balance between development costs and market coverage has turned out to be a challenging issue when developing applications for such a ubiquitous ecosystem. In this paper, we present the Webinos approach, a distributed web runtime that adaptively leverages the device-independent characteristics of the Web. By introducing the concept of context-aware Personal Zones, the Webinos platform aims to facilitate the development of self-adaptive and immersive applications, optimized for ubiquitous computing environments.
Keywords—ubiquitous web; context-aware platform; distributed runtime; adaptive applications; webinos
I. INTRODUCTION
The Internet is drastically changing the way people work and live. As the diversity of connected devices is increasing rapidly, the Internet is penetrating our everyday lives through a multitude of devices. From desktop and laptop PCs, to mobile devices, to home entertainment systems and even in-car headunits, users throughout all consumer segments should prepare for a connected experience [1]. The evolution towards a ubiquitous Internet creates the opportunity for numerous new and innovative software applications. The main driver for such applications would be to seamlessly enable the inherently nomadic character of a ubiquitous system. Furthermore, this driver should aim to enable users to access and share information whenever and wherever they want, regardless of the device type that is being used to initiate the operation.
The development and deployment of applications for such a ubiquitous ecosystem, however, introduces an important series of resource-consuming requirements [2]. The available combinations of hardware characteristics, operating systems, software frameworks, etc. are virtually endless. For software developers, this diversity has turned out to be a double-edged asset. It gives consumers the freedom to use applications at will across several devices. On the other hand, device diversity heavily fragments the application’s delivery targets. In the absence of a general native development solution, developers often have no alternative but to create and maintain a set of device-dependent versions of their applications. Hence, ensuring a viable balance between development costs and an application’s market coverage will more than ever become a challenging issue.
Against this backdrop, the use of web technologies for application development purposes has proven to be a viable long-term candidate solution [3]. Through years of standardization efforts and the wide adoption of languages such as HTML, CSS, and JavaScript, the web can be deployed as a powerful foundation for universal application development and delivery. Running on top of the Internet infrastructure, the web application ideology is rapidly gaining momentum amongst developers.
A web-based application development approach has been explored from various perspectives. Developers can opt for pure web applications, running in a standard browser environment. However, due to the sandboxed nature of browsers, this approach drastically limits the APIs (Application Programming Interfaces) available for accessing the underlying device. In turn, a hybrid web application approach was introduced, providing developers access to a richer API set whilst still maintaining most of the cross-platform advantages of pure web applications. This type of application is still built using web technology, but no longer uses the browser as the client-side runtime environment. A separate client-side web runtime framework is deployed to bridge the gap between native and web applications by granting the application scripting access to most device APIs. Hybrid web applications are currently being developed using web widget engines such as those provided by the BONDI/WAC [4] initiatives, device-independent frameworks such as the PhoneGap [5] application wrapper, and even completely web-centric operating systems such as Chromium OS [6] and HP webOS [7].
Current hybrid web application solutions, however, only partially succeed in enabling a convincing ubiquitous experience [8]. Their main focus lies with porting traditional API support and operating system aspects to the web. Applications built upon these old principles result in virtual silos, unable to truly cross the physical boundaries of a device. By neglecting the evolution towards, e.g., distributed user interfaces and adaptive context-aware application behavior, the true immersive nature of ubiquitous computing is mostly left behind [9] [10]. The absence of elaborate context-awareness is a key element driving this issue. In order for ubiquitous applications to adaptively support various contextual situations, the underlying application platform needs to provide structured, as well as secure and up-to-date, access to the user's contextual setting. This requirement is not limited to providing access to a detailed description of the target delivery context (screen size, interaction methods, available sensors, etc.). Structured access to details regarding the user context (personal preferences, social context, disabilities, etc.) and the physical environment (location, time, etc.) ought to be supported as well.
From this perspective, we introduce the Webinos approach, a platform aiming to support hybrid web applications across mobile, PC, home media and in-car devices. Structured along a federated hierarchy, the proposed architecture enables developers to access a common set of rich context-aware APIs, allowing applications to dynamically adapt their cross-user, cross-service, and cross-device functionality in an open yet secure manner.
The remainder of this paper is structured as follows. Section II discusses background and related work. Section III provides a general overview of the proposed federated application platform. Section IV elaborates on the platform details of setting up secure context-awareness support. Section V discusses the use case of an adaptive social networking application. Finally, the conclusion and future work are outlined in Section VI.
II. BACKGROUND AND RELATED WORK
The availability of detailed and reliable metadata regarding a user's contextual situation is an important driver for enabling rich ubiquitous applications. The exact entities represented by this contextual information can be of a very dynamic nature, potentially affecting the consumer's expectations towards the application's user interface, behavior, content, etc. In initial context-aware research, the context of use was considered a component containing only two parameters: the end-user's location and the set of objects in the immediate vicinity [11]. The subsequent introduction of extensible contextual categories has drastically increased the flexibility of this definition. Chen and Kotz identified five base context categories: the device context, the user context, the environment context, the time context, and the historical context [12].
The device context describes the characteristics of the target device that is being used to access the application. A ubiquitous ecosystem covers a diversity of screen sizes, interaction methods, software support, etc. In web-based environments, the device capabilities are generally retrieved through Resource Description Framework (RDF) device profiles, i.e., User Agent Profile (UAProf) [13] and Composite Capability/Preference Profiles (CC/PP) [14]. The necessary device identification step in this process is handled through HTTP header user agent matching. In order to facilitate the collection and aggregation of these device profiles, the W3C Mobile Web Initiative (MWI) standardized the Device Description Repository (DDR) specification. The specification provides an API and its associated vocabulary for structured access to context provider services [15]. In essence, a DDR thus provides a standardized means for retrieving a-priori contextual knowledge about the characteristics of a particular target device or web runtime. Various open as well as proprietary DDR implementations are actively being maintained, most notably OpenDDR, WURFL, and DeviceAtlas.
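To make the DDR-style lookup concrete, the following JavaScript sketch shows how a minimal device description repository could be queried by User-Agent string. The repository content and the property names (displayWidth, touch) are illustrative assumptions, not the normative W3C DDR vocabulary or API.

```javascript
// Hedged sketch: a minimal in-memory device description repository,
// keyed by User-Agent substrings. Property names are illustrative only.
const deviceProfiles = [
  { match: /Android/, properties: { displayWidth: 720, displayHeight: 1280, touch: true } },
  { match: /SmartTV/, properties: { displayWidth: 1920, displayHeight: 1080, touch: false } }
];

// Resolve a device context from an HTTP User-Agent header, with a fallback profile.
function lookupDeviceContext(userAgent) {
  const entry = deviceProfiles.find((profile) => profile.match.test(userAgent));
  return entry ? entry.properties : { displayWidth: 1024, displayHeight: 768, touch: false };
}

// Example: adapt the UI based on the retrieved delivery context.
const ctx = lookupDeviceContext("Mozilla/5.0 (Linux; Android 4.0; ...)");
console.log(ctx.touch ? "enable touch navigation" : "enable pointer navigation");
```

A production DDR implementation such as WURFL or DeviceAtlas resolves far richer profiles, but the principle of mapping a User-Agent to a-priori device knowledge is the same.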
In a ubiquitous setting, the end-user's profile description gains more and more importance. Besides exposing information on user preferences and experience, this model should also comprise knowledge regarding the user's specific abilities and disabilities, e.g., to satisfy accessibility requirements for elderly people and people with disabilities. From this perspective, Heckmann proposed the GUMO formalism as a general user model ontology for representing generic user descriptions using the Web Ontology Language (OWL) [16]. The current challenge in this domain is modeling the enormous amount of parameters and relationships that characterize the user context [17]. To overcome this issue, efforts are being joined with other ontology-driven projects such as Linked Data [18] and UbisWorld [19].
The environment-, time-, and historical context aspects define where, how, and when the interaction between the user and an application is exactly taking place. The environment context is specified by observing the numerous sensors available on the user's device (e.g., location, temperature, network service discovery, the level of background noise, etc.). Furthermore, the notion of time and historical context is not to be neglected. As context is a dynamic concept, support for temporal pattern recognition and management is needed. The W3C Ubiquitous Web Domain is currently in the process of standardizing the Delivery Context Ontology (DCO) specification [20]. The DCO provides a formal model of the characteristics of the environment in which devices, applications, and services are operating.
III. WEBINOS HYBRID APPLICATION PLATFORM
In order to enable application developers to set up services that fade out the physical boundaries of a device, we propose the Webinos architecture. Webinos is a federated web application platform whose runtime components are distributed over the devices as well as the cloud. Figure 1 depicts a high-level overview of the platform's structure and deployment. The system's seamless interconnection principle is centered around the notion of a so-called Personal Zone. The Personal Zone represents a secure overlay network, virtually grouping a user's personal devices and services. To enable external access to and from the devices and services in this zone, the Webinos platform defines centralized Personal Zone Hub (PZH) components. Each user has his own PZH instance running in the cloud. The PZH is a key element in this architecture, as it contains a centralized repository of all contextual data in the Personal Zone. Moreover, the PZH keeps track of all devices and services in the zone and provides functionality to enable their mutual communication. This way, the PZH facilitates cross-device interaction with a user's services over the Internet. The PZHs are federated, allowing applications to easily discover and share data and services.
On the device side, a Personal Zone Proxy (PZP) component is deployed. The PZP handles the direct communication with the zone's PZH. In order to keep the user's Personal Zone synchronized, the PZP is responsible for communicating device status to its PZH. This communication channel is built around a publisher-subscriber pattern. As all external communication goes through the PZP, this component also acts as a policy enforcement point by managing all access to the device's exposed resources. In addition, the PZP is a fundamental component in upholding the Webinos platform's offline usage support. Although the proposed platform is designed with a strong focus on benefiting from online usage, all devices in the Personal Zone have access to a locally synchronized copy of the data maintained by the PZH. The PZP can thus act in place of the PZH in case no reliable Internet connection can be established. This allows users to still operate the basic functionality of their applications even while being offline. All data to and from the PZP is again synchronized with the PZH as soon as Internet access is restored.
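As an illustration of the publisher-subscriber synchronization described above, the sketch below models a PZH and a PZP exchanging status updates in Node.js. The class names, event names, and interfaces are assumptions made for illustration; they do not reflect the actual Webinos component interfaces.

```javascript
// Hedged sketch of PZP-to-PZH status synchronization, modelled as a
// publisher-subscriber channel. Names and interfaces are illustrative.
const { EventEmitter } = require("events");

class PersonalZoneHub extends EventEmitter {
  constructor() {
    super();
    this.zoneState = {}; // centralized view of all devices in the zone
    this.on("status-update", (deviceId, status) => {
      this.zoneState[deviceId] = { ...status, updatedAt: Date.now() };
      this.emit("zone-sync", this.zoneState); // push the new state to every PZP
    });
  }
}

class PersonalZoneProxy {
  constructor(deviceId, hub) {
    this.deviceId = deviceId;
    this.hub = hub;
    this.localCopy = {}; // local replica, usable while offline
    hub.on("zone-sync", (state) => { this.localCopy = state; });
  }
  publishStatus(status) { // e.g. connectivity or battery changes on the device
    this.hub.emit("status-update", this.deviceId, status);
  }
}

const hub = new PersonalZoneHub();
const phone = new PersonalZoneProxy("phone", hub);
phone.publishStatus({ online: true, battery: 0.82 });
console.log(phone.localCopy); // the phone now holds the synchronized zone state
```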
The Web Runtime (WRT) represents the last main component in the Webinos architecture. The WRT can be considered an extension of a traditional web browser engine (e.g., WebKit, Mozilla Gecko). The WRT contains all necessary components for running and rendering web applications specified using standardized web technologies: HTML parser, JavaScript engine, CSS processor, rendering engine, etc. Furthermore, the WRT maintains a tight binding with the local PZP. The WRT-PZP binding allows the WRT to be much more powerful than traditional browser-based application environments. Through this binding, applications running in the WRT are able to securely interface with local device APIs and services. In addition, the PZP also allows the runtime to connect and synchronize with other devices in the Personal Zone through its binding with the PZH.
IV. SECURE CONTEXT-AWARE PERSONAL ZONE
The innovative nature of the proposed approach lies with the platform's capability to establish a cross-device, cross-service, cross-user overlay network. For this Personal Zone concept to be successfully adopted by ubiquitous application developers, the platform needs to provide developers access to a rich at-runtime overview of the user's contextual setting. As stipulated in Section I, elaborate platform support for transparent context management is vital. In this section, we provide more detail on the available developer tools for setting up secure context-awareness within a Personal Zone environment.
A. Delivery Context Model
The Webinos delivery context model is defined to span all the platform's contextual knowledge within the user's Personal Zone. The model builds upon the W3C's Delivery Context Ontology (DCO) specification [20]. The Webinos delivery context model comprises four top-level submodels: the user context, the device context, the environment context, and the application context (see Figure 2 for a high-level overview). The first three submodels are internally managed and updated by the Webinos platform, whilst the application context model is to be maintained by the application developer. In order for each of these proposed models to support historical evaluation, pattern detection, and conflict resolution strategies, all stored context properties are timestamped. The contextual information regarding the Personal Zone's owner is described by the user context model. This model consists of an aggregation of user profile data, user preferences, social context information, etc. Furthermore, each device and its physical environment are described by a separate instance of, respectively, the device context model and the environment context model. A device context model comprises knowledge regarding the corresponding device's availability in the Personal Zone, hardware characteristics, supported software, etc. The environment context model contains a description of a certain device's location, surrounding noise levels, etc. Lastly, the application context model provides developers the freedom to store a number of contextual properties describing a situation from the perspective of their application.
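A minimal sketch of how timestamped context properties across the four submodels could be represented is shown below; all field names and submodel identifiers are illustrative assumptions rather than the normative Webinos context model.

```javascript
// Hedged sketch of timestamped context records for the four Webinos
// delivery context submodels. All field names are illustrative assumptions.
function contextProperty(model, subject, name, value) {
  return { model, subject, name, value, timestamp: Date.now() };
}

const zoneContext = [
  contextProperty("user",        "alice",       "preferredLanguage",  "en"),
  contextProperty("device",      "alice-tv",    "screenDiagonalInch", 46),
  contextProperty("environment", "alice-phone", "location", { lat: 50.85, lon: 4.35 }),
  contextProperty("application", "photo-app",   "lastSharedAlbum", "holiday-2012")
];

// Timestamps make historical evaluation and conflict resolution possible,
// e.g. keeping only the most recent value for a given (subject, name) pair.
function latest(records, subject, name) {
  return records
    .filter((r) => r.subject === subject && r.name === name)
    .sort((a, b) => b.timestamp - a.timestamp)[0];
}

console.log(latest(zoneContext, "alice-tv", "screenDiagonalInch"));
```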
B. Context Framework and API
The Webinos context framework is built on top of the above-described context models. As depicted in Figure 1, providing application developers access to an elaborate distributed context framework is one of the core Webinos services. The Webinos context framework provides all necessary functionality for acquiring and storing context data, inferring new knowledge, and granting external access to the available contextualized data. Web applications running in the Webinos WRT, as well as other Webinos services, can rely on this framework to support their at-runtime need for contextualized data.
The Webinos Personal Zone is structured as a distributed system. In order to keep the zone synchronized, strong communication facilities between the device PZPs and the centralized PZH have architecturally been put in place. The Webinos context framework tries to make optimal use of this structured communication channel to gain additional contextual knowledge regarding the Personal Zone. The context framework hooks into the PZP's event dispatching and synchronization mechanism. As visualized in Figure 3, outbound status events are intercepted by the framework's context acquisition component and subsequently filtered for relevant data. The extracted context is locally stored and synchronized with the rest of the zone through a context-update event over the communication channel. The context acquisition process is autonomously managed by the Webinos platform and operates completely transparently to both the user and application developers. Moreover, the context framework is closely coupled with the PZP's security and policy enforcement framework. This binding ensures the secure handling of all context data that is being stored and accessed, as it often contains highly sensitive information.
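The following hedged JavaScript sketch illustrates this acquisition idea: outbound PZP status events are intercepted, filtered for context-relevant fields, stored with a timestamp, and re-emitted as a context-update event. The filter rules and event shapes are assumptions for illustration only.

```javascript
// Hedged sketch: intercept outbound PZP status events, extract context-relevant
// fields, store them with a timestamp, and emit a context-update. Illustrative only.
const RELEVANT_KEYS = new Set(["location", "battery", "activeDisplay"]);

function interceptOutboundEvent(event, contextStore, forward) {
  // 1. Keep only the fields that carry contextual meaning.
  const extracted = Object.fromEntries(
    Object.entries(event.payload).filter(([key]) => RELEVANT_KEYS.has(key))
  );
  // 2. Store locally (timestamped) and announce it zone-wide.
  if (Object.keys(extracted).length > 0) {
    contextStore.push({ device: event.source, data: extracted, timestamp: Date.now() });
    forward({ type: "context-update", source: event.source, data: extracted });
  }
  // 3. Always forward the original event unchanged.
  forward(event);
}

// Example usage with an in-memory store and a logging transport.
const store = [];
interceptOutboundEvent(
  { source: "phone", payload: { battery: 0.4, volume: 7, location: "home" } },
  store,
  (msg) => console.log("sent:", msg)
);
```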
For application developers aiming to create context-aware ubiquitous applications, the context framework provides an API to access Personal Zone-wide context information. The context API supports the W3C-standardized SPARQL RDF query language for unambiguously stating powerful context queries [22]. All context API requests are passed to the query processor component. The processor parses the request and checks its execution rights in collaboration with the PZP's policy enforcement framework. In case the request is granted by the PZP, the query is optimized and dispatched for execution. The API supports two modes for accessing context information: a generic query mode and a change subscription mode. The generic query mode allows applications to execute targeted queries for specific context data in the storage system. The change subscription mode, on the other hand, enables an application to subscribe to specific context update events. These events are triggered by the context framework when new contextual knowledge is acquired.
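The sketch below illustrates the two access modes with a hypothetical context API handle; the method names (executeQuery, subscribe) and the RDF vocabulary are assumptions for illustration, not the normative Webinos context API.

```javascript
// Hedged sketch of the two context API access modes. The contextApi handle,
// method names, and RDF vocabulary are illustrative assumptions.
const contextApi = {
  executeQuery: (sparql, onResult) => onResult([]), // would go to the query processor
  subscribe: (selector, onChange) => { /* register for matching context-update events */ }
};

// Generic query mode: a targeted SPARQL query against the zone's context store.
const query = `
  PREFIX ctx: <http://example.org/webinos/context#>
  SELECT ?device ?location WHERE {
    ?device ctx:hasLocation ?location .
    ?device ctx:isAvailable true .
  }`;
contextApi.executeQuery(query, (rows) => {
  rows.forEach((row) => console.log(row.device, "is at", row.location));
});

// Change subscription mode: react whenever new contextual knowledge is acquired.
contextApi.subscribe({ model: "environment", property: "location" }, (update) => {
  console.log("location of", update.subject, "changed to", update.value);
});
```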
Figure 3. Webinos platform’s distributed context framework, enabled through its tight integration with the Personal Zone.
C. Policy Enforcement Support
The Webinos platform aims to meet the security and privacy requirements of applications and end users primarily through an access control policy system. Every access to a Webinos API is mediated by policies, enforced by the Personal Zone Proxies on each device as well as in the Personal Zone Hub. This enforcement follows the principle of least privilege, granting applications only the permissions they require. Access policies are set when an application is first installed and can be updated subsequently. The policy system is derived from the BONDI/WAC architecture [4] and uses XACML (eXtensible Access Control Markup Language), including a number of extensions developed by the PrimeLife project [23]. XACML is a general-purpose access control language for defining policies based on subjects, resources, actions, and conditions [24]. By including the PrimeLife XACML extensions, the Webinos policy enforcement framework can allow users to specify detailed situation-specific access control policies. This is a significant advantage over current web runtime solutions and native mobile application platforms, where once an application has been granted access to a particular asset, this access can be reused without further control.
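The following simplified sketch conveys the least-privilege evaluation idea at the PZP; the rule structure is a deliberately reduced stand-in in the spirit of XACML and does not reproduce the actual XACML schema or the PrimeLife extensions.

```javascript
// Hedged sketch of least-privilege policy evaluation at the PZP. The rule
// structure is a reduced stand-in in the spirit of XACML, not real XACML.
const policies = [
  { subject: "photo-app", resource: "api://contacts", action: "read", effect: "permit",
    condition: (ctx) => ctx.deviceOwnerPresent === true },
  { subject: "photo-app", resource: "api://location", action: "read", effect: "deny" }
];

function evaluateRequest(request, ctx) {
  for (const rule of policies) {
    const matches = rule.subject === request.subject &&
                    rule.resource === request.resource &&
                    rule.action === request.action &&
                    (!rule.condition || rule.condition(ctx));
    if (matches) return rule.effect;
  }
  return "deny"; // default deny: applications only get what they were explicitly granted
}

console.log(evaluateRequest(
  { subject: "photo-app", resource: "api://contacts", action: "read" },
  { deviceOwnerPresent: true }
)); // "permit"
```

The situation-specific condition attached to the first rule is what distinguishes this style of policy from a one-time, install-only permission grant.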
Context data is often privacy-sensitive, as its analysis might reveal a user's history of actions or the people and devices they have interacted with. The Webinos platform aims to follow the principle of least surprise, so that a minimum of unexpected data disclosures will occur. This is achieved by disabling the collection of most context data by default, and providing the user a simple interface to turn it on, complete with feedback about the kind of data that is being shared and stored. Where possible, data is filtered to remove unnecessary personal data. The main advantage of the Webinos platform is that context data remains within the zone and under the control of the end user. This compares favorably to online user tracking, as users are able to view and manage the data stored about them, and applications have to request specific access to this information.
V. USE CASE
To elaborate on the application possibilities of the Webinos approach, we present a use case that has been built with the platform. The application is a cross-device social media app, able to use the APIs of television sets, mobile devices, and desktop computers within a Personal Zone. The application utilizes the platform's default knowledge of a user's devices as well as their exposed capabilities and services. Users can set policies for dispatching the application's system API calls to alternative devices. As a result, the input (i.e., multimedia access, text input modality, contacts retrieval, etc.) and output (i.e., display selection) operations are adaptively abstracted from the traditional physical device level to the Personal Zone level. For example, if a user wants to post a new message to one of his contacts on the Twitter social network, he can use his television set to display the main UI, use his smartphone as an interaction device for navigating through the interface, entering text, and accessing the device's contact list, and access his home media center to attach a video to the post.
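A hedged sketch of how such Personal Zone level dispatching could look is given below; the device roles, service names, and dispatch helper are illustrative assumptions rather than the actual Webinos APIs.

```javascript
// Hedged sketch of Personal Zone level dispatching for the social media use case.
// Device roles, service names, and the dispatch helper are illustrative.
const zoneDevices = {
  tv:          { services: ["display"] },
  phone:       { services: ["textInput", "contacts", "navigation"] },
  mediaCenter: { services: ["mediaLibrary"] }
};

// Pick the first device in the zone that exposes the requested service.
function dispatch(service) {
  const device = Object.keys(zoneDevices)
    .find((name) => zoneDevices[name].services.includes(service));
  if (!device) throw new Error("no device in the zone offers " + service);
  return { device, service };
}

// Posting a message: UI on the TV, text and contacts from the phone,
// video attachment from the home media center.
console.log(dispatch("display"));      // { device: 'tv', service: 'display' }
console.log(dispatch("textInput"));    // { device: 'phone', service: 'textInput' }
console.log(dispatch("mediaLibrary")); // { device: 'mediaCenter', service: 'mediaLibrary' }
```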
A prototype of the proposed platform and use case has been implemented and made available as part of the Webinos open source project [25]. Based on the project's extensive analysis of the current ubiquitous ecosystem [26], the following prototype platforms have been selected: PC (Linux, Windows, MacOS), mobile (Android), vehicles (Linux), and home entertainment (Linux).
VI. CONCLUSION AND FUTURE WORK
In this paper we presented the Webinos application platform approach, aiming to enable immersive ubiquitous software applications by leveraging the cross-platform possibilities of the web. The proposed approach utilizes the web infrastructure to establish its Personal Zone concept, a virtual overlay network grouping all of a user's devices and available services. Through the federated structure of Personal Zones, Webinos is able to provide application developers access to elaborate at-runtime context data regarding the current user, his devices, and the surrounding environment. The availability of this information allows developers to more accurately anticipate a user's contextual situation. The Webinos platform's context-awareness enables numerous applications that make full use of the diversity and interconnectivity of devices. From this perspective, Webinos aims to be a key enabler in the realization of ubiquitous applications that are able to execute across the physical boundaries of devices.
While the extensive evaluation of our approach has yet to be carried out, initial testing of prototype implementations shows promising results. Although the proposed platform addresses challenging issues in the ubiquitous application development domain, the current architecture only represents a first milestone in the pursuit of true ubiquitous application convergence. Whilst the Webinos platform provides structured access to rich contextual knowledge, it is still the application developers' responsibility to incorporate the necessary logic that allows their applications to act accordingly. Therefore, future work should include research on further extending the platform with (semi-)automated application adaptation mechanisms, driven by the platform's rich context-awareness. Regarding the privacy and security impact of such an application runtime, there will undoubtedly be a need to further experiment with user interfaces, in order to strike an acceptable balance between the advantages that context sensitivity can offer on the one hand, and privacy and user and developer convenience on the other.
ACKNOWLEDGMENTS
The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7-ICT-2009-5, Objective 1.2) under grant agreement number 257103 (Webinos project).
REFERENCES
WHAT IS SPECIAL ABOUT PLC SOFTWARE MODEL CHECKING?
D. Darvas*, E. Blanco Viñuela, CERN, Geneva, Switzerland
I. Majzik, Budapest University of Technology and Economics (BME), Budapest, Hungary
Abstract
Model checking is a formal verification technique to check given properties of models, designs or programs with mathematical precision. Due to its high knowledge and resource demand, the use of model checking is restricted mainly to core parts of highly critical systems. However, we and many other authors have argued that automated model checking of PLC programs is feasible and beneficial in practice. In this paper we aim to explain why model checking is applicable to PLC programs even though its use for software in general is too difficult. We present an overview of the particularities of PLC programs which influence the feasibility and complexity of their model checking. Furthermore, we list the main challenges in this domain and the solutions proposed in previous works.
INTRODUCTION AND MOTIVATION
The promise of model checking is to provide precise, mathematically sound means to check the satisfaction of given requirements on models, representing for example software. Although some tools are available (e.g. CBMC [1], BLAST, Bandera [2], DIVINE [3]), it is still difficult to use model checking on real-sized software in practice. One of the bottlenecks is the verification performance, the excessive need of resources for the successful verification.
Besides checking software written in general-purpose programming languages (e.g. C, C++, Java), there is active research focusing specifically on PLC (Programmable Logic Controller) programs. The topic has been studied by dozens of research groups over the last 20 years [4]. However, model checking is still far away from being easy to use or from being part of the state of the practice of PLC program development.
The reader may ask: what is the reason for targeting PLC model checking specifically? What makes this domain special and why is there a need for specific tools? What makes PLC model checking different from verifying general-purpose programs? This paper is dedicated to the specificities of PLC programs which facilitate their verification or, contrarily, make model checking more difficult. Our aim is to summarise the experience with PLC software model checking that we have acquired during the development of PLCverif [5], and to help formal verification researchers to specialise in this field, or to make their model checker tools applicable to the PLC program verification domain too.
The paper first overviews the difficulties and advantages arising from the domain specificities. Then the syntactic and semantic particularities of PLC programs are discussed. Finally, the need for environment modelling is mentioned. An extended version [6] of this paper is also available which contains more details and example programs.
DOMAIN SPECIFICITIES
Many of the differences between general-purpose and PLC programming languages, as well as between the available verification methods, originate from the differences in the respective domains. Therefore, we start by overviewing the most important properties and specificities of the PLC domain which influence the formal verification of PLC programs.
**“Medium criticality”** Except for trivial programs, it is difficult to imagine and prove absolute correctness or safety, just as absolute security. Instead of pursuing those ideals, a more pragmatic approach is needed: the verification costs and the risks of failure should be in balance. Formal verification is already often used where the cost of failure is exceptionally high: in case of highly critical systems (e.g. nuclear, railway or avionics systems) or systems produced in high quantities (e.g. microprocessors). Even the methods requiring special knowledge and lots of resources may be affordable in those cases. Contrarily, in case of systems with low criticality, deep analysis may not be required.
PLC systems are in the middle of this criticality scale: their criticality is often not high enough to afford an independent, specially skilled verification team. However, a potential failure or outage may cause significant economic losses, motivating a sound and detailed verification approach.
**Consequence.** PLC model checking approaches should be easily accessible, specifically targeting the PLC domain, without requiring unaffordable resources or having an excessive cost compared to the level of criticality.
**Advantage: Simple operations and data structure** In general, the functionality of PLC programs is simpler than most programs written in C or Java. PLC programs do not deal with graphical interfaces or large data structures; they do not create files and do not perform complex operations. All of these features, when present, may complicate software model checking.
**Consequence.** The simplicity of the programs makes model checking more feasible computationally. This makes the PLC domain a good target for formal verification.
**Difficulty: Variety of PLC languages** PLCs use special languages, not used outside this domain. Furthermore, there is a wide variety of PLC programming languages. IEC 61131, the relevant standard [7], defines five different languages: Structured Text (ST), Instruction List (IL), Function Block Diagram (FBD), Ladder Diagram (LD) and Sequential Function Chart (SFC). Furthermore, each vendor provides their own flavour, with minor or major differences compared to the standard. Siemens PLCs typically support Structured Control Language (SCL), Statement List (STL), Function Block Diagram (FBD), Ladder Logic (LAD), and S7-GRAPH, which correspond to the previously listed standard languages, respectively. The difference between some of them is minor (e.g. between LAD and LD), but in other cases it is very significant (e.g. between STL and IL, or SCL and ST).
**Consequence.** As PLC programs can mix these languages (e.g. a function written in FBD can call a function written in SCL), each language should be supported by a PLC program verification tool. Furthermore, as there are common parts in those languages (e.g. variable declarations), the language infrastructure of the verification tool (parser and program representation) should be generic and reusable.
**Difficulty: Different background knowledge of developers**
General purpose programming languages, their development environments and verification tools are typically developed “inside the community”: by software engineers, for software engineers. PLC programs, however, are often written by people with different skills and background knowledge: automation engineers, technicians, etc. The theory and practice of formal verification is often not part of the general curriculum of software engineers, making the application of model checking hard. This knowledge gap is even bigger and more severe in case of the PLC program developers.
**Consequence.** Special attention should be paid to bridge the semantic gap between the user and the verification tool. The tools should use inputs and outputs which are close to the users’ domain. For example, the PLCVerif tool [5] uses the PLC programs and requirement patterns based on English sentences as inputs, and the outputs are provided in an easy-to-understand, self-contained form, using concepts directly from the PLC domain.
**SYNTAX OF PLC LANGUAGES**
As mentioned earlier, PLC programs are written using a wide variety of programming languages. Since—according to their claims—Siemens is a market leader in the field of automation, we mainly focus on the languages supported by Siemens S7 PLCs, especially the high-level Structured Control Language (SCL), which is a variant of the Structured Text (ST) language defined in IEC 61131 [7].
In this section, we show that although the PLC programs are simpler, their syntax may actually be more complex than that of general-purpose programs.
**Difficulty: Complex syntax**
PLC programming languages—especially their Siemens variants—often have richer and more complex syntax than general-purpose programming languages supported by software model checkers. For example, C (the C99 version) contains 6 basic data types with built-in support⁵. Java contains 9, but SCL contains 16 base types, which was extended to 30 in the new version of the language supported by the new development environment (TIA Portal) and the new hardware (e.g. S7-1500).
**Consequence.** Development of the language infrastructure for PLC software model checking needs a lot of effort. As there is no good, reusable language infrastructure available, the entry cost of PLC program verification is high.
**Difficulty: No precise syntax definition**
The well-established general-purpose languages typically have precisely, often formally, defined syntax. For example, the syntax of C is standardized by ANSI, ISO and IEC (ISO/IEC 9899), and C# is defined by the ECMA-334 standard. The Java syntax is not standardized, but a detailed specification is provided by Oracle. The syntax of the standard PLC programming languages is defined in IEC 61131 (with some ambiguities [8,9]). However, obtaining a precise definition for the vendors' flavours is not always easy. Siemens provided syntax definitions for SCL version 5.3 [10] and for STL [11], which cover most, but not all, aspects. The authors are not aware of any precise syntax description of the new version of SCL supported by the new development environment, TIA Portal.
**Consequence.** As the available syntax definitions are partial or too vague, the only way to determine the precise syntax is through systematic trials with the compilers. Creating precise descriptions for the most commonly used PLC programming languages and open source, generic parser implementations could facilitate new researchers to focus on the PLC domain and also to focus the research efforts on the verification challenges.
**Difficulty: Absolute and symbolic addressing**
Each Siemens PLC program contains an editable symbol table, which assigns names ("symbols") to memory locations or program units. This allows the use of symbolic addressing, i.e. using names instead of absolute addresses. However, it is possible (although considered bad practice) to mix absolute and symbolic addresses. For example, `var1 := TRUE;`, `"var1" := TRUE;`⁶ and `M4.1 := TRUE;` can have the same meaning if there is a symbol `var1` defined for the memory location `M4.1`.
**Consequence.** In order to support real PLC applications, besides supporting the five languages, the symbol tables shall be supported too. The verification tools should be able to handle the mix of absolute and symbolic addresses, or at least warn the user when an object is referred to using several names.
---
⁵ Here we not only consider the basic numeric types of C, but also strings. Even though a C string is simply a character array, there is dedicated language-level support for string constants (e.g. `var = "teststring";`).
⁶ The quotation marks denote that `var1` is a symbol; however, in SCL v5.3 they can be omitted if this does not cause any confusion.
**Difficulty: Permissive grammars**
An additional challenge to be faced when developing the PLC language infrastructure is the permissiveness of the grammars. For example, there are at least six syntactic ways to refer to a given bit in the bit memory area: absolute access (e.g. M4.1, %M4.1, %MX4.1; the % and X are optional) and indexed access (e.g. M[4,1]). Furthermore, symbolic access is also possible, as mentioned before.
**Consequence.** The language infrastructure should have a uniform internal representation to hide these redundant details and simplify the verification task.
**Difficulty: Context-dependent grammar**
Another challenge in PLC software model checking arises from the context-dependent nature of the programming languages. For example, in the STL language "A A;" is a valid statement, where the first "A" stands for the AND operation and the second "A" denotes a Boolean variable with the name "A".
**Consequence.** These features of the language have to be taken into account when choosing the technology for the language infrastructure. For example, a parser that identifies the keywords first, independently from the context, cannot successfully parse a program written in STL due to the mentioned ambiguities, or certain workarounds are required. It also poses a challenge to provide a single, unified parser for SCL and STL.
SEMANTICS OF PLC LANGUAGES
Not only is the syntax of PLC programs rich; their semantics (i.e. the description of how the programs behave during execution) may also impose additional challenges compared to the formal verification of general-purpose programs.
**PLC execution semantics**
To provide verification for PLC programs, first the key semantic differences between general-purpose programs and PLC programs have to be understood.
PLC programs are typically executed cyclically. A cycle (so-called scan cycle or PLC cycle) consists of (1) sampling the physical inputs (and keeping their values stable in the memory), (2) executing the user code, and (3) assigning the computed outputs to the physical outputs. This ensures consistent input and output signals.
In Siemens PLCs, the scan cycle can be interrupted. Cyclic interrupts ensure the periodic execution of a certain piece of code. Diagnostic and error handling interrupts can also be defined. The interrupts and various operating system tasks (e.g. communication) can alter the length of the scan cycle. If the scan cycle exceeds the predefined length, an error-handling block will be executed.
There is a difference in the programming concepts too. Even though the latest IEC 61131 standard introduced object-oriented programming for PLCs, most programs still use functions and function blocks. A function block is a stateful function, the values of its variables (except for the temporary variables) are kept even after the execution of the block. The semantics of a function block is similar to a class that has a single member method in object-oriented languages.
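A minimal SCL sketch of such a stateful function block is shown below, assuming the classic S7-SCL syntax; the block and variable names are illustrative.

```scl
FUNCTION_BLOCK FB_Counter
VAR_INPUT
    enable : BOOL;
END_VAR
VAR_OUTPUT
    count : INT;
END_VAR
VAR
    // Static section: values declared here (and the outputs above) are
    // retained between scan cycles, which makes the block stateful.
    lastEnable : BOOL;
END_VAR
BEGIN
    // Count rising edges of "enable"; the retained state survives each cycle.
    IF enable AND NOT lastEnable THEN
        count := count + 1;
    END_IF;
    lastEnable := enable;
END_FUNCTION_BLOCK
```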
**Advantage: Simple memory handling**
The formal verification of PLC programs is greatly facilitated by their simple memory handling. PLC programs use static typing: variables are declared explicitly, with a given type. Variables are strongly typed: except for some safe cases, explicit type conversions are required between the different data types. However, it has to be noted that SCL permits the use of the special data types POINTER and ANY, which can store addresses of other data.
There is no dynamic memory allocation in PLC programs. All variables and data blocks are allocated statically, at compile-time.
Furthermore, in high-level PLC programming languages (e.g. SCL) pointers are rarely used. In lower-level languages (e.g. STL) pointer usage is sometimes unavoidable. However, even without using pointers explicitly, semantically equivalent constructs may be present. For example, IB[10] denotes the value of byte 10 in the input memory area. If the IB array is indexed with a variable (IB[var1]), then var1 practically behaves as a pointer.
**Consequence.** Due to simple control structures and the lack of dynamic memory allocation, many popular model checkers, e.g. NuSMV or UPPAAL, can efficiently be used to verify most PLC programs. To support all PLC programs, pointer support is required on the verification side.
**Difficulty: Imprecise semantics definition**
Having a precise, formal semantics for the input models is an obvious requirement for model checking. Unfortunately, there is no mathematically sound semantics definition for either the IEC 61131 languages or the Siemens PLC languages. Some reference manuals are available for SCL [10] and STL [11], but they are in some places ambiguous, imprecise or incorrect. For example, the SCL description does not define precisely the semantics of CASE statements, and the STL description incompletely and sometimes incorrectly defines the behaviour of the nesting stack used for complex Boolean operations.
PLC programs depend on a library of basic functions and function blocks, such as timers, data transmission blocks, special memory operations. Precise description (either formal definition or source code) is required for these program units too for the verification, but it is often not available.
**Consequence.** Developers of PLC verification tools cannot fully rely on the provided language descriptions and documentation. Systematic, rigorous experiments have to be conducted in order to explore the precise semantics of the different PLC program structures.
**No short-circuit evaluation**
The IEC 61131 standard permits short-circuit evaluation of logic expressions, i.e. the evaluation can be interrupted as soon as the result can be determined. However, our experiments showed that Siemens PLC programs do not use short-circuit evaluation. For example, in case of the "func1() OR func2()" expression, the function func2 will be called even if the return value of func1 is true (thus the expression will be evaluated to true independently from func2).
**Consequence.** This may facilitate the representation of PLC programs as control flow graphs.
**Difficulty: Timed behaviour** PLC programs often involve time-related behaviour, typically by using the timers defined in [7] (TP, TON, TOF). Accurate modelling and verification requires precise representation of time, which might make the verification task extremely difficult. In reality, PLC timers rely on the PLC’s real time representation. The elapsed time between two timer calls depends on the cycle time, which in turn relies on the executed methods, the precise type of the hardware, the communication between the PLC and other systems, etc.
**Consequence.** The verification tool should use an appropriate time representation, i.e. an appropriate trade-off between precision of modelling and needed resources. One possibility is to simplify the physical time handling and assume that each PLC cycle takes a non-deterministic amount of time, with the global time incremented by this value at once at the end of the cycle. Then effectively no time elapses during a PLC cycle, which may alter the behaviour of the timer blocks, but this was often found to be an acceptable trade-off. This representation may lead to false negatives, i.e. omitted faults. Other time representations could cause false positives (false error reports). The consequences of the chosen time representation shall be clearly described for the user, using the terminology of the PLC domain.
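For illustration, the following Structured Text sketch shows a typical use of the standard TON on-delay timer, whose behaviour depends on the real-time clock and the scan cycle length; the variable names are illustrative.

```scl
VAR
    startCmd   : BOOL;
    motorOn    : BOOL;
    startDelay : TON;   // IEC 61131-3 on-delay timer instance
END_VAR

// Q becomes TRUE only after IN has been TRUE for PT. How PT elapses depends on
// the real-time clock and the scan cycle length, which is exactly what makes
// an accurate formal model of such code difficult.
startDelay(IN := startCmd, PT := T#5s);
motorOn := startDelay.Q;
```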
**Difficulty: Semantics depending on the compiler and hardware version** The precise semantics of PLC programs may depend on various compiler settings, the used compiler and the hardware.
- Certain data types and languages are available only using certain hardware.
- The precise semantics of the programming languages depend on the development environment.
**Example.** Let D be a variable of type DINT (32 bit signed integer). Using the STEP 7 V5.5 development environment, the execution of the SCL assignment “D := INT#1 + 50000” will result in D=50001 (where “INT#1” denotes a 16 bit signed integer). However, the same code compiled using the TIA Portal development environment will result in D=−15535 on the same hardware due to the differences in typing rules.
- The behaviour depends also on the semantic settings. For example, some details of the SFC execution can be modified.
- The hardware configuration and the interrupt configuration can also influence the precise semantics of a given PLC program. Furthermore, this information is not included in the source code.
**Consequence.** Different semantic variants of PLC languages shall be supported, and the user shall be able to choose the appropriate one for each program under verification.
**Difficulty: Bit-level memory manipulation** PLC programs allow various low-level memory manipulations.
- Integer variables can also be treated as bit arrays by using explicit type conversion operators. The same behaviour is also possible by defining so-called views, practically declaring multiple variables mapped to the same memory location using the keyword AT in SCL.
- It is also possible to directly address a specific area in the memory (absolute addressing), independently from the variable borders. For example, DB1.DW3 refers to the WORD starting at byte 2 in the data block DB1. However, this memory location may represent several variables, or parts of different variables.
**Consequence.** The verification tool should either provide accurate, low-level representation of the PLC memory model (causing a high overhead), or at least provide static analysis methods to check situations where such advanced memory representation is required to check the PLC program under verification.
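The following SCL sketch illustrates the AT-based view overlay mentioned above; the exact syntax of AT declarations differs between SCL versions, so this fragment should be read as an illustration rather than compilable reference code.

```scl
VAR
    statusWord : WORD;
    // "View" on the same memory: statusBits overlays statusWord bit by bit.
    statusBits AT statusWord : ARRAY[0..15] OF BOOL;
END_VAR

statusWord := W#16#00FF;
IF statusBits[0] THEN
    statusBits[1] := TRUE;  // reads and writes individual bits of the same memory
END_IF;
```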
**ENVIRONMENT**
**Challenge: Environment model** PLCs are mainly used for process control tasks; therefore, they inherently interact with their environment. It is reasonable to check certain safety properties (i.e. that a given property is always satisfied, no matter what the input sequences are) without considering the environment during verification. However, for other types of requirements, having no assumption on the environment may lead to many false positives, i.e. non-satisfied requirements where the violation is practically impossible.
**Consequence.** To get practical, usable verification results, the model of the environment needs to be incorporated. This can exclude cases where for example only a physically impossible change in the controlled process could cause the signalled violation. In our opinion, there are three main challenges related to the environment models, as follows.
- It is difficult to find appropriate formalisms and to describe the environment (e.g. the controlled process) precisely.
- Including the environment model may significantly increase the computation resources required for model checking.
- An imprecise environment or process model may lead to false negative results, i.e. it can lead to the omission of real problems.
There are various attempts to precisely describe environment or process models and include them in various verification procedures [12–14]; however, we think that this still remains one of the greatest challenges in PLC model checking.
**Challenge: Fault assumptions** It is important to keep in mind that the input variables of the PLC programs often represent physical inputs. It is unrealistic to assume that all inputs are always correct. In other words, a "no failure" assumption in the environment model during verification may hide potential problems. The other extreme, assuming that everything can fail at the same time, may be unrealistic too, leading to useless counterexamples that undermine the usability of the method.
**Consequence.** The environment models shall be able to incorporate various assumptions. For example, a single failure hypothesis may be rational in some cases, but in other cases including the simultaneous failure of certain dependent signals in the verification may be desired too.
**OUR RESPONSE: PLCverif**
To overcome most of these challenges and to provide feasible, easy-to-use formal verification for PLC programs, CERN started the development of PLCverif [5]. With the ongoing development of PLCverif we aim to provide a generic tool and language infrastructure that can make the development or integration of new verification methods to the PLC domain significantly easier.
PLCverif hides the formal verification-related details from the user. Also, as it relies on a control flow graph-based intermediate representation that is independent from the PLC programming languages, this tool can hide many of the syntactic and semantic peculiarities of the PLC domain from the (formal) verification solutions.
Recognizing that the listed particularities make the development of any verification method challenging for PLC programs, PLCverif is opening towards supporting other verification techniques besides model checking, for example static code analysis and unit testing.
Although we have overcome many syntactic and semantic problems (except the ones which would have required an unreasonable amount of resources compared to their pertinence, such as properly supporting pointers), the lack of proper environment modelling limits the use of PLCverif to well-defined, isolated parts of PLC applications, such as individual function blocks or safety logic implementations.
**CONCLUSION**
In this paper many of the specific challenges of model checking PLC programs have been presented, as well as the features of those programs which can facilitate their formal verification. We believe that PLC model checking is still a research field with a lot of industrial attention and with many unsolved challenges.
On the one hand, PLC program verification is an ideal target for model checking due to the medium criticality and the relatively simple programs.
On the other hand, syntax and semantics of PLC programs are complex, which makes it difficult for non-PLC experts to contribute to verification, as the knowledge and development effort required for PLC program verification is high.
Open source, reusable language infrastructures could alleviate this challenge, allowing researchers to focus on the challenges of performance and clarity of results. We need a bridge not only between formal verification and the PLC developer community, but also between formal verification researchers and the industrial control systems domain. Furthermore, environment modelling is still a big challenge to be solved; solving it could significantly improve the practical applicability of model checking for PLC programs.
**REFERENCES**
Oracle’s Converged Database: How to Make Developers And Data More Productive
As enterprises digitize more business processes and decision points, they face a seemingly impossible choice—improve developer productivity now or data productivity later. But a radically new approach, Oracle’s converged database, breaks this impasse.
Purpose Statement
This document is intended to help CTOs, enterprise architects, and development managers understand the benefits of converged databases compared to single-purpose databases.
Intended Audience
The intended audience of this paper is I.T. leaders making decisions about the future of enterprise computing architecture, including CTOs, enterprise architects, and development managers.
Disclaimer
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. Your access to and use of this confidential material is subject to the terms and conditions of your Oracle software license and service agreement, which has been executed and with which you agree to comply. This document and information contained herein may not be disclosed, copied, reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole discretion of Oracle.
Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without risking significant destabilization of the code.
# TABLE OF CONTENTS
THE DILEMMA: DEVELOPER PRODUCTIVITY OR DATA PRODUCTIVITY
INTERNET CONSUMER SERVICES: WHERE DEVELOPERS REIGN
ENTERPRISE SAAS: WHERE DATA IS KING
THE KEY TO HAVING BOTH: ORACLE'S CONVERGED DATABASE
MULTI-MODEL: ONE ENGINE, MANY PERSONALITIES
MULTITENANT: CONSOLIDATION, ISOLATION, AND AGILITY
MULTI-WORKLOAD: DOING MANY JOBS AT ONCE
ORACLE AUTONOMOUS DATABASE: CONVERGED DATABASE AS A SERVICE
ORACLE'S CONVERGED DATABASE DELIVERS THE UNIFIED DATA TIER
CONCLUSION
THE DILEMMA: DEVELOPER PRODUCTIVITY OR DATA PRODUCTIVITY
Enterprises have to create unique data assets and make the most of them to remain competitive. But as companies create more applications, analytics, and AI to digitize more processes and decision points, they’re faced with a difficult choice: to optimize for either fast application development now or easier value creation from data later. In other words, developer productivity or data productivity.
To optimize for developer productivity, teams spin up single-purpose databases for specific projects. Each database offers a convenient data model for that purpose and a simple set of APIs, making it easier to start developing against them. However, as a project grows and additional single-purpose databases or cloud services are required, data fragments across these services. Each has its own tooling, security methods, and operational characteristics, risking inconsistent data, security gaps, and increased difficulty in using that data in critical reporting and analytical work.
To optimize for data productivity, teams build on instances of a corporate standard database, usually a relational database or a relational-based multi-model database. The corporate standard database enforces official policies and simplifies anticipated data reuse, like reporting, but limited functionality may slow or even prevent innovation. This risks putting the entire business at a disadvantage to the competition.
Neither one of these choices is acceptable. How did businesses get stuck with this dilemma? More importantly, how do they escape it?
INTERNET CONSUMER SERVICES: WHERE DEVELOPERS REIGN
When internet consumer services like search, ecommerce, and social media took off, they faced a set of requirements different from those of enterprise applications. They typically used relatively simple application logic because they supported piecemeal customer interactions rather than detailed business processes. They didn't connect into existing enterprise systems, and they preferred lightweight data models like documents, objects, and graphs that more closely fit their web-based app development methods. These services faced unprecedented volatility, with peaks of tens to hundreds of millions of users. And those users cared far more about the overall experience of the service than the integrity of the data it might capture.
To deal with these new requirements, internet pioneers invented their own data tier from individual services, each supporting a specific data model, access method and workload needs. Because these firms made money on the services they offered and not the technology they built, they open sourced many of their internal innovations.
Tech startups turned the resulting open source projects like Cassandra and Hadoop into commercial open source software products. Shortly thereafter, the first cloud infrastructure services appeared. The cloud providers and the open source software projects grew symbiotically until there were multiple cloud vendors each offering multiple commercial open source data management services alongside their own proprietary services.
These data tier cloud services embodied features that made sense for internet consumer services and embraced a specific set of trade-offs:
1. Availability over consistency at extreme scale. A down service is a useless service. Making sure the service remained available, even at the expense of applications having to manage inconsistent data, is paramount. Therefore, transaction support and data consistency were relaxed to improve performance at extreme scale.
2. Bitbuckets over database management systems (DBMSs). Bare-bones, single-purpose APIs that offered basic requests consistent with internet protocols, supporting a specific data model or workload such as JSON, graph, or analytics, were preferred. Although these minimal databases were too primitive for complex enterprise processes, they could still support the development of individual features of consumer internet apps.
3. Microservices over monoliths. If the application required more features and complexity, then multiple “bare-bones”, single-purpose services could be leveraged together to build richer applications while maintaining tolerance to individual service failures. This approach, based on microservices, allowed development teams to operate more independently of each other, potentially increasing agility and innovation.
GREATER FLEXIBILITY, BUT INCREASED OPERATIONAL COMPLEXITY AND RISK
By building on a new cloud data tier consisting of single-purpose databases intended for specific data types and workloads, early adopters gained faster development time for simple cloud consumer applications that needed to operate at extreme scales.
But integrating multiple, single-purpose databases to create a complete, highly available, and secure enterprise solution can quickly become complicated. It may require a lot of custom code and hard trade-offs to manage data fragmented across multiple services. Unfortunately, organizations bear the burden of the integration work required to make apps built on an array of single-purpose databases feasible on a large scale.
Personnel need to be knowledgeable about the operational aspects of each single-purpose database. Security policies need to be re-implemented in every database, and apps become more complex as they propagate data from one database to another. Ironically, combining “best-of-breed” single-purpose databases often results in the “worst of weaknesses” for the shared capabilities necessary to run an enterprise application, such as security, scalability, and replication. The weakest link, unfortunately, sets the level for all of them.
And integration has the potential to become a job that never ends as the underlying building-blocks are continually changing—leading to continuous re-integration, re-testing, re-tuning, and troubleshooting.
**ENTERPRISE SaaS: WHERE DATA IS KING**
On a parallel track from consumer internet apps but at roughly the same time, enterprise cloud applications, more commonly known as SaaS (Software-as-a-Service), reimagined enterprise apps for delivery over the internet. In contrast, enterprise SaaS apps used relatively complex application logic to support critical business processes. They often had to connect to existing enterprise systems. SaaS apps required extreme scale. And the business using them cared just as much about the data created through these apps as the apps themselves. Strict transactional consistency, as well as data integrity constraints, were non-negotiable.
As a result, enterprise SaaS vendors stuck with relational databases but pushed them, and the vendors like Oracle who built them, to new levels to meet these demands, including:
1. *High availability and consistency at very large scale.* ACID transactions were required for applications, data had to be correct, and systems available.
2. *Database management systems (DBMSs) over bitbuckets.* Relational databases supported complex queries and application logic plus reporting. It was essential for enterprise data to be available for analytics and reporting. But enterprise SaaS vendors needed support for new data models and workloads too.
3. *Monoliths and microservices as needed.* Instead of using a purely microservices-based approach, enterprise SaaS vendors used designs where most of their application code remained in a monolith, but microservices might be utilized to add functionality, such as mobile interfaces, or to improve performance in an isolated but critical section of code.
**THE KEY TO HAVING BOTH: ORACLE’S CONVERGED DATABASE**
Enterprises pursuing digital transformation face the same challenges as internet consumer services and enterprise cloud apps, but with the additional burden of making sure these new systems interact with existing ones.
Companies need to create consumer-facing mobile and web apps with the same rapid iteration and flexibility as the internet giants. They also have to provide IT services to multiple departments with the same agility as commercial SaaS providers. Plus, they have to make their existing enterprise systems use data from those new environments and contribute data back to them.
However, this is extremely difficult when each environment is built on different single-purpose databases with different operational, security, and performance profiles. Instead, firms need a unified data tier supporting all of these apps, analytics, and AI algorithms. This requires an innovation in data management—a converged database.
A converged database is a *multi-model, multitenant, multi-workload* database (Figure 1). It supports the data model and access method each development team wants, without unneeded functionality getting in the way. It provides both the consolidation and isolation these different teams want but don’t want to think about. And it excels in all the workloads (like OLTP, analytics, and IoT) these teams require. Oracle Database 19c is the world’s first converged database.
POWERFUL SYNERGIES
A good analogy for a converged database is a smartphone. Consider how smartphones integrated phone calls, messaging, a camera, calendar, music and other features into a single product when each initially required separate products. Now, these point products are mere features of smartphones. Smartphones rely on constantly improving custom integrated circuits produced at very high volume to achieve their small size, low power consumption, and unique features such as augmented reality, virtual reality, and 3D. With smartphones, synergy across features makes the whole better than the sum of parts, because of the tight integration and the new workflows this integration makes possible.
For example, the camera in your smartphone integrates with a lot of other applications. It automates the photo storage and backup process, allows pictures to be sent in emails and texts and easily posted on popular social media sites, and can even provide automated editing and color correction in real-time. The calendar is continuously updated since it uses the phone’s internet connectivity to sync with the cloud. The music app can stream music continuously from an extensive music library in the cloud. Each of these separate features is more capable and powerful in some ways compared to its standalone, single-purpose counterparts. Expensive, single-purpose, high-end cameras and stereo systems can still deliver higher quality pictures and sound than your smartphone camera and music app, but the economies of scale smartphones achieve versus these high end systems (with volumes in the millions versus only thousands for high end cameras) helps them narrow the gap over time.
The same ease of use, convenience and synergy you get from a smartphone also holds for a converged database. A converged database makes it much simpler to develop applications because standard SQL can be used to run very sophisticated machine learning, spatial, and graph algorithms instead of implementing these in separate databases and APIs. Instead of writing complex messaging and event code to weave data together, you can use standard SQL functionality like JOINs. If a developer prefers to program directly to the native data type without using SQL, Oracle provides rich APIs for the most popular data models, including JSON and graph, that support this.
Beyond the synergies and dynamic new workflows, built-in capabilities of Oracle’s converged database often surpass those of single-purpose databases. Oracle’s converged database delivers the most complete and highest quality feature sets for spatial, graph, JSON, machine learning capabilities, and more. Industry analysts have noticed this as well; ranking Oracle’s converged databases number one in all areas that are purportedly the focus of some niche, single-purpose database.¹ Choosing to use Oracle’s converged database does not mandate acceptance of lesser capabilities for the specific requirements of each data model, workload, or development paradigm.
MULTI-MODEL: ONE ENGINE, MANY PERSONALITIES
Let’s consider multi-model first: how does supporting different data models and their access methods in a single engine improve both developer productivity and data productivity?
¹ Gartner, Critical Capabilities for Operational Database Management Systems https://www.gartner.com/document/3975496
Unlike single-purpose databases, Oracle’s converged database supports JSON, XML, relational, spatial, graph, IoT, text and blockchain data with full joins, transactions, and other critical SQL features enterprises rely on (Figure 2). In addition, Oracle’s converged database also supports model-specific access methods for graph and spatial queries, as well as hundreds of common machine-learning algorithms. These abilities are accessible through RESTful APIs as well as stateful connections, leaving the choice in developers’ hands.
Oracle’s converged database goes even further in its support of JSON data commonly used in web and mobile applications. Simple Oracle Document Access (SODA) is a set of pre-built RESTful APIs that allow developers to create and query JSON collections without having to use SQL. However, unlike single-purpose document databases, Oracle Databases can generate a schema and indices from JSON objects to enable parallel SQL analytics, transactions, and joins of JSON data with spatial, graph, and relational data. In fact, one of the largest consumer electronics companies in the world uses the Oracle Database JSON features to support world-wide point-of-sale transactions.
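As a concrete illustration (not taken from this brief), the following sketch queries JSON order documents and joins them to a relational table in a single SQL statement from Python. The ORDERS and CUSTOMERS tables, the JSON paths, and the connection details are all hypothetical; it assumes the python-oracledb driver and a 19c or later database.

```python
# Minimal sketch only; table names, JSON paths, and credentials are hypothetical.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/pdb1")
cur = conn.cursor()

# JSON documents and relational rows joined in one standard SQL statement.
cur.execute("""
    SELECT c.customer_name,
           JSON_VALUE(o.doc, '$.items[0].sku')           AS first_sku,
           JSON_VALUE(o.doc, '$.total' RETURNING NUMBER) AS order_total
    FROM   orders o
    JOIN   customers c ON c.customer_id = JSON_VALUE(o.doc, '$.customerId')
    WHERE  JSON_VALUE(o.doc, '$.status') = :status
""", status="SHIPPED")

for name, sku, total in cur:
    print(name, sku, total)
```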
These multi-model capabilities allow enterprises to have both developer productivity now and data productivity later. They give developers simple API-driven access and model-specific languages, while still having recourse to powerful SQL capabilities whenever they want. Meanwhile, IT enjoys a common approach to security, upgrades, patching, and maintenance across all deployments of Oracle’s converged database.
MULTITENANT: CONSOLIDATION, ISOLATION, AND AGILITY
Containers, one of the most powerful ideas in application development today, got their start in 1979 as a way to isolate individual processes on a shared Unix machine. Containerization not only provides better isolation but, when coupled with orchestration of the containers, also virtualizes the underlying resources, leading to more efficiency, agility, and portability.
But just as OS containerization means something different from app containerization, so too does containerization of the database.
Unlike containerized apps which can disappear when no longer needed, databases need to persist data. Storing data durably and making it available for later usage is a core part of a database’s job. So how do you get container capabilities in the data tier? The Oracle converged database, with its unique multitenant features, incorporates containerization and orchestration within the data tier itself.
The multitenant architecture of Oracle’s converged database allows a single container database to support multiple pluggable databases (Figure 3). The Oracle container database is analogous to an app container engine in that it supports multiple pluggable databases (or, data containers), just as the app container engine supports multiple application containers.
Figure 3 (Oracle’s converged database provides containerization in the data tier) summarizes the three pillars of this architecture:

- **Consolidation**: a self-contained pluggable database for each app or service, with common operations (e.g., backup, upgrade) performed once at the container database level.
- **Isolation**: lockdown profiles, transparent encryption, resource isolation, and a single pluggable database per container database as needed.
- **Agility**: fast provisioning and online relocation across public cloud, local cloud, and on-premises.
Like application tier container orchestration frameworks, such as Kubernetes, Oracle’s multitenant cloud architecture leverages container databases to orchestrate pluggable databases in a unified data tier. With this multitenant architecture, Oracle’s converged database delivers:
- **Technical efficiency**: Supports multiple databases in isolated workspaces while sharing common infrastructure. This eliminates redundant replication of overheads, enabling more databases per server.
- **Operating efficiency**: Administrative costs for the common environment are divided between multiple databases, effectively allowing you to manage many as one.
- **Agility**: It’s simple to provision new databases, clone existing ones, or redeploy them on other platforms as needed for consolidation, isolation, or performance.
- **Ease-of-use**: Databases (and their schemas) run unchanged. Scalability is transparent. Centralized management reduces complexity. Simple SQL extensions control new multitenant capabilities, like rapid provisioning and cloning.
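To make those simple SQL extensions concrete, here is a minimal, hypothetical sketch of cloning and opening a pluggable database from Python. The names, credentials, and privileges are assumptions, and real environments may also require keystore and storage settings.

```python
# Sketch only: pluggable database names and credentials are hypothetical, and
# the connected user is assumed to hold the CREATE PLUGGABLE DATABASE privilege
# in the container database root; keystore/storage settings may also be needed.
import oracledb

cdb = oracledb.connect(user="c##admin", password="secret", dsn="dbhost/cdb1")
cur = cdb.cursor()

# Provision a new pluggable database by cloning an existing one, then open it.
cur.execute("CREATE PLUGGABLE DATABASE sales_dev FROM sales_prod")
cur.execute("ALTER PLUGGABLE DATABASE sales_dev OPEN")
```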
The multitenant architecture of Oracle’s converged database also makes it easier to develop and deploy microservices by providing a pluggable database for each microservice. Each pluggable database is isolated, secured, and uses the exact data type or workload a particular microservice requires—even as the Ops team manages many databases as one through the container database.
The increased efficiency and simplified management of a unified data tier leveraging multitenant architecture increases developer productivity. Developers can focus on application development rather than data management tasks. In addition, the pluggable databases in the unified tier are easily accessed by data scientists and business analysts using their preferred tools and access methods to generate reports, perform analytics and build AI models, increasing data productivity.
MULTI-WORKLOAD: DOING MANY JOBS AT ONCE
Different kinds of database workloads require different kinds of software optimizations to remain performant at scale (Figure 4). For example, smart sensors taking frequent measurements need a database that can ingest a large number of new records extremely quickly. Oracle’s converged database supports this by letting an IoT app write its data directly into a memory buffer while, in a separate background process, the database commits those records to disk. This ability enables Oracle’s converged database to handle 25 million inserts per second on a two-socket server by eliminating the wait for data to be persisted to disk, which is a slower process.
But training machine-learning models is a very different kind of job. This involves extremely large numbers of relatively simple calculations which means a lot of CPU cycles. Oracle’s converged database includes hundreds of common machine-learning algorithms which have been modified to allow parallelization across many CPU cores, dramatically speeding up these kinds of calculations.
In some cases, a development team needs a single converged database to support multiple diverse workloads simultaneously. For example, most modern transactional applications need to run at least some analytics on operational data in real-time. Oracle’s converged database optimizes both of these workloads behind the scenes by representing the same data in both a row format (for fast transaction processing) and columnar format (for fast analytics) and keeping the two perfectly in sync as the data changes.
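As a sketch of how this dual-format approach is exposed to applications (the table name and configuration are hypothetical, and the In-Memory column store must already be licensed and sized by the DBA), the same operational table can be flagged for columnar population and then queried analytically without application changes:

```python
# Sketch only: the SALES table is hypothetical; assumes the In-Memory column
# store has been configured (INMEMORY_SIZE) by the DBA.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/pdb1")
cur = conn.cursor()

# Ask the database to also maintain a columnar in-memory copy of SALES;
# transactions continue writing the row format, analytics read the columns.
cur.execute("ALTER TABLE sales INMEMORY")

# The same operational table now serves a real-time analytic query.
cur.execute("""
    SELECT region, SUM(amount)
    FROM   sales
    WHERE  sale_date >= TRUNC(SYSDATE)
    GROUP  BY region
""")
print(cur.fetchall())
```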
Developer productivity improves when a core database engine fully supports different kinds of application workloads without requiring excessive customization or app tuning. Likewise, data productivity improves when that same core engine also fully supports different analytical workloads critical to data scientists and business analysts.
Oracle’s converged database lets DevOps teams turn these different optimizations on and off without changes to the application. This enables performance tuning as demand on the database grows and even as the kind of demand changes.
All of these software optimizations have implications for the physical computing resources required, including processors, memory, disk, and input/output between these different components. Many enterprises have built sophisticated combinations of blade servers and network attached storage to provide Oracle’s converged database with an operational environment to support its extraordinary abilities. But there’s a better way—Oracle Exadata.
Figure 5 (Oracle Exadata combines hardware and software innovations to provide unbeatable price-for-performance for all data workloads) highlights a scale-out, scale-up architecture not possible in software alone:

- **Optimal hardware configuration**
  - Database servers and storage servers: compute is separated from storage, allowing independent scaling.
  - Tiered caching in storage: persistent memory, flash memory, and disk tier data based on access frequency.
  - Direct memory access over the network: database servers have direct access to memory on storage servers, bypassing bottlenecks in typical network-attached storage.
- **Smart storage software**
  - Smart Scan: automatically offloads SQL processing to storage servers.
  - Automatic format optimization: stores data in hybrid columnar compression for analytics and in row format for transactions.
  - Automatic storage index: creates multiple indexes for each column, avoiding processing of unneeded data.
  - Automatic data tiering: prioritizes data by active use across flash memory, persistent memory, and disk.
ORACLE EXADATA
Oracle Exadata is a combination of hardware and software specifically designed for database workloads (Figure 5). Using an architecture that separates compute from storage, Oracle Exadata provides a modular scale-out/scale-up solution with the best price-for-performance in the industry for database workloads. The reason Oracle Exadata outperforms all other database architectures is Oracle’s co-engineering process which optimizes database software to squeeze every drop of performance from the latest hardware innovations incorporated into a known architecture.
Oracle Exadata X8M, which uses Intel® Optane™ Persistent Memory in its storage servers is a prime example of this process. Persistent memory is extremely fast, like normal dynamic memory, but persists data in the event of power loss, like disk. This creates the opportunity for radical increases in performance if only the database software can be modified to take advantage of it.
The average network latency of round trips between a database server and storage would negate the speed persistent memory can offer. Instead, Oracle Exadata X8M uses Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) to take advantage of data cached in memory on the storage servers. This is just one of thousands of hardware-software collaborations that make Oracle Exadata the most cost-effective platform for all database workloads.
Oracle’s converged database, running on Oracle Exadata infrastructure is available as a public cloud service in Oracle cloud, as Oracle Exadata Cloud Service, via Oracle Exadata Cloud@Customer and Oracle Dedicated Region, and on-premises as Oracle Exadata.
ORACLE AUTONOMOUS DATABASE: CONVERGED DATABASE AS A SERVICE
Oracle’s converged database achieves its ultimate expression in the Oracle Autonomous Database, which consists of Oracle’s converged database running on Oracle Exadata infrastructure, delivered as a fully managed cloud service. There’s no administration required because the Autonomous Database is:
- **Self-driving.** You tell the Autonomous Database the service level to achieve, and it handles the rest. The Autonomous Database automates the provisioning, securing, monitoring, backup, recovery, troubleshooting, and tuning of databases. This dramatically cuts mundane database maintenance work, reducing costs and freeing scarce administrator resources for higher-value tasks.
- **Self-securing.** The Autonomous Database is more secure than a traditional database deployment because it protects itself. This applies to defenses against both external and internal attacks.
- **Self-repairing.** The Autonomous Database is more reliable than a traditional database deployment. At startup, it automatically establishes a triple-mirrored scale-out configuration in one regional cloud datacenter, with an optional full standby copy in another region. The Autonomous Database automatically recovers from any physical failures, whether at the server or datacenter level. It has the ability to rewind data to a point in time in the past to back out user errors. By applying software updates in a rolling fashion across nodes of the cluster, it keeps the application online during updates of the database, clusterware, OS, VM, hypervisor, or firmware.
The Oracle Autonomous Database is available in several popular personalities to save time for developers, data scientists, and analysts who want to get to work on the applications, AI, and analytics they really care about. The family of Autonomous Database services includes:
- **Autonomous Transaction Processing (ATP)** simplifies database operations for OLTP and mixed workloads requiring real-time or batch analytics. ATP reduces runtime costs by up to 90% and provides unparalleled scale, performance, and security with embedded machine learning-based automation.
- **Autonomous Data Warehousing (ADW)** is optimized for analytical processing. It automatically scales compute and storage, delivers fast query performance, and allows querying data beyond the data warehouse in other cloud services like object storage and Kafka streams.
- **Autonomous JSON Database (AJD)** is tailored to support JSON-centric app development. It provides native JSON management with advanced indexing and transparent scale-out, as well as full ACID transactions over JSON documents. All these capabilities are available through REST APIs.
**ORACLE’S CONVERGED DATABASE DELIVERS THE UNIFIED DATA TIER**
Oracle’s converged database is available in a wide array of deployment scenarios across public cloud, local cloud, and on-premises. By relying on the same core engine delivered through different mechanisms in different operating environments, companies maintain their power of choice while creating a unified data tier (Figure 6). With this unified data tier, enterprises can:
- **Build both microservices and monolithic apps.** Oracle’s converged database is a new way to support microservices applications. Each service has its own database with the model and access method it requires. Each of these databases can scale up or out as needed, using sophisticated methods like Real Application Clusters (RAC) or relatively simple tactics like sharding. And yet, because these databases are tenants of a supervising container database, they have one security model, patching, and upgrade path. Meanwhile, other instances of this same engine can do what they’ve always done—support the building and running of traditional enterprise applications.
- **Run workloads in the public cloud, local cloud, or on-premises.** Oracle’s converged database, running on Oracle Exadata infrastructure is the only way to have the exact same data tier architecture in all three computing environments. In the Oracle Cloud, this architecture is available as the Oracle Autonomous Database and Oracle Exadata Cloud Service. In local cloud, it’s available as Oracle Exadata Cloud@Customer and part of the new Dedicated Cloud Region offering. And on-premises, it’s available as Oracle Exadata.
- **Do analytics and data science on operational data as well as data outside the database.** The multi-workload capabilities of Oracle’s converged database make it possible to run analytical queries on data in operational databases supporting production applications. This provides real-time analytics on production data. In addition, instances of Autonomous Data Warehouse let analysts ask questions of data residing in object storage on the Oracle Cloud or other clouds.
- **Manage many databases as one—or not at all.** When every database instance your ops team has to manage is a different configuration of the same core engine, fleet management becomes much simpler. The entire fleet follows the same security protocols, patching methods, and upgrade path. With the Oracle Autonomous Database, not only does the entire fleet follow the same management methods, it does so on its own.
CONCLUSION
Oracle’s converged database is a radical re-thinking of the Oracle Database. This multi-model, multitenant, multi-workload database allows enterprises to satisfy two seemingly opposite goals: give developers the right tools for each job to make them more productive, and IT a unified data tier that makes reuse of data and overall management simpler and easier. What this means for enterprises is a simplified foundation for digital transformation. With Oracle’s converged database, enterprises can both digitize more business processes more quickly and inject more data into more decision points more confidently.
APPLICATION OF MICROCODE TO IMPROVE PROGRAM PERFORMANCE
An Honors Thesis (ID 499)
by
Dawn D. Greenwood
Thesis Director
Clinton P. Fuelling
Ball State University
Muncie, Indiana
May 18, 1981
Note: Spring 1981
CONTENTS
INTRODUCTION
MICROPROGRAMMING OPERATING SYSTEMS
ARCHITECTURE REDEFINITION VIA MICROPROGRAMMING
    Manual Tuning
    Automated Process
        Frequency of Sequential Occurrence
        Frequency of Execution
        Combination Methods
OPTIMIZATION
    Vertical Optimization
        Removal of Nonessential Operations
            Redundant Actions
            Negated Actions
        Code Motion
        Code Consolidation
    Horizontal Optimization
        Local Compaction
        Compaction Algorithms
            Linear Algorithm
            Critical Path Algorithm
            Branch and Bound Algorithm
            List Scheduling
    Global Optimization
    Register Allocation
INTRODUCTION
The use of microcoding to improve program performance has introduced new areas of study in recent years. Microcoding operating systems and architecture redefinition via microprogramming are two such areas. High-level microprogramming languages are also commanding great interest and research. All these techniques require optimization algorithms to assure optimal execution times. These areas will be considered in this paper.
Due to the difficulty in locating resources, many articles were not available for review. In such cases, care has been taken to guide the reader to the article by referencing it where appropriate. A number of books and articles in the bibliography offer a basic look at microprogramming in general as well as the subjects covered in this paper [AGRA74, AGRA76, BROA74, CHUR70, DAVI78, FULL76, HIGB78, HINS72, JONE76a, JONE76b, KRAF81, LEVE77, RAUS80, ROSI74, SALI76].
MICROPROGRAMMING OPERATING SYSTEMS
Since "users seldom (if ever) perform a computation without assistance from the operating system," [DENN71] any improvements in execution rates of most frequently used segments should reflect favorably on application programs. Microcoding primitives used throughout the operating system is one approach to improving operating system time. Larger segments can also be selected. Choosing the most appropriate segments is important to performance improvements [RAUS80, BROW77]. In [BROW76] is a list of fourteen functions which should be
considered for microcoding. These functions tend to be too exact for hardware, relatively consistent, and noticeably affected by software overhead. Several applications in the area of software have been discussed [HUBE70, BELG76, DHAU77, CHAT76, LUND76, DENN71, STOC77, SOCK75].
ARCHITECTURE REDEFINITION VIA MICROPROGRAMMING [RAUS76]
Called "tuning" by [ABDA74, ABDA73], the central concept is to change a system's architecture to more efficiently solve a particular problem. Architecture in this sense refers to "the attributes of a computer as seen by the programmer" [RAUS76]. This includes the memory and particularly the machine instruction set which the programmer interacts with. The microprograms in control store which interpret the machine instructions define a particular architecture. By modifying these microprograms, the architecture changes accordingly.
Such modification is necessary for several reasons. Typical instruction set design has followed hardware constraints instead of considering the problems to be solved and their structure. The results are instruction sets which are awkward and inefficient [RAUS76, WADE75]. In addition, the general-purpose architecture was conceived through a series of compromises that permit it to perform the widest range of services. Like the jack-of-all-trades, it is frequently master of none and handles all its functions in a suboptimal fashion [HUSS70].
To counteract these compromises and tune the instruction set, carefully chosen segments of machine code are microcoded, placed in control
storage and referenced by one new machine language instruction. The major advantage of this approach is the decrease in machine instruction fetch-and-decode time. Considering the average (FORTRAN) program spends forty percent of its execution time on this function, it is estimated that savings can easily be twenty percent [RAUS76]. Additionally, the microcode can be optimized. It is important to stress that optimization strategies are vital to architecture redefinition success [ABDA74]. The final section covers optimization strategies.
Isolation of the appropriate segments can be a manual process based on environmentally determined areas of processing inefficiency. An automated process can be based on the frequency of sequential occurrence in an environment, the frequency of execution, or a combination thereof. [RAUS76]
MANUAL TUNING
Manual tuning is the creation of specialized microcoded instructions when the code is written by a programmer directly or in some microprogramming language. [ABDA74] applied the term to tuning efforts prior to when writable control store made automated tuning feasible. It is no less applicable to vendor packages and installation-written code designed to "speed up" a particular feature or activity.
Those activities most likely to benefit by microprogramming are CPU bound, use a number of intermediate results, are highly repetitive, or are not easily handled by existing machine language instructions [CLAP72]. Such features include multiply routines, square root algorithms, matrix operations, table searches, array address calculation,
number generation, stacking routines, and tree searches [COOK70, HUSS70, TUCK71, CLAP72, REIG72, REIG73, PARK73, SNYD75, HABI74, TAFR75].
In addition, applying manual tuning concepts when an instruction set is being designed will assure an application oriented architecture [WADE72, WADE73, HARR73, WADE75]. [WADE75] capitalized on the similarity of languages' constructs to design a "general-purpose" architecture capable of supporting a number of widely-accepted programming languages. Wade's basic instruction set included simple arithmetic, compare, branch, logic, and load/store instructions. In addition to this "kernel" [RAUS76] language, Wade designed a set of "super instructions" to efficiently execute five common high-level constructs. These operations include arrays and array indexing, repetition loops, block structure housekeeping, input/output editing, and character string manipulation. Each feature will be examined briefly, with somewhat more detail on array indexing to allow for better understanding of the process of developing application oriented instructions.
ARRAYS AND ARRAY INDEXING
Wade designed a new machine language instruction, ADGEN. This instruction invokes a microprogram, specially written and placed in control storage by Wade, that calculates the address of an array element and uses that location in one of several ways. A high-level language version of the address calculation algorithm is given below.
    sum = 0
    for i = 1 step 1 until n do
    begin
        sum = sum * (upper-bound[i] - lower-bound[i] + 1)
        sum = sum + (subscript[i] - lower-bound[i])
    end
    sum = sum * element-size
    computed-address = array-address + sum
The microcoded algorithm uses information stored in the instruction and the array descriptor. These formats are given below.
ARRAY DESCRIPTOR FORMAT
n | array | lower-bound-1 | upper-bound-1 | ... | lower-bound-n | upper-bound-n | element size (in bytes)
where:
n = number of dimensions of array
array = address of array in memory
INSTRUCTION FORMAT
ADGEN | address of array descriptor | subscript-1 | ... | subscript-n | flags | index register

where:

ADGEN = instruction operation code

flags = indicate the action to be performed with the location

index register = receives the calculated address
The algorithm is designed for arrays stored in row order internally. For those stored by columns (FORTRAN), the subscript fields are interchanged.
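For readers who prefer an executable form, the following Python sketch follows the address computation the ADGEN microprogram performs, as given in the algorithm above; it is illustrative only and is not Wade's actual microcode.

```python
def adgen_address(array_address, lower, upper, subscript, element_size,
                  row_major=True):
    """Byte address of an array element, following the ADGEN algorithm above.
    lower, upper, and subscript are per-dimension lists.  For column-major
    (FORTRAN) storage the subscript fields are interchanged, i.e. the
    dimensions are processed in reverse order."""
    dims = list(range(len(lower)))
    order = dims if row_major else list(reversed(dims))
    total = 0
    for i in order:
        total = total * (upper[i] - lower[i] + 1)
        total = total + (subscript[i] - lower[i])
    return array_address + total * element_size

# A 3 x 4 array of 4-byte elements based at address 1000:
# element [2][3] lies at 1000 + (2 * 4 + 3) * 4 = 1044.
assert adgen_address(1000, [0, 0], [2, 3], [2, 3], 4) == 1044
```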
Repetition Loops (DO- or FOR-loops)
Observing that such loops can vary in intricacy, Wade designed four different instructions. Two instructions handle the simpler cases requiring little more than an increment-compare-branch sequence, while the remaining instructions were used for the more complicated loops.
Block Structure
Block structures generate overhead due to the housekeeping involved in the dynamic manipulation of storage necessary for nested BEGIN blocks and procedure CALL/RETURN statements. When handled by inappropriate instruction sets, that overhead becomes excessive. To implement these
functions in microcode, BEGIN Block and END Block instructions, two procedure calls, and a single RETURN statement were designed.
**Input/Output Editing**
I/O editing refers to the conversion of decimal character numbers into binary form for internal storage and manipulation and the conversion back to decimal numbers for external use. The instruction CONVERT TO DECIMAL converts one binary number to its decimal representation. Input instructions were provided for conversion to binary form.
**Character String Manipulation**
Wade's goal was to provide for the majority of the manipulation features of PL/1. A string descriptor similar to that of arrays provided direct and indirect addressing capabilities. This simplifies usage of varying-length strings. MOVE and COMPARE statements were designed to best handle these features.
**PERFORMANCE RESULTS**
Wade's improved machine language instruction set was implemented and its performance compared to that of the IBM 370 architecture. A number of PL/1 and FORTRAN programs were executed on both machines. The following chart lists the execution time improvements realized by the high-level language oriented architecture over the IBM general purpose architecture.
| Program Function | Percent Improvement |
| --- | --- |
| Finding prime numbers by sieve of Eratosthenes | 32.2% |
| Generates random sentences from English grammar | 64.2% |
| Convert arithmetic expression from infix to postfix notation | 87.4% |
| Adds record to linked list | 54.2%, 27.9% |
| Solving differential equation by fourth-order Runge-Kutta method | 51.7%, 50.5% |
| Multiplication of two matrices | 72.3%, 31.4% |
| Evaluation of a determinant | 85.9% |
In general, those programs performing non-numeric processing showed greater performance improvements. This is in keeping with the findings of Hans Jeans [JEAN65]. Jeans found that arithmetic operations were hardware bound and that the decrease in instruction fetch and execute time was less significant than in logical operations.
With the increased use of the writable control store, thought turned to automating the manual process of tuning. In this way each group of similar programs, or each program, can have its own machine architecture with instructions chosen to assure its optimal execution [ELAY77]. This architecture is dynamically loaded into control store prior to program execution. When a unique architecture is created for each program, the process is called dynamic problem oriented redefinition of computer architecture via microprogramming [RAUS76, RAUS75, RAUS78, RAUS80]. When similar programs are grouped together and an improved instruction set created for each group, this is called heuristic tuning [ABDA74].
Several methodologies exist for determining which groups of instructions should be replaced by new machine instructions to yield the greatest execution gains with minimal control store requirements. (Because of its expense, control store must be considered a limited resource. If this were not the case, entire programs could be microcoded.) These include the frequency of sequential occurrence and the frequency of execution. Heuristic tuning [ABDA74] and [RAUS76]'s dynamic redefinition of architecture enlist both methods in an attempt to achieve optimal performance improvement.
Frequency of Sequential Occurrence
This method is based on the observation that instructions characteristically occur in identical sequences. This is found not only on the machine instruction level, but also on the intermediate level generated
by high-level language compilers. The intermediate language code is analyzed to locate such sequences and to determine how frequently they occur. From this information, sequences which would yield the greatest execution time savings if microcoded are selected for new machine instructions. The difficulty here is the length of sequences to be chosen. Longer sequences occur less frequently, while shorter sequences occur more often.
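A minimal sketch of this analysis, counting how often each fixed-length opcode sequence occurs in a stream of intermediate code, might look as follows; the instruction stream and sequence length are hypothetical, and a real tool would also normalize operands and weigh the cost of each sequence.

```python
from collections import Counter

def count_sequences(instructions, length):
    """Count contiguous opcode sequences of a given length in a stream of
    (opcode, operand) intermediate instructions."""
    opcodes = [op for op, _ in instructions]
    return Counter(zip(*(opcodes[i:] for i in range(length))))

# Hypothetical intermediate code.
code = [("LOAD", "a"), ("ADD", "b"), ("STORE", "c"),
        ("LOAD", "a"), ("ADD", "d"), ("STORE", "e"),
        ("LOAD", "a"), ("ADD", "b"), ("STORE", "c")]
print(count_sequences(code, 3).most_common(2))
# The LOAD/ADD/STORE triple is the most frequent candidate for a new
# microcoded instruction; longer sequences occur less often.
```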
While execution time improvements are achieved through this process, they are not optimal. Typically, no attempt is made to determine relationships between sequences. Additionally, run time information is not available. There is no way to determine if some sequences are executed repeatedly as in a loop. [RAUS76]
Frequency of Execution
This method uses the execution behavior of a program to choose the instructions to be microcoded. The program is analyzed into program blocks. These are straight-line sections of code such that if the first operation in the block is performed, all succeeding code is executed. By knowing the number of times the block is executed and the execution time improvement of a single microprogrammed block over that of the corresponding machine instructions, total execution time savings can be calculated. Those blocks with the greatest savings are chosen as new machine language instructions.
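The calculation can be sketched as follows; the profile numbers, block sizes, and the greedy treatment of control store as a fixed budget are hypothetical illustrations rather than any published algorithm.

```python
def select_blocks(blocks, control_store_words):
    """Rank program blocks by projected execution-time saving and pick them
    greedily within a control-store budget.  Each block is
    (name, executions, machine_time, microcode_time, words_needed), with
    per-execution times in arbitrary units."""
    scored = sorted(((execs * (t_old - t_new), words, name)
                     for name, execs, t_old, t_new, words in blocks),
                    reverse=True)
    chosen, used = [], 0
    for saving, words, name in scored:
        if saving > 0 and used + words <= control_store_words:
            chosen.append((name, saving))
            used += words
    return chosen

# Hypothetical profile data: name, times executed, old/new time, size in words.
profile = [("inner_loop", 50000, 12.0, 4.0, 64),
           ("table_search", 8000, 30.0, 9.0, 96),
           ("init_code", 1, 500.0, 100.0, 48)]
print(select_blocks(profile, 128))   # inner_loop and init_code fit the budget
```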
Combination Methods
Rauscher and Agrawala [RAUS76, RAUS75, RAUS78] devised a dynamic redefinition scheme combining the previous two methods. Program sequences are considered for microcoding on the basis of both their frequency of occurrence and their frequency of execution.
The intermediate representation from the high-level language compiler is analyzed to determine the different instruction sequences and where they occur in the program. The expected savings from microcoding each segment are calculated. Estimates are made on how often each segment will be executed. With this additional information, total projected time saved can be calculated. Those instruction sequences with the highest potential time savings are microcoded as new instructions. Experimentally, execution time improvements were found to be between twenty-five and fifty percent.
In heuristic tuning [ABDA74], programs are classified according to their application, language and/or special requirements. A trace file for each application class is built by monitoring the execution of numerous programs from that class. This data may be collected through hardware, software or microprogrammed means. The trace file data is then processed to obtain statistics on instruction execution frequencies, instruction dependencies and similar information. These statistics make it possible for a synthesis algorithm to modify existing microcode or create new microprograms which will allow optimal execution of class programs.
The first step performed by the algorithm is the location of program loops using trace file information. Within the most frequently executed loop, an optimization step takes place. In it, arrangements are made so that the most frequently accessed data is preloaded into internal registers. Next, the instructions within the loop are decomposed into micro-operations, optimized, and loaded into control store. The trace file is updated and an estimate of the performance improvement is made. This algorithm repeats, each time choosing the loop to be tuned from those remaining. It terminates when the algorithm determines that any further execution improvements could not be greater than the overhead they generate.
If the algorithm determines there are either no loops or very long loops, synthesis of new instructions can still take place. The trace file supplies the most frequently occurring code sequences and an algorithm similar to the one used above combines the dependent code into new instructions.
The new instruction set is tested to determine that it is functionally equivalent to the general instruction set. Once its results can be trusted, the new architecture is ready to be incorporated into the existing system. Several software modifications must be made.
Two programs were heuristically tuned and their original execution compared with execution using the synthesized instructions. A data movement example resulted in an 87.4% improvement in execution time. A Fibonacci number generation program achieved a 77% improvement [ABDA74]. It should be emphasized that these examples reflect specific algorithms.
More complicated application programs would probably achieve somewhat less of an improvement.
OPTIMIZATION
Optimization has several meanings depending on what concepts the particular author wants to emphasize. While optimization can refer to decreasing the number of bits in a microinstruction, this material will concentrate on methods of reducing the execution time by decreasing the number of microinstructions in a microprogram [MALL78]. There are two categories of algorithms which optimize microcode. Vertical optimization deals with decreasing the number of microoperations in a sequence of microoperations. Horizontal or microcode optimization, also known as compaction, is the process of creating microinstructions in a machine whose control word allows a great deal of parallelism. This means a large number of hardware devices can perform concurrently, and therefore a number of microoperations can execute simultaneously. Compaction can be performed locally within straight-line sequences or globally throughout the program. Additionally, register allocation is discussed because of its relationship to program performance.
Vertical optimization
Until recently, microcode was painstakingly optimized by hand. When automation of the process was considered, the natural route was to modify processes that were used in standard compilers on predominantly sequential code [ALLE69, ALLE72]. Kleir and Ramamoorthy [KLEI71] were
the first to consider vertical optimization algorithms for microcode as an alternative to hand optimization.
One method of optimization borrowed from traditional compilers is removal of nonessential operations such as redundant actions and negated actions. Redundant actions are those available from a previous operation which used identical input and identical destination of output. Additionally, the output destination must have remained unchanged, and the execution of one action must guarantee execution of the other to assure consistent performance. Negated actions are those from which the output is never used.
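The removal of negated actions, for instance, can be sketched as a single backward liveness pass over a straight-line sequence. The micro-operation representation and register names below are hypothetical; eliminating redundant actions would additionally need a forward pass that tracks available results.

```python
def remove_negated_actions(ops, live_out):
    """Drop micro-operations whose output is never used (negated actions) from
    a straight-line sequence.  Each op is (dest, sources); live_out is the set
    of registers still needed after the sequence."""
    live, kept = set(live_out), []
    for dest, sources in reversed(ops):
        if dest in live:
            kept.append((dest, sources))
            live.discard(dest)       # this op now defines dest
            live.update(sources)     # ...and needs its inputs
        # otherwise the result is never read, so the op is dropped
    kept.reverse()
    return kept

# Hypothetical micro-ops: t2 is computed but never used afterwards.
ops = [("t1", ["a", "b"]), ("t2", ["t1", "c"]), ("r", ["t1", "d"])]
print(remove_negated_actions(ops, live_out={"r"}))
# -> [('t1', ['a', 'b']), ('r', ['t1', 'd'])]
```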
Code motion is the act of decreasing the number of operations performed in those segments of the program most frequently executed. Using program activity as a guide, dynamic analysis rates areas by their execution frequency to determine proper code movement. Unfortunately, dynamic analysis is prone to suffer from false assumptions and extensive time usage. Code motion without the benefit of execution statistics is called static analysis. It is assumed that the level of nesting indicates frequency of execution. Actions are migrated from inner to outer segments [KLEI71].
Code consolidation is another possible vertical optimization. It is feasible on certain machines that a sequence of simpler instructions can be replaced by one complex instruction. This concept has been used in the PL/MP compiler [TAN78].
The PL/MP compiler takes a high-level language and by applying both machine dependent and machine independent optimizations, produces
microinstructions. Similar compilers and the corresponding high-level languages are discussed in a number of articles [ECKH71, TIRR73, BLAI74, BOND74, CLAR72, DASG78b, DASG80, LEWI79, DEWI76a, SCHR74, FRIE76, DEWI76b, LLOY74b, RAMA74, TSUC72].
The intermediate representation of the compiler is a stream of primitive register-to-register operations. After other vertical optimizations are applied, the code is searched for sequences which satisfy predefined substitution rules. As an example:
- an add-immediate instruction: ADDIM b, @b, 100
- a load indirect: LOADI y, b
- and an add instruction: ADD z, x, y

...can be performed by one instruction

- add-indirect-with-offset: ADDIO z, x, @b, 100

The substitution would be made only if the first three instructions could be deleted. If a subsequent operation uses y, this is not the case.
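A sketch of such a substitution rule, modeled loosely on the ADDIM/LOADI/ADD example above, could look as follows. The tuple encoding and the used_later test are hypothetical and are not the PL/MP compiler's actual representation.

```python
def consolidate(ops, used_later):
    """Peephole pass modeled on the ADDIM/LOADI/ADD -> ADDIO rule above.
    Each op is a tuple (mnemonic, dest, *operands); used_later(reg, i)
    reports whether a register is read by any op after index i."""
    out, i = [], 0
    while i < len(ops):
        window = ops[i:i + 3]
        if (len(window) == 3
                and window[0][0] == "ADDIM" and window[1][0] == "LOADI"
                and window[2][0] == "ADD"
                and window[1][2] == window[0][1]           # LOADI reads ADDIM's dest
                and window[2][3] == window[1][1]           # ADD reads LOADI's dest
                and not used_later(window[0][1], i + 2)    # b dead afterwards
                and not used_later(window[1][1], i + 2)):  # y dead afterwards
            _, b, base, offset = window[0]
            _, z, x, _ = window[2]
            out.append(("ADDIO", z, x, base, offset))
            i += 3
        else:
            out.append(ops[i])
            i += 1
    return out

# Hypothetical sequence matching the rule: z = x + memory[@b + 100].
ops = [("ADDIM", "b", "@b", 100), ("LOADI", "y", "b"), ("ADD", "z", "x", "y")]
print(consolidate(ops, used_later=lambda reg, i: False))
# -> [('ADDIO', 'z', 'x', '@b', 100)]
```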
Horizontal Optimization
Local Compaction
Local compaction has been the subject of much recent study [YAU74, AGER76, LLOY74a, RAMA73, RAMA74, DASG76a, DASG76b, JACK74, TSUC74, TSUC76, DEWI76a, TABA74, TOKO77, MALL78, WOOD78, WOOD79, FISH79].
Because of the stringent performance expectations placed on microcode, earlier study was directed toward producing optimal code. The amount of work this requires is prohibitive since all possible mappings of microoperations to microinstructions are found [ASTO71]. The time required for such exhaustive work increases exponentially with the number of microoperations [LAND80]. Methods have been developed more recently [TOKO77, MALL78, WOOD79, FISH79] which result in optimal or near-optimal code in polynomial or linear time. This breakthrough makes horizontal microcode compilers feasible.
Local compaction algorithms typically require two inputs: a straight-line microcode section which contains no internal branches or entry points; and some representation of the relationship between the microoperations. To maintain data integrity, it is vital that the original semantics of the sequential microcode be preserved. This obviously requires that certain microoperations be executed in a particular order. Such microoperations are said to have a data interaction. Given two sequential microoperations a and b, the following three conditions define data interaction:
1. b requires an output resource of a as an input
2. a requires an input source which b modifies
3. a and b modify the same output resource [LAND80].
These interactions must be analyzed and represented so that the information can be used to create microinstructions. Landskov [LAND80] presented a general algorithm to record these data interactions in
graphical form. Each microoperation is added to the graph in a sequential order and is represented by a node. The new node must be connected to previously placed nodes to indicate the data interactions. It is linked only to those nodes on which it is directly data dependent. This implies that nodes further up the path from the lowest node having a data interaction with the new node are not also linked to it. The search first checks the last node on each path (the leaves) for data interaction. If the interaction is present the nodes are linked. Otherwise, testing takes place on each preceding node working up from the leaves until the test is positive or a node is reached which is already indirectly linked with the new node. Once all microoperations are added to the graph, the resulting tree indicates the sequencing necessary to assure semantical equivalency.
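The three interaction conditions and the graph construction can be sketched as follows. For brevity this links every pair of interacting micro-operations rather than reproducing Landskov's leaf-first search, which skips edges already implied transitively; the ordering constraints expressed are the same. The set-based operation encoding is hypothetical.

```python
def interacts(a, b):
    """True if micro-ops a and b have a data interaction: b reads a's output,
    b writes something a reads, or both write the same resource.
    Each op is a dict with 'reads' and 'writes' sets."""
    return bool(a["writes"] & b["reads"]
                or a["reads"] & b["writes"]
                or a["writes"] & b["writes"])

def dependency_graph(ops):
    """Build predecessor sets for a straight-line micro-op sequence by testing
    every earlier op for a data interaction with the new one."""
    preds = {i: set() for i in range(len(ops))}
    for j, op in enumerate(ops):
        for i in range(j):
            if interacts(ops[i], op):
                preds[j].add(i)
    return preds

# Hypothetical micro-ops: op 2 must follow both 0 (it reads r1) and 1 (r3).
ops = [{"reads": {"a"}, "writes": {"r1"}},
       {"reads": {"b"}, "writes": {"r3"}},
       {"reads": {"r1", "r3"}, "writes": {"r3"}}]
print(dependency_graph(ops))   # {0: set(), 1: set(), 2: {0, 1}}
```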
Compaction Algorithms
Landskov [LAND80] suggests four categories for compaction algorithms. These include linear analysis, critical path, branch and bound, and list scheduling. Each algorithm consists primarily of two parts. First, a data dependency analysis orders the microoperations so as to assure the semantics are not compromised when microinstructions are created. Secondly, a conflict analysis monitors the assignment of microoperations to particular microinstructions to assure there are no hardware conflicts.
Linear Algorithm
The data dependency analysis begins with a list of microinstructions (originally empty). Proceeding sequentially, microoperations are selected and the list of microinstructions is searched from bottom to top to determine the rise limit. This is the first microinstruction in which the microoperation can be included given the data dependencies.
The microinstruction list is searched top-to-bottom in conflict analysis. Beginning at the rise limit, search proceeds downward until a microinstruction is found which can include the microoperation without hardware conflict. If a microoperation having a rise limit cannot be included in an existing microinstruction, a new one is created at the bottom of the list. If the microoperation has no rise limit, it is included in a new microinstruction at the beginning of the list. The linear algorithm does not guarantee optimal code. Dasgupta's work has produced an algorithm that is the model for Landskov's [DASG76, DASG77, BARN78, DASG78a, DASG79].
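A simplified first-fit reading of the linear algorithm might be sketched as follows. The resource model, with a single functional-unit name per micro-operation, is a hypothetical stand-in for real conflict analysis, and micro-operations with no predecessors are simply placed first-fit from the top here.

```python
def linear_compaction(ops, preds):
    """First-fit sketch of the linear compaction algorithm.  Each op is
    (name, resource); preds[i] lists the ops that i data-depends on.  An op's
    rise limit is the slot just below its last-placed predecessor; it is then
    placed in the first slot at or below that limit with no resource conflict."""
    slots, resources, placed = [], [], {}   # parallel lists of microinstructions
    for i, (name, res) in enumerate(ops):
        rise = max((placed[p] + 1 for p in preds.get(i, [])), default=0)
        for s in range(rise, len(slots)):
            if res not in resources[s]:      # no hardware conflict
                slots[s].append(name)
                resources[s].add(res)
                placed[i] = s
                break
        else:                                # no existing slot fits
            slots.append([name])
            resources.append({res})
            placed[i] = len(slots) - 1
    return slots

# Hypothetical straight-line code: ops 0 and 1 are independent and use
# different units, so they share a microinstruction; op 2 needs both results.
ops = [("load_a", "alu"), ("load_b", "shifter"), ("add", "alu")]
print(linear_compaction(ops, {2: [0, 1]}))
# -> [['load_a', 'load_b'], ['add']]
```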
Critical Path Algorithm
An early partition is created as the first step. By working down through the microoperations using the data dependency graph, the earliest that each microoperation can be executed, assuming there are no hardware conflicts, is determined. The total number of steps or frames needed to fit in all microoperations under these conditions is the minimum number of microinstructions required for that sequence. Next, working upward, microoperations are placed in the last
possible frame in which they can be executed so that the minimum number of frames determined earlier can be maintained. Critical microoperations are those with the same early and late timing and constitute the critical partition. The critical partition is then modified by dividing any frame in which there are hardware conflicts into two (or more) frames.
To form the final microinstructions, the non-critical microoperations are added to the modified critical partition. Each such microoperation can be placed anywhere from its frame in the early partition to its frame in the late partition, inclusive. If a hardware conflict does not allow a microoperation to be included within these frames, a new microinstruction is inserted just following the last partition position. This algorithm could result in many more microinstructions than is optimal. Its weakness is that adjacent microinstructions which could be combined are not combined if they lie in different frames. This algorithm was apparently derived from critical path processor scheduling [RAMA79] by Ramamoorthy and Tsuchiya [RAMA74].
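As an illustration of the partitioning step only (hardware conflicts and the final packing are omitted), a small sketch of the early/late frame computation might look as follows, again using the preds sets from the dependency-graph sketch.

```python
# Sketch of the early/late partitioning used by the critical-path method,
# ignoring hardware conflicts.  Frames are numbered from 0.

def early_late_frames(ops):
    early = {}
    for op in ops:                         # ops arrive in sequential order
        early[op] = 1 + max((early[p] for p in op.preds), default=-1)
    n_frames = 1 + max(early.values())     # minimum microinstruction count

    succs = {op: [] for op in ops}
    for op in ops:
        for p in op.preds:
            succs[p].append(op)

    late = {}
    for op in reversed(ops):               # work upward from the last ops
        late[op] = min((late[s] - 1 for s in succs[op]), default=n_frames - 1)

    critical = [op for op in ops if early[op] == late[op]]
    return early, late, critical
```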
Branch and Bound Algorithm
Using the data dependency graph, a data available set (Dset) is built which contains those microoperations which have all of their directly data dependent microoperations allocated to a microinstruction. Obviously, the original Dset consists of those microoperations with no parent nodes in the graph. Using the Dset, microinstructions are formed such that every possible member of the Dset is included. This is a
complete instruction and the collection of these is used to build a tree whose nodes correspond to the microinstructions. In BAB exhaustive, a complete tree is built whose paths represent every microinstruction ordering possible. From this tree, the path of optimal length is found and the corresponding complete instructions are the shortest possible microcode. [LAND80] and [MALL78] explain this algorithm in great detail. This microcode compaction algorithm was originally presented by Yau, Schowe, and Tsuchiya [YAU74]. Additionally, different heuristics can be applied to the branch and bound algorithm to decrease the number of possible paths and to cut execution time dramatically while still achieving near-optimal results [MALL78].
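The following is a simplified, exhaustive sketch of the BAB idea: the Dset is recomputed after each placement, every maximal conflict-free subset of the Dset is treated as a candidate complete instruction, and the recursion keeps the shortest resulting sequence. It reuses the MicroOp conventions from the earlier sketches and is exponential by design, as the exhaustive variant is.

```python
# Simplified exhaustive branch-and-bound sketch.  `dset` returns the
# data-available set; `complete_instructions` enumerates the maximal
# conflict-free subsets of it; the recursion returns a shortest schedule.

from itertools import combinations

def dset(ops, placed):
    return [op for op in ops if op not in placed and op.preds <= placed]

def complete_instructions(avail):
    """All maximal subsets of `avail` whose members need distinct units."""
    result = []
    for r in range(len(avail), 0, -1):
        for combo in combinations(avail, r):
            units = [op.unit for op in combo]
            if len(units) == len(set(units)):              # no hardware clash
                if not any(set(combo) < set(c) for c in result):
                    result.append(list(combo))
    return result

def bab_compact(ops, placed=frozenset()):
    """Shortest list of microinstructions covering the unplaced operations."""
    if len(placed) == len(ops):
        return []
    best = None
    for instr in complete_instructions(dset(ops, placed)):
        candidate = [instr] + bab_compact(ops, placed | set(instr))
        if best is None or len(candidate) < len(best):
            best = candidate
    return best
```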
List Scheduling
In list scheduling algorithms, only a single branch of a tree is formed by choosing the "best" complete instruction from each Dset. "Best" is determined by a weighting function. The choice of function is important to achieving optimal microcode [FISH79]. One possible weight for a microoperation is the number of descendants that microoperation has in the data dependency graph [WOOD78]. Thus when choosing a microoperation from the Dset, the order is from highest to lowest weight. Each microoperation is added to the microinstruction unless a hardware conflict occurs. When all Dset members have been considered, a new Dset is created and a new microinstruction is started.
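A compact sketch of this scheme, using the descendant count as the weight and reusing the dset helper from the branch-and-bound sketch, might look like this:

```python
# Sketch of list scheduling with a descendant-count weight ([WOOD78]).
# Reuses `dset` from the branch-and-bound sketch and the `preds`/`unit`
# conventions from the earlier ones.

def weights_by_descendants(ops):
    succs = {op: [] for op in ops}
    for op in ops:
        for p in op.preds:
            succs[p].append(op)
    weight = {}
    for op in ops:
        seen, stack = set(), list(succs[op])
        while stack:
            cur = stack.pop()
            if cur not in seen:
                seen.add(cur)
                stack.extend(succs[cur])
        weight[op] = len(seen)
    return weight

def list_schedule(ops):
    weight = weights_by_descendants(ops)
    placed, schedule = set(), []
    while len(placed) < len(ops):
        avail = sorted(dset(ops, placed), key=lambda o: -weight[o])
        instr, used_units = [], set()
        for op in avail:                      # highest weight first
            if op.unit not in used_units:     # skip on hardware conflict
                instr.append(op)
                used_units.add(op.unit)
        schedule.append(instr)
        placed |= set(instr)
    return schedule
```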
Global Optimization
Unlike local optimization, global optimization can look beyond segments to include loops and recursive subroutines. Its primary goal is to eliminate the NOP's positioned within microinstructions in the interest of proper timing of operations [LAND80]. Global optimization is an area where little research has been done until very recently. [WOOD79, DASG79, MALL78, FISH79, TOK078] deal with this subject.
[TOK078] has developed a global routine based on a microoperation model called a microtemplate. This microtemplate is two-dimensional, representing both timing and machine resources. [TOK078] uses an optimization algorithm which seeks primarily to eliminate, by consolidation, NOP's positioned at the beginning and/or end of sequential segments, or entire redundant microtemplates, moving code in either an upward or downward direction. When compared to hand optimized code, the globally optimized code displayed an average improvement of 5.4%. Similar approaches used by [WOOD79] and [FISH79] treat a locally compacted sequence as a primitive in a larger sequence.
Register Allocation
Computers designed for efficient microprogramming typically have a large number of very fast registers. Ideally, there are enough registers so that each variable can be permanently assigned to a register during program execution. When there are too few registers, variables must sometimes be stored in memory. This means a store-and-load sequence is often required when a variable is brought into a register.
This additional overhead has been shown to affect performance. Liu [LIU75] found that having sufficient registers could improve microprogram execution time by thirty-three percent over the time with no registers. Obviously, it is important to determine which variables are most frequently referenced and to assign these variables to registers first. It is also important to have an efficient reallocation algorithm to keep reallocations to a minimum [DEWI76, TAN77].
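As a toy illustration of the frequency-first policy (not a full allocator with reallocation), consider:

```python
# Toy sketch of frequency-first register assignment: the most referenced
# variables receive the scarce fast registers, the rest stay in memory.

from collections import Counter

def assign_registers(references, n_registers):
    """`references` is the sequence of variable names in order of use."""
    by_freq = [var for var, _ in Counter(references).most_common()]
    in_register = set(by_freq[:n_registers])
    spilled = set(by_freq[n_registers:])   # loaded/stored on each use
    return in_register, spilled

# assign_registers(["a", "b", "a", "c", "a", "b"], 2) -> ({"a", "b"}, {"c"})
```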
With memory costs decreasing, manufacturers are tending to include even more high-speed microprogramming registers in their machines. This trend could make the problem of register allocation of minimal concern in the future.
Autotuning wavefront applications for multicore multi-GPU hybrid architectures
DOI: 10.1145/2560683.2560689
ABSTRACT
Manual tuning of applications for heterogeneous parallel systems is tedious and complex. Optimizations are often not portable, and the whole process must be repeated when moving to a new system, or sometimes even to a different problem size. Pattern-based programming models provide structure which can assist in the creation of autotuners for such problems. We present a machine learning based auto-tuning framework which partitions the work created by applications which follow the wavefront pattern across systems comprising multicore CPUs and multiple GPU accelerators. The use of a pattern facilitates training on synthetically generated instances. Exhaustive search space exploration on real applications indicates that correct setting of the tuning factors leads to a maximum of 20x speedup over an optimized sequential baseline, with an average of 7.8x. Our machine learned heuristics obtain 98% of this speed-up, averaged across a range of applications and architectures.
Categories and Subject Descriptors
C.4 [Performance of Systems]: Design Studies; D.1.3 [Programming Techniques]: Concurrent Programming; Parallel programming
Keywords
wavefront pattern, auto-tuning, multi-GPU
1. INTRODUCTION AND BACKGROUND
The advent of heterogeneous systems comprising multicore CPUs and manycore accelerators such as GPUs, has increased the computational power available to everyday users, but has come at a price to the application developer and programming toolchains. The developer now has to navigate diverse languages and libraries, and integrate these within single applications. Performance tuning of such applications is more complicated than tuning essentially homogeneous systems. Finding a programming methodology and toolchain which can address these challenges is widely recognized as being of major importance, both academically and industrially [5].
Pattern-oriented parallel programming [12] offers a promising approach to the heterogeneous parallelism challenge, by encapsulating parallel decomposition and distribution behind an API which requires the programmer to code only application specific aspects. This approach not only simplifies the programmer’s task but also presents the system with a constrained optimization challenge of choosing between and tuning parameters of a set of candidate, heterogeneous parallelizations. This can provide a basis for performance portability. We present a case study in the application of this approach. Our selected pattern is the wavefront. We have experimented across a range of wavefront applications across systems which incorporate a multicore CPU and multiple GPU accelerators. In order to better understand the tuning tradeoffs, and to assist in the evaluation of our heuristics, we have performed an exhaustive exploration of an interesting fragment of the tuning space, across a collection of systems comprising a CPU and single or multiple GPUs. Since such an exhaustive search would be impractical in a production system, we have investigated the application of machine-learning strategies to reduce the search time. We have experimented across a range of wavefront applications and heterogeneous systems. The wavefront pattern [6] abstracts computations which evaluate a class of multidimensional recurrence relations. Figure 1 gives a graphical representation of a two-dimensional wavefront. The values of the relation are computed into a multidimensional array. Computation starts at position (0,0) and propagates to neighboring elements in a series of diagonal bands, resulting from the dependencies inherent in the pattern. This wave-like sweep of computation gives the pattern its name.
For our purposes, the key characteristics of a wavefront instance are as described in table 1. dim is the number of rows in the array. For simplicity we assume square arrays, but this restriction could be lifted straightforwardly. tsize captures the granularity of the computation at each point in the array, which we assume to be regular, as is typically the case.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>dim</td>
<td>number of rows (and columns) of the square array</td>
</tr>
<tr>
<td>tsize</td>
<td>granularity of the computation at each point in the array</td>
</tr>
<tr>
<td>dsize</td>
<td>number of floating point data items at each point in the array</td>
</tr>
</tbody>
</table>
Table 1: Input Parameters
Figure 1: (a) Wavefront for a two dimensional instance of size 4 x 6 (b) The number of concurrently computable elements increases from iteration 0 until maximum parallelism is achieved at iterations 3, 4 and 5. Part (b) of the figure is inspired by [1].
\( dsize \) refers to the number of floating point data items at each point in the array, providing a measure of data granularity. These characteristics will form the input parameters to our autotuning framework. Their experimental values will be discussed in section 3.1.1.
The remainder of this paper is organized as follows. Section 2 presents our implementation strategy, its tuning points and the trade-offs these create. Section 3 discusses our experimental programme, covering the applications considered, implementation space and overall autotuning strategy. Section 4 discusses the results of our exhaustive evaluation of the tuning space. Section 4.2 evaluates our machine learning strategies used for autotuning. We review related work in section 5 and present our conclusions and future work in section 6.
2. IMPLEMENTATION STRATEGY
Our parallel wavefront execution strategy extends previous work [3] with support for GPU tiling and the use of multiple GPUs.
In a wavefront, data point computation time is roughly homogeneous so maximum parallelism occurs at the diagonal. Within a diagonal, computation of each data point is independent, hence overall diagonal computation is data parallel. The Single Instruction Multiple Thread (SIMT) constraints of the GPU architecture are thus satisfied by the diagonal major representation of data and successive diagonals can be offloaded onto a GPU. However, it is intuitively clear that this is only beneficial for diagonals of sufficient size and/or computational granularity to amortize the costs of transferring data to and from the device and of initializing execution. Determining these diagonals is a machine and application dependent tuning criterion. For the remaining data points, CPU computation is preferable and it is a common optimization to partition this space into rectangular tiles, computing all points in a tile sequentially in order to benefit from cache re-use. Optimal selection of tile size is also machine and problem dependent [10, 13].
Tiling within a GPU [1], reduces global memory access within the GPU and leads to local cache reuse, besides invoking fewer kernel calls from the host CPU. GPU tiles map to work-groups in OpenCL and the elements within the tile map to work-items or GPU threads. Within a work group, the work items have to be synchronized to follow the wavefront pattern. This introduces an overhead. The GPU tile size (our ‘gpu-tile’) tunable parameter is restricted by hardware and problem size.
Our single-GPU parallel implementation strategy therefore has three phases and three tunable parameters - number of diagonals to offload onto a GPU (or ‘band’) and the tile size of CPU and GPU (‘cpu-tile’ and ‘gpu-tile’). In the first phase, tiled parallel computation proceeds using all cores of the CPU. In the second phase, execution switches to the GPU where it proceeds, possibly tiled, diagonal by diagonal. In the third phase, computation reverts to the CPU and is completed in tiled parallel fashion. This implementation strategy is illustrated in figure 2. The second phase,
Figure 2: Implementation strategy showing the three phase computation for a 20 x 20 grid. Phases 1 and 3 have CPU tiles of size 4x4; phase 2 runs on the GPU with its 1D work groups, each kernel call corresponding to one diagonal
or in principle the first and third phases, may be null. In the latter case, computation is carried out entirely within the GPU.
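A small sketch of how the band parameter could carve the anti-diagonals into the three phases (our own interpretation of the strategy described above, with -1 used to encode "no GPU phase" as in section 3.1.1):

```python
# Sketch of the three-phase split controlled by `band`: the 2*band+1
# anti-diagonals centred on the main (longest) diagonal go to the GPU,
# the leading and trailing diagonals stay on the (tiled) CPU.

def three_phase_split(dim, band):
    n_diag = 2 * dim - 1
    main = dim - 1                                   # index of longest diagonal
    if band < 0:
        return list(range(n_diag)), [], []           # CPU-only execution
    gpu_lo = max(0, main - band)
    gpu_hi = min(n_diag - 1, main + band)
    phase1 = list(range(0, gpu_lo))                  # CPU, tiled
    phase2 = list(range(gpu_lo, gpu_hi + 1))         # GPU, diagonal by diagonal
    phase3 = list(range(gpu_hi + 1, n_diag))         # CPU, tiled
    return phase1, phase2, phase3

# three_phase_split(20, 5) offloads diagonals 14..24 of the 39 diagonals.
```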
The presence of multiple GPUs introduces two further tuning parameters. We must decide how many GPUs to exploit (tuning parameter \( \text{gpu-count} \)). Furthermore, partitioning data among multiple GPUs is non trivial and communication among GPUs is expensive. Wavefront dependencies force data in the border regions (or ‘halo’) of partitioned diagonals to be shared among the GPUs. This is shown for two GPUs in figure 3. As successive partitioned diagonals within each GPU get computed, their border data becomes stale. This necessitates halo exchanges (or ‘swaps’) between the neighbouring GPUs, depending on the extent of overlap or halo size. Each time this happens, data elements have to be first transferred to the host (CPU) memory and then transferred to respective destination GPUs. The overhead
from data communication mandates minimising communication between GPUs. However, increasing halo size causes more redundant computation. Thus halo size is our fifth tunable parameter. To summarise, the tunable parameters in
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>cpu-tile</td>
<td>side length of the square tiles for CPU tiling</td>
</tr>
<tr>
<td>band</td>
<td>number of diagonals on each side of the main diagonal, to be computed on the GPU</td>
</tr>
<tr>
<td>gpu-count</td>
<td>number of GPU devices to use</td>
</tr>
<tr>
<td>gpu-tile</td>
<td>the GPU equivalent of CPU tiling</td>
</tr>
<tr>
<td>halo</td>
<td>size of the halo for dual GPUs</td>
</tr>
</tbody>
</table>
Table 2: Tunable Parameters
Figure 3: The partitioning of three diagonals among two GPUs with subsequent halo regions
our implementation strategy are as listed in table 2. These will be the targets of our autotuning framework. In the next subsection we discuss tuning trade-offs. The tunable three phase strategy itself is captured in our library code, using threads to control CPU phases and our own OpenCL harness to control communication with and execution upon the GPU.
2.1 Performance tuning trade-offs
For the wavefront pattern, GPU computation becomes feasible when there is enough parallelism to be exploited. Thus a) the problem size (dim) should be large enough, since smaller sized problems can be computed quicker in the faster CPU cores and b) the granularity of task (tsize) should be high so that computation dominates over the cost of starting a GPU and the communication overhead of transferring data between GPU and CPU. This communication cost naturally increases when data size (dsize) being transferred increases. Another factor that increases communication cost is the number of GPUs employed. While with a single GPU data is transferred from/to CPU only twice, dual GPUs have the additional overhead of exchanging neighbouring data between themselves every few iterations (halo swapping). This overhead becomes more expensive if the data size is large as more time is spent in swapping halos. A reduction in halo swaps is obtained by increasing the halo size. The diagonal major structure of the problem grid in the GPU restricts this halo size to a maximum of the length of the start/end diagonal. Even at maximum size, the advantage gained from fewer swaps has to be traded against redundant computation, which starts affecting performance with increasing granularity of task.
Communication cost is another factor affecting GPU tiling (gpu-tile), since tiling reduces the number of kernel calls required but incurs the additional cost of synchronizing work items within each work group. If computation dominates over communication anyway, time spent in kernel calls no longer matters and tiling would then prove to be counterproductive.
Finally, the type of system affects the performance - a fast GPU coupled to a slow CPU means data will mostly be offloaded to the GPU (unless bandwidth is the bottleneck) leading to higher values of band. In such a system, CPU tiling will have negligible effect as most of computation is carried out in the GPU. Likewise, in fast CPU-fast GPU systems, good band values will be correspondingly lower.
3. EXPERIMENTAL PROGRAMME
We now describe our experimental programme. Our overall strategy is presented in figure 4, and is in line with standard applications of machine learning to the tuning of computer systems [11]. Our goals are to understand the relationship between settings of the internally tunable implementation parameters and performance, and to use machine learning techniques to control the automatic setting of these parameters. The first phase of our experimental programme deals with training our model, using the synthetic wavefront application. The second phase applies the learned model to real, previously unseen wavefront applications.
Figure 4: Machine Learning Strategy: The training set is created by selecting high performing instances from an exhaustive parameterized search of the synthetic wavefront application. Decision tree models are built from the training set and cross validated. In deployment, the model is passed features of the previously unseen application and returns appropriate tuning parameter settings.
3.1 Training Phase
Training is conducted with a synthetically generated wavefront application. This is parameterizable across a wide range of size and granularities. It is a strength of the pattern-oriented approach that such an approach is feasible, removing the need to find real applications for the training phase.
3.1.1 Parameter Space
In order to gain insights into the shape of the performance space and trade-offs, we first conduct an exhaustive evaluation of our synthetic application, across a range of settings for the input and output parameters, as listed in table 3. dim is straightforward. tsize is measured in units of the execution time of a single iteration of the synthetic kernel function on a single CPU core. The data structure for each element in our synthetic application consists of two int
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Range</th>
</tr>
</thead>
<tbody>
<tr>
<td>dim</td>
<td>500 to 3100</td>
</tr>
<tr>
<td>tsize</td>
<td>10 to 12000</td>
</tr>
<tr>
<td>dsize</td>
<td>1, 3, 5</td>
</tr>
<tr>
<td>cpu-tile</td>
<td>1, 2, 4, 8, 10</td>
</tr>
<tr>
<td>band</td>
<td>-1 to 2^dsize-1</td>
</tr>
<tr>
<td>gpu-count</td>
<td>0, 1, 2</td>
</tr>
<tr>
<td>halo</td>
<td>-1 to 0.5*(length of first offloaded diagonal)</td>
</tr>
<tr>
<td>gpu-tile</td>
<td>1, 4, 8, 11, 16, 21, 25</td>
</tr>
</tbody>
</table>
Table 3: Parameter Ranges
variables and a varying number of floats, controlled by dsize. For example, dsize=5 means the size of each element is 8+5*8=48 bytes, and so on.
Values of parameters like dim, tsize, band, halo are spaced irregularly to avoid any cyclic pattern and incorporate a degree of randomness as later the best performing values are used in training our learning models.
To simplify modelling, we have overloaded the band and halo parameters to encode gpu-count. Thus, since a band of $n$ means that $2n+1$ diagonals in total are assigned to the GPU, a band of $-1$ means that the GPU is not to be used. Larger band values mean that at least one GPU is used, with a non-negative halo size meaning that the gpu-count is 2.
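A few lines suffice to illustrate this encoding (an illustrative decoding helper, not part of the authors' framework):

```python
# Illustrative decoding of gpu-count from the overloaded band/halo values.

def decode_gpu_count(band, halo):
    if band == -1:
        return 0          # no GPU phase at all
    if halo >= 0:
        return 2          # two GPUs with halo exchange
    return 1              # single GPU; halo is unused (encoded as -1)

# decode_gpu_count(-1, -1) == 0; decode_gpu_count(5, -1) == 1; decode_gpu_count(5, 3) == 2
```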
To enable us to explore the parameter space within a reasonable time, we set a threshold limit of 90 seconds on the runtime (rtime) for any execution. This has no impact on our tuning since any point that exceeds this threshold limit is already a very bad configuration which would not be selected as a training example. We removed the threshold in collecting points for our serial baseline in order to correctly compute performance improvement.
3.1.2 Autotuning Strategies
We used decision trees to derive our learning model, using training data drawn from the synthetic application. Training sets are created by subsampling the exhaustive search data as follows: firstly a subset of the problem instances (i.e., by dim, tsize and dsize) are selected by regular sampling; then the best five performance points for these instances (by tunable parameter values) are added to the training set. The intuition is that these should be representative of the good decisions we wish to embed in our models. Initial evaluation is done through cross-validation, meaning evaluation is conducted on instances of synthetic application which were omitted from the training set at the first step, to avoid overfitting. We explored different configurations of the learning model to obtain test results that were at least 90% accurate. This model was then applied to the real applications. This procedure is repeated independently for each system, in line with a scenario which would see the software trained “in the factory”.
During training, we first build a binary SVM based predictor to decide whether or not to exploit parallelism. For those cases in which parallelism is predicted to be beneficial we then apply and evaluate two machine learning heuristics, based on M5P Decision Tree and REP Tree [9]. Previous work [3] found simple Linear Regression models lacking, and upon exploring different learning models we found the decision trees to be most accurate in predicting optimal values for our tunable parameters.
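The following sketch shows an analogous two-stage tuner built with scikit-learn rather than the Weka models used in the paper; the feature vectors, target values and model settings are placeholders invented for illustration.

```python
# Illustrative two-stage tuner with scikit-learn (the paper uses an SVM
# gate plus Weka's M5P/REP trees).  Features are (dim, tsize, dsize);
# the training values below are invented placeholders.

import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeRegressor

X = np.array([[500, 10, 1], [1100, 700, 5], [1900, 2000, 1], [3100, 12000, 5]])
use_parallel = np.array([0, 1, 1, 1])            # did any parallel scheme win?
best_band = np.array([0, 40, 120, 300])          # hypothetical best band values

gate = SVC(kernel="rbf").fit(X, use_parallel)    # stage 1: exploit parallelism?
band_model = DecisionTreeRegressor(max_depth=3).fit(
    X[use_parallel == 1], best_band[use_parallel == 1])   # stage 2: predict band

query = np.array([[2700, 8000, 1]])
if gate.predict(query)[0] == 1:
    print("predicted band:", band_model.predict(query)[0])
else:
    print("run on the CPU only (band = -1)")
```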
3.2 Evaluation Phase
We evaluated the performance of our learned model on two real world wavefront applications. These two applications are summarized below.
3.2.1 Evaluation Application Suite
Nash Equilibrium [15]: A game-theoretic problem in economics, characterized by small instances but a very computationally demanding kernel. The internal granularity parameter controls the iteration count of a nested loop.
Biological Sequence Comparison [2]: A string alignment problem from Bioinformatics, characterized by very large instances and very fine-grained kernels, varying with detailed comparisons made.
The input parameter values of these real world applications map to our synthetic scale as follows: one iteration of Nash corresponds to a tsize=750 with data granularity of dsize=4, while the Biological Sequence Comparison application has tsize=0.5 and dsize=0.
3.3 Platforms
Our three experimental systems are described in table 4. ‘HT’ stands for hyper-threaded CPU cores and ‘CU’ refers to the GPU compute units.
<table>
<thead>
<tr>
<th>System</th>
<th>CPU Freq (MHz)</th>
<th>Cores</th>
<th>Mem (GB)</th>
<th>GPU</th>
<th>GPU Freq (MHz)</th>
<th>GPU Mem (GB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>i3-540</td>
<td>1200</td>
<td>4</td>
<td>4</td>
<td>GTX</td>
<td>1401</td>
<td>15</td>
</tr>
<tr>
<td>i7-2600K</td>
<td>1600</td>
<td>8</td>
<td>8</td>
<td>4x GTX</td>
<td>1215</td>
<td>16</td>
</tr>
<tr>
<td>i7-3820</td>
<td>3601</td>
<td>8</td>
<td>16</td>
<td>Tesla C2070, C2075</td>
<td>1147</td>
<td>14, 6.4</td>
</tr>
</tbody>
</table>
Table 4: Experimental Systems
We measure runtime of the whole program execution using wall clock timers in the host program, averaging across three runs (which exhibited low variance of less than .01).
4. RESULTS AND ANALYSIS
In section 4.1 we investigate the characteristics of the search space created by our synthetic training application, and explore the resulting model. In section 4.2 we evaluate the model on real world applications.
4.1 Training : Exhaustive Search Results
We now present the results of our exhaustive search space exploration of the synthetic application across all three systems.
4.1.1 Optimal performance points
Figure 5 presents a set of four heatmaps for the two multiple GPU systems and two heatmaps for the single GPU system, with all maps having tsize and dim as axes, and plotting the values of band and halo (for multi GPU systems) that result in the fastest execution time. The upper half heat maps correspond to dsize=1 (element size=16 bytes) and the lower half to dsize=5 (element size=48 bytes).
Figure 5: Heatmaps illustrating the band and halo values at the best performing points from our exhaustive search across three systems and element sizes of 16 bytes ($dsize=1$; 1 float and 2 ints) and 48 bytes ($dsize=5$; 5 floats and 2 ints). The i3 system is a single GPU system, hence no halo heat map is shown. In all maps the x-axis is $tsize$, indicating kernel task granularity, and the y-axis is $dim$, indicating problem size.
From the maps it is clear that computing on the GPU becomes favourable ($band > 0$) when task granularity exceeds a certain threshold, and that this threshold varies depending on the problem size, data size and the hardware. Consider the case of $dsize=1$ (element size=16 bytes) for the i7 systems with fast CPU cores, where the GPU is used from $tsize \geq 500$ and $dim \geq 1900$ onwards. This differs from the i3 system with its slower CPU cores, where GPU use becomes feasible at a lower threshold of $tsize \geq 100$ and $dim \geq 1100$. Apart from the hardware affecting performance parameters, the effect of $dsize$ can be seen in all three systems, where the 48 byte sized elements make GPU use costly as previously discussed, leading to higher threshold values of $tsize \geq 2000$ for $dim \geq 1900$ and $tsize \geq 700$ for $dim \geq 1100$ in the i7 and i3 systems respectively. Note that halo sizes for the multi-GPU systems are higher when $tsize$ values are lower, owing to the trade-off between redundant computation cost and lesser communication cost, as discussed in section 2.1.
We conclude the heatmap observations by noting that GPU tiling was not beneficial in our search space. The tiled GPU implementation performed better than the untiled one only in cases where communication costs dominated computation costs ($tsize < 50$); however, in those situations the CPU-only parallel implementation dominated any GPU based implementation, owing to the additional overhead incurred from starting the GPU.
4.1.2 Comparison with simple schemes
Next we investigate the quality of these heatmap points, by comparing the average speed-up obtained from using these optimal points against the three simple schemes of carrying out computation a) serially in the CPU, b) in parallel across all CPU cores with no GPU phase and c) entirely in the GPU (figure 6).
Figure 7: Average case comparison for the Synthetic Application. The x-axis is dim-tsize, indicating groups of problem sizes whose kernel task granularity varies from 10 to 12K and the y-axis is rtime, indicating actual runtime. Best is the best exhaustive rtime (ber), AVG is the average rtime from all configurations, S.D. is the standard deviation from average. dsize refers to the number of floats in our synthetic data structure containing 2 int variables. Total element size = 16 bytes (dsize=1; 1 float and 2 ints) and 48 bytes (dsize=5; 5 floats and 2 ints).
We note that in case of the i7 systems, on average, doing everything on the GPU, is worse than doing everything on the CPU. This is because the fast CPU outperforms the GPU by a large margin for low task granularity points (up to 10x for tsize≤100, dim ≤ 1100).
4.1.3 Average case comparison
The next comparison evaluates optimal heatmap points against average behaviour. This is seen in detail in figure 7, which shows the best exhaustive runtime (abbreviated to ber) and the runtime (rtime) averaged across all possible combinations of tunable parameters. The figure includes corresponding standard deviations. The x-axis shows groups of dim-tsize, with dim varying from 500 to 2700 and, within each dim group, tsize varying from 10 to 12000. The y-axis is the rtime in seconds. Both halves show the performance across all three systems when element size=16 bytes and 48 bytes respectively. For dsize=1 (element size=16 bytes), the ber is 1.5-2 times faster than the average. The standard deviation steadily increases from dim=500 to dim=1900 due to the widening gap between the best performing and worst performing points. At dim=2700 there is a sharp drop as the rtime values exceeded our 90 second threshold. These points were excluded from the average. In case of dsize=5 (element size=48 bytes), the gap between ber and average rtime for dim=2700 at tsize=8K, 10K and 12K narrows down to just 20%. With higher dsize, the GPU overheads become larger and more points get excluded for exceeding the threshold.
4.1.4 Sensitivity analysis
We now explore how sensitive the best points are to changes in parameter values. Higher sensitivity would indicate that finding these points is challenging, whereas low sensitivity would indicate that simple random methods might suffice. Owing to space limitations we restrict our discussion of the exact distribution of points to two samples of dim=700 and dim=2700 belonging to the i7-2600K system. Figure 8 shows violin plots (a combination of box-plots and kernel densities) for these examples. We picked these two samples for dsize={1,5} as they are close to the boundary cases in our search space and they conclusively highlight how differences
in problem size and data granularity (and corresponding variation in kernel task granularity within them) impacts the search space. For $\text{dim}=700$ we note that most of the points in $\text{tsize}=100$ to 1K are dispersed around the median value (represented as the white dot) with the best and worst points at the extreme ends. This is due to the best configuration in these cases being all CPU (see the heatmap in figure 5 showing $\text{band}=-1$ for i7-2600K where $\text{dim}=700, \text{tsize} \leq 2$K). In that case the tunable parameters are only $\text{cpu-tile}$ and $\text{dsize}$ resulting in configurations numbering in tens instead of thousands. Contrast this with $\text{tsize} \geq 2$K and for all points in $\text{dim}=2700$ where there are many points less than the median value, as seen from the flat base of each violin. These cases correspond to various combinations of the tunable parameters $\text{band}$, $\text{halo}$ and $\text{gpu-tile}$ in addition to $\text{cpu-tile}$. We also observe that in case of $\text{dim}=2700$, $\text{dsize}=5$ variations in the former three parameters do not affect performance as much as for $\text{dim}=700$. This is also confirmed by the lower gap between average $\text{rtime}$ and $\text{ber}$ (see figure 7). However selecting the worst points in these cases, such as computing on the CPU only with $\text{band}=-1$ when $\text{dim}=2700$, $\text{dsize}=1$ and $\text{tsize} \geq 4$K, is quite costly (up to 8 times slower). The worst case in these cases are the best points for $\text{dim}=700$, $\text{tsize} \leq 2$K. Thus, while variation in tunable parameter values from the best values within a subset of input configurations may not affect performance, it can affect performance in other subsets.
We note that the best points in some subsets were the worst ones in others and vice versa, meaning that any attempt to hand code heuristics for each case quickly becomes impractical. The exhaustive search results vindicate our choice to pursue auto-tuning strategies based on machine learning.
4.1.5 The learned model
A fragment of the learned model which predicts the optimum $\text{halo}$ values for the i7-2600K system is shown in figure 9. The regression equation (LM1) shows that $\text{halo}$ depends on other tunable parameters like $\text{band}$ and $\text{cpu-tile}$. This agrees with our intuition as $\text{halo}$ values are a measure of the extent of overlap among partitioned diagonals offloaded onto GPUs. Hence, $\text{halo}$ values depend on $\text{band}$ and $\text{cpu-tile}$ values, apart from the input parameters of task granularity and data granularity.


In our exhaustive search we found \textit{gpu-tile} values corresponded to either 1 or 0 (the latter meaning a GPU was not employed), so it was a binary decision that was accurately predicted using REP Tree. \textit{cpu-tile} and \textit{band} values, like \textit{halo} values, were predicted using the M5 pruned tree model.
4.2 Evaluation : Autotuning Results
For the fine grained Smith-Waterman string comparison application autotuning was trivial, as the band predictions were 100% accurate, i.e. do everything on the CPU. Our learning model had predicted band=-1 for all \textit{tsize} < 100, across our search space of \textit{dim} \leq 3100. Thus in the context of our search space only the predicted \textit{cpu-tile} values differed and selecting the best points was trivial.
A summary of our auto-tuner’s performance for the Nash application is shown in figure 10. This figure describes for each system, the average optimal speed-up against a sequential baseline found during exhaustive search of Nash, and the speed-up obtained by our auto-tuner.
The super-optimal performance in the case of the i3-540 is explained by the fact that our regression model based tuner is free to select parameter values which lie outside the set of cases explored in the (necessarily finite) full search. The better quality predictions for the i3-540 can be explained by considering a) it is a single GPU system with only two tunable parameters \textit{band} and \textit{cpu-tile}, i.e. less parameter values to predict as compared to the multi-GPU systems and b) its four CPU cores are slow relative to its GPU, meaning most of the data is often offloaded onto the GPU, easing prediction as compared to the i7 systems with fast CPU cores.
We conclude this section with a detailed visualization of how our auto-tuning fares against the best exhaustive runtime or ‘ber’ (figure 11). The \textit{rtimes} after autotuning are slightly lower than the ber for the i3-540 at many points (as discussed above), while they are slightly higher for the i7 systems as prediction is harder.
5. RELATED WORK
\textsc{CO2P3S} [4] is a wavefront framework that generates parallel programs from user supplied methods and data. However, it is restricted to shared memory architectures and does not employ any optimization techniques for any combination of its application dependent properties. The wavefront abstraction in [15] targets multicore and distributed systems. However, its tunable parameters are specific to distributed systems. It also employs processes instead of threads as they are more adaptable to distributed systems but overhead from processes can impact performance.
Stencils have similar issues, but a different dependency pattern to wavefront. Autotuning for the stencil pattern has been widely investigated (e.g. [10, 14, 16]). A multi-GPU framework to handle stencils is covered in [18]. A key difference with our implementation is the absence of dependence between elements in a stencil pattern, which means halo swapping is less frequent for stencils distributed over multiple GPUs than for wavefronts. Dynamic autotuning of multi-GPU/multicore CPU systems can also be based on analytical models [17]. However the problem class considered doesn’t belong to the dynamic programming class of problems and auto-tuning is done without resorting to machine learning. Among dynamic auto tuning frameworks, the Active Harmony framework [8] uses the greedy or Nelder Mead algorithm to search a high dimensional space and the tuning results are then treated as a new experience to update the
data characteristics database for future reference. Performance models for wavefront applications on GPU-enhanced HPC systems are presented in [7]. Machine learning techniques have been successfully employed to efficiently explore the CPU-GPU optimization space in [11], though here the decision tree models were used to select either multi-core CPU or GPU implementation and not a hybrid CPU + multi-GPU setup.
6. CONCLUSIONS AND FUTURE WORK
We have presented a framework that successfully encapsulates decomposition and distribution of wavefront computations across CPU cores and GPUs while automatically selecting high quality configurations with respect to problem size, data size and kernel task granularity. We demonstrated that well chosen settings for the number of diagonals to be offloaded (band) and length of overlap of computation between GPUs (halo) can produce significant improvements in the performance, while tiling inside the GPUs (gpu-tile) did not affect performance within our simple search space. Correspondingly, poorly chosen settings resulted in performance which was far from optimal. Our decision tree based auto-tuners were modelled on training data from instances of a synthetic application. This successfully predicted the optimal values for various tunable parameters for the fine grained Biological Sequence Comparison and coarse grained Nash wavefront applications, across three different systems, finding an average of 98% of the performance achieved by an exhaustive search. In future we plan to extend our framework to incorporate other dynamic programming problems, beyond simple wavefronts, such as the 0/1 knapsack problem [19]. We aim to enhance our tiled multi-GPU strategy by incorporating more than two GPUs and plan to upgrade our offline auto-tuner to tune at runtime.
7. REFERENCES
Towards a Generalized Architecture for the Integration of Tools in LMSs
doi:10.3991/i-jet.v4s1.792
J. Fontenla, M. Caeiro and M. Llamas
University of Vigo, Vigo, Spain
Abstract—In this article we introduce the main components of a generalized architecture to facilitate the integration of tools in LMSs. This proposal tries to improve the reuse possibilities of the tools in e-learning systems. Reusability has been suggested as a key solution to reduce the high costs of the development of educational experiences in e-learning systems. Up to now, reutilization has focused mainly on the educational contents around metadata standards, contents formats, packaging systems, etc. However, educational practices usually involve tools to facilitate the communication, collaboration and work of students and teachers as well. This proposal is part of a wider solution based on the language PoEML, in which not only the possibility of the inclusion of tools in LMSs is considered, but also the management of its utilization.
Index Terms—Educational modeling languages, Perspective-oriented EML, Learning Management Systems, Web Services, Reusability.
I. INTRODUCTION
Since it was noticed that e-learning was not a cheap alternative to traditional learning, the development of specifications and standards to promote the reuse of educational resources has been one of the priorities of the research community. Up to now this initiative has focused mainly on educational contents, leading to results such as ADL SCORM [1] or IMS QTI [2]. Basically, these proposals define how to structure and to arrange the media contents to support and facilitate the development of educational experiences according to different pedagogical purposes. However, educational practices usually involve tools based on Information and Communications Technologies (ICTs), with which students and teachers can interact, experiment, communicate and even manipulate the contents themselves. Current LMSs usually include a basic set of tools that broadly cover the most common needs, but the support to the inclusion of third-party tools or to the development of new tools is very expensive or even nonexistent. So, for the sake of developing versatile e-learning systems at low cost, the need of facilitating the reuse of tools arises.
For this reason, a new business model in which the development of LMSs and tools can be split is considered; the approach is the integration of external tools in EML-based LMSs. In this model, the LMS provides functionalities to achieve the planning, coordination and management of educational events. On the other hand, tools provide the necessary functionalities for students and teachers to communicate, collaborate and generally work in the development of their educational tasks.
The proposal of using external tools requires the consideration of several problems that must be solved. Firstly, the LMS could be linked with a set of external tools in a static or a dynamic way. If dynamic, search systems are needed in order to locate tools matching the desired characteristics. Middleware systems are also needed in order to control the access of users from the LMS to the external tools, handling their authentication, the control of sessions, the management of the actions of the users in the external tools, and in general all the communications that might take place between the LMS and the tools.
Nowadays there are several groups working on specifications and recommendations that face some of these problems, although they are in an early stage and maybe with a narrower and less generalized scope than the one described here. In this article we present an architecture to deal with the different needs identified in the integration of external tools in EML-based LMSs. This proposal is part of a wider solution based on the educational modeling language PoEML [4]. In this language other aspects apart from the provision of functionalities through external tools are taken into account, with the general purpose of supporting the development of educational experiences according to different pedagogical approaches, mainly those based on collaboration and practice.
This article is organized as follows. Section 2 briefly introduces educational modeling languages, focusing on PoEML. Section 3 reviews in a critical way some specifications and recommendations on the use of Web Services in e-learning environments. Section 4 depicts the proposed architecture to solve those problems identified in Section 3. Finally, Section 5 ends up with some conclusions.
II. EDUCATIONAL MODELING LANGUAGES
Educational modelling languages (EMLs) have been proposed to allow the creation of descriptions (or models) of didactic units. The objective of such descriptions is to allow their processing by suitable computational applications, namely LMSs based on the EML of the description, to support the development of didactic units. In this sense, it could be said that EMLs are an executable notation, involving those elements and processes that take part in educational scenarios.
Nowadays there is a de-facto standard among these languages, named IMS Learning Design (IMS-LD) [5]. Some projects have dedicated resources for years to the creation of an LMS based on IMS-LD, but they have not been able to obtain a ready-to-use system. Perspective-oriented EML (PoEML) is a new EML that tries to improve the description and computational support of didactic units. The main characteristic of this language is the separation of the models of didactic units into several parts, called perspectives, that, to a great extent, can be tackled separately. A first consequence of this separation is that each part may include several alternative designs, independently of the descriptions made in other parts or with a controlled dependency. This way, the reuse of the models is facilitated.
PoEML divides the modeling of didactic units into 13 perspectives. These perspectives have been thoroughly described in other publications [6], so here we briefly describe only those relevant for this article: those regarding the integration of external tools. The Tools perspective models the characteristics (functional and behavioural) of the tools required in the environments. One of its most original characteristics is that, unlike IMS-LD, it allows an indirect or decoupled characterization of the educational tools used in a didactic unit, as well as their explicit description. In this characterization, a tool is defined according to functional requirements (the expected functionalities) and behavioural requirements (the permissions it can grant, the events it notifies, and the operations that can be invoked automatically). Later, the LMS will be responsible for integrating an external tool that satisfies such a characterization. With that, the dynamic inclusion of external tools is achieved.
Apart from facilitating the inclusion of external tools, PoEML proposes three specific perspectives to manage and control the use that students and teachers make of such tools. These perspectives are:
- The Authorization perspective, which allows the assignment of permissions to the participants of the didactic unit. In this perspective, for example, it could be modeled that students can send messages to a forum, but only teachers are allowed to create new threads or to erase posts.
- The Awareness perspective, in which the capture of relevant events triggered by the interaction of the participants (students and teachers) with tools can be modeled. Apart from the capture of events, their processing (e.g. filtering) and the reporting of events of interest to some participants can also be modeled. The notification of events is very useful in collaborative educational scenarios, in which the rest of the participants must be aware of the changes performed on some shared resource, such as a text file or a code fragment.
- The Interaction perspective, in which it is possible to describe the automatic and controlled invocation of operations during the realization of a didactic unit. The automatic invocation of methods is useful in practical scenarios in which guided demonstrations are needed. Operations may also be invoked only when some events take place. For example, in this perspective it can be modeled that a chat tool must send a welcome message every time a new participant logs in.
III. EXISTING RECOMMENDATIONS ON TOOLS INTEGRATION
Service Oriented Architectures (SOA) are being seen as a highly flexible way to build up complex applications from decoupled components. In recent years, some projects following this approach have been started in the field of e-learning. In this section we comment on the three that we consider most remarkable, and we discuss some of their limitations, which will be taken into account when we propose an architecture based on the perspectives of PoEML.
The E-Learning Framework (ELF) [7] is an initiative of the United Kingdom’s Joint Information Systems Committee (JISC), in collaboration with the Australian Department of Education, Science and Training (DEST) and the United States’ Learning Services Architecture Lab (LSAL). ELF does not specify any concrete architecture to integrate external tools in an LMS; on the contrary, its main purpose is to facilitate the development of architectures of LMSs based on Web Services. It identifies more than 40 necessary modules in an LMS, providing a comprehensive set of functionalities. Thus, it allows the community to have a shared “vocabulary” and a reference framework for the development of e-learning systems. At the present moment there exist several projects related to some of the components identified in ELF [8], although there is not enough cohesion among them to build an LMS with a minimum of functionality.
The specification IMS Tools Interoperability (IMS-TI) [9] proposes a more specific framework than that of ELF. IMS-TI makes use of a combination of Web Services and proxying to integrate external tools in an LMS. The specification, currently at version 1.0 and with version 2.0 under development, tries to eliminate the need for proprietary interfaces between e-learning platforms and tools which, ultimately, would allow both classes of systems to follow independent development processes, thus promoting specialization, innovation and competition. The configuration of the tools is done by editing an XML file at the LMS side, although it is expected that in version 2.0 this can be done in a more automatic way.
In our opinion, IMS-TI has two main drawbacks. The first one is the lack of reference implementations that could be used as a guide for new developments. There are only a few implementations, such as the public demonstration for the “alt-i-lab 2005 Conference” [10], or those prepared for the “Google Summer of Code 2008” [11], which is not in agreement with the expectations put on IMS-TI. The second one is that, although it allows the seamless execution of remote tools, IMS-TI does not provide any way to control and manage the use of the tools by teachers and students.
Finally, CopperCore Service Integration (CCSI) [12] is another architecture proposed for integrating tools in IMS-LD-based LMSs. CCSI is an intermediate layer between the IMS-LD engine CopperCore and the presentation layer built upon CopperCore. Every time the presentation layer wants to invoke a tool (for example, an assessment tool) it will access the CCSI layer, which in turn will invoke the tool. The latter will send the results to CCSI, which in turn will forward them to the presentation layer and to CopperCore. The communication between the presentation layer and the different tools is possible because CCSI exposes an interface with a set of predefined methods for each kind of service that may be accessed; CCSI then handles the adaptation of the presentation layer’s call to invoke the concrete tool.
CCSI has some limitations that have kept its acceptance lower than desired. Firstly, like IMS-TI, it neither supplies mechanisms to control and manage the use of the tools, nor allows supervising the activity of the students, nor allows configuring the automatic invocation of methods. Secondly, only one tool of each kind can be integrated (e.g. it is not possible to integrate two different text editors), which may dramatically reduce the possibilities of the system to satisfy the needs, preferences and personal limitations of the users. Thirdly, a complicated editing process of XML files must be accomplished in order to integrate new tools, which may be difficult for users who are not familiar with this language. Finally, the architectural design of CCSI itself implies extra work for the developers of applications, as they must supply the tools as well as extra modules for their integration with CCSI.
**IV. NEEDS OF THE INTEGRATION OF EDUCATIONAL TOOLS**
The above-mentioned architectures have some drawbacks that have kept their adoption quite limited. A common point to all of them is that, although they offer a framework to integrate tools, they do not allow controlling and managing the way teachers and students use them.
In this section, the architecture of a PoEML-based LMS that solves these problems is proposed. The processes needed to integrate a tool in this LMS and to configure its permissions, events and operations are shown, and some related problems are discussed.
**A. Architecture of a PoEML-based LMS**
The decomposition into perspectives carried out in PoEML allows us to tackle the design of the LMS in a modular fashion, discarding the monolithic design of current LMSs, which is difficult to develop and maintain (see Fig. 1). In this figure we can see three different parts:
- **The central layer is the Engine.** This part supplies the core functionalities of the system. For the development of this engine, the development of independent modules according to the perspectives of PoEML is proposed. One of the modules, the one related to the Tools perspective, would be the responsible for managing the configuration of the tools, the capture of the events triggered by the tools, the automatic invocation of operations, the control of sessions and instances, and the data transfer.
- **On top of the engine is the Presentation layer,** in which we find the applications that build up the user interface for teachers and students. The use of a presentation layer allows us to use the same engine and the same infrastructure, while offering different functionalities and appearances.
- **Beneath the engine is the Infrastructure layer.** This layer supplies a set of storing functionalities and general purpose services, from databases with the data and marks of the students, to common tools that should not have availability problems such as forums, email or calendars. It’s worth mentioning that this layer is the one that will receive the PoEML-formatted file that will be processed and played by the Engine.
Given the loose coupling of the parts that build up the Engine, inherited from the loose coupling among the perspectives of PoEML, it is possible to develop them independently. Thus, none of the modules of the other perspectives will influence the way the modules of the Tools, Authorization, Awareness and Interaction perspectives are implemented, which are the focus of this article. So, in the following we can ignore the architecture of the rest of the LMS.
**B. Classification and communications with tools**
One of the most remarkable characteristics of PoEML is the decoupled characterization of tools in terms of their functionalities, permissions, events and operations. However, PoEML does not specify how tools must be described in practice; a shared vocabulary (an ontology covering functionalities, permissions, events and operations) is assumed for this purpose.
The availability of a vocabulary for the characterization of tools is the basis of the development of systems to classify, search and configure them. To allow the configuration of tools, the use of a public interface is proposed. This interface has both read methods and write methods. Read methods allow us to know the permissions, events, operations and functionalities of a tool. In response, the tool will return a subset of the values defined in the ontology. An example of a read method could be getPermissions(), which returns a description of the available permissions of the tool, in agreement with the concepts of the ontology. On the other hand, write methods allow us to activate or disable some of the characteristics of the tools, receiving as parameters the characteristic to be modified and a boolean value with its new state. An example of a write method could be setEvents(“newMember”, true), which tells the tool that it must notify events when a new member joins. Through this set of methods, it is possible to configure all tools systematically using a single interface (and so a single application), and the LMS may automate the integration of tools.
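A minimal sketch of what such a public tool interface could look like is shown below. Only getPermissions() and setEvents() come from the text above; the remaining method and type names are illustrative assumptions, not part of the PoEML specification.

```java
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch of the generic read/write interface a tool could expose
 * so that any PoEML-based LMS can inspect and configure it. Except for
 * getPermissions() and setEvents(), the names are hypothetical; the actual
 * vocabulary would come from the shared ontology.
 */
public interface ToolConfigurationService {

    // Read methods: describe the tool in terms of the ontology.
    Set<String> getPermissions();      // e.g. "sendMessage", "createThread"
    Set<String> getEvents();           // e.g. "newMember", "newPost"
    Set<String> getOperations();       // e.g. "muteRoom", "createRoom"
    Set<String> getFunctionalities();  // e.g. "forum", "chat"

    // Write methods: activate or disable a characteristic of the tool.
    void setEvents(String eventName, boolean enabled);          // setEvents("newMember", true)
    void setPermissions(String permissionName, boolean enabled);

    // Stores the resulting configuration as a named profile (e.g. "student").
    void saveProfile(String profileName, Map<String, Boolean> settings);
}
```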
Other projects such as CCSI also define a generic API, but they do not have methods to describe the tool being integrated, nor to perform the control and management of its use according to its permissions, events and operations.
**C. Configuration, integration and use of tools**
The use of a generic API facilitates the process ranging from the configuration of a remote tool to its use by a student or a teacher, shown in Fig. 2. The course manager will ask the tool for the list of parameters that can be configured (1), invoking the appropriate methods of the API. As a result, the tool will send some data (functionalities, permissions, events and operations) about itself (2). The course manager will choose from these data those permissions, events and automatic operations that are suitable for the course, and will create a profile with them (3), again using known methods of the API. This profile will be applied to the sessions of all those users (teachers and students) that access the tool. Should another configuration be necessary (for example, if teachers had to do some management over the course), another profile with different parameters should be created. Next, the course manager will store in the tools database the data concerning the tool that has just been configured (i.e. storing those permissions, events and operations supported by the tool and configured in the profile, and the URL of the tool) for future use. The tool is thus correctly configured to be used.
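Assuming the illustrative interface sketched above, steps (1)-(4) could be driven from the LMS side roughly as follows; ToolsDatabase and all identifiers are hypothetical names used only for this example, not part of the proposal.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Hypothetical course-manager-side sketch of configuration steps (1)-(4). */
public final class ToolSetup {

    /** Hypothetical LMS-side store for configured tools (step 4). */
    public interface ToolsDatabase {
        void register(String toolUrl, String profileName, Map<String, Boolean> settings);
    }

    public static void configureForumTool(ToolConfigurationService forum,
                                           ToolsDatabase toolsDb,
                                           String toolUrl) {
        // (1)-(2) Ask the tool which characteristics it supports.
        Set<String> permissions = forum.getPermissions();
        Set<String> events = forum.getEvents();

        // (3) Build a "student" profile with the subset suitable for the course.
        Map<String, Boolean> studentSettings = new HashMap<>();
        studentSettings.put("sendMessage", permissions.contains("sendMessage"));
        studentSettings.put("createThread", false);                 // reserved for teachers
        studentSettings.put("event:newPost", events.contains("newPost"));
        forum.saveProfile("student", studentSettings);

        // (4) Store the tool's URL and configured profile for future use.
        toolsDb.register(toolUrl, "student", studentSettings);
    }
}
```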
The process continues when a student, using a web browser, wants to resume his/her activities. Firstly, he/she will authenticate when logging in to the LMS (5), after which he/she will receive an affirmative or negative confirmation (6). If affirmative, the browser of the student will automatically request from the LMS a list of the courses in which the student is participating (7). The LMS will return such a list (8), and the student will choose a concrete course (9). When a course has been chosen, the LMS will send another tree-shaped listing (10), whose nodes are the different educational scenarios that build up the course (e.g. “Theory” and “Practice”, which in turn could be built up by the educational scenarios “Practice 1” and “Practice 2”). The student will choose an educational scenario (11), and finally the LMS will display a web page with all the necessary information for the development of the educational scenario, including links to external tools (12). These tools may already have running instances (e.g. a collaborative text editor that is already being used by other students) or they may not, in which case one should be launched; the Tools perspective module will be responsible for managing the number of instances of each tool. In any case, when a student accesses a tool (13) he/she will authenticate himself/herself (for example, using some hash code sent with the HTTP POST method), and the tool will look for the profile to be applied. From this moment, the tool will display a user interface in accordance with the permissions assigned to the student, will notify events to the LMS as configured in the profile (14), and will execute all those operations required by the LMS. Finally, with all the events notified, the LMS will generate logs, which will be stored for future use (15).

**Figure 2.** Configuration and use of external tools.
**D. Management of permissions, events and operations**
One subject that must be addressed is the management of permissions, events and operations during the process described above.
1) **Management of permissions**
The permissions that users are granted are given by the active profile. This implies that users cannot modify their permissions during a session, unless they change their profiles. This change of profile is possible using different authorization specifications. As pointed out in Section 2, a perspective may include different specifications that may be activated or deactivated dynamically (e.g. according to the marks of the student), so it is possible to assign different permissions to a participant. The interface displayed will be in accordance with the permissions granted, hiding those options of the tools that require more privileges.
2) **Management of events**
The profile also contains all the information regarding the notification of events. Firstly, the URL that events must be sent to (namely, the LMS’s URL) is included; once events have been received, the Awareness perspective module will process them (e.g. filter) and notify them to the interested participants or to another tool (e.g. for their persistent storage).
3) **Management of operations**
The events received at the LMS may be used to determine when some operations must be invoked automatically. For example, it could be interesting to invoke an operation to silence a chat room when the “The teacher has joined the room” event is received. Other operations must be invoked according to a temporal specification, independently from the events generated by the use of the tool, for example “Create a chat room for the subject Distributed Computing on June 29th at 12:30”.
In any case, the call that invokes an operation must specify the concrete instance of the tool (e.g. in the example of the chat tool, the operation to silence the room must be applied only to the instance of the students, not to the instance of the teachers).
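A minimal sketch (with illustrative names, not taken from PoEML) of how the module handling the Interaction perspective could map received events to automatic operation invocations on the instance that raised them:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch of event-driven operation invocation: when a configured
 * event arrives from a concrete tool instance, the corresponding operation is
 * invoked automatically on that same instance.
 */
public final class OperationTrigger {

    /** Hypothetical handle to a concrete running instance of a remote tool. */
    public interface ToolInstance {
        void invoke(String operationName);
    }

    /** Rules of the form: event name -> operation to invoke on the same instance. */
    private final Map<String, String> eventToOperation = new ConcurrentHashMap<>();

    public void addRule(String eventName, String operationName) {
        eventToOperation.put(eventName, operationName);
    }

    /** Called by the Awareness module whenever a tool notifies an event. */
    public void onEvent(String eventName, ToolInstance source) {
        String operation = eventToOperation.get(eventName);
        if (operation != null) {
            // E.g. silence the students' chat room when "teacherJoined" is received.
            source.invoke(operation);
        }
    }
}
```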
**E. Data persistence among sessions**
Another important subject is the persistence of data between different sessions of the tools. A user should be able to resume the work in the same state it was when the last session concluded, in the same way it would be possible if the tool were integrated in the LMS (e.g. in the case of a collaborative text editor, it should be possible to continue editing the text of the last session). Three possibilities are proposed:
- **Client-side storage**: the user browser will use techniques such as cookie sending. The cookie field will contain the data of the last session. This technique has the important drawback that it’s only feasible when the amount of data is low, so it wouldn’t be suitable to send, for example, a whole file.
- **LMS-side storage**: whenever a remote tool is invoked, session data will be transferred from the LMS. This solution has the advantage that the LMS has control over session data, thus avoiding problems of data loss due to availability problems of the tools. However, it implies that LMSs and educational tools can’t be developed independently. Indeed, tools developers must not assume that there will be external systems (in this case, the LMSs) that will store and manage session data.
- **Tool-side storage**: data are stored at the tool. This option is the most interesting, as it could be desired that a remote tool could be used in a standalone way, with no need of other systems supplying it data.
So, the systems that store and manage session data are the remote tools themselves. However, an intermediate solution could be applied: to deal with availability problems of the tools, backup copies of the data could be kept at the LMS.
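The intermediate solution can be pictured as a thin decorator over the tool’s own storage; everything below is an illustrative sketch under that assumption, not part of the proposal’s specification.

```java
/** Illustrative storage abstraction for a user's session state in a tool. */
interface SessionStore {
    void save(String userId, byte[] sessionState);
    byte[] load(String userId);
}

/**
 * Sketch of the intermediate solution: the tool remains the primary store,
 * while every save is mirrored to an LMS-side backup that is used as a
 * fallback if the tool is unavailable.
 */
final class BackedUpSessionStore implements SessionStore {
    private final SessionStore toolStore;   // primary: tool-side storage
    private final SessionStore lmsBackup;   // backup copy kept at the LMS

    BackedUpSessionStore(SessionStore toolStore, SessionStore lmsBackup) {
        this.toolStore = toolStore;
        this.lmsBackup = lmsBackup;
    }

    @Override
    public void save(String userId, byte[] sessionState) {
        toolStore.save(userId, sessionState);
        lmsBackup.save(userId, sessionState);
    }

    @Override
    public byte[] load(String userId) {
        byte[] state = toolStore.load(userId);
        return state != null ? state : lmsBackup.load(userId);
    }
}
```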
V. CONCLUSIONS
The task of developing a new LMS can be extremely complex, as all aspects of a didactic unit must be taken into account. Following a separation-of-concerns approach, the problem can be faced in an easier way. PoEML is an EML following such an approach, so it is natural to build a PoEML-based LMS.
One of the perspectives considered in PoEML is the Tools perspective, which allows the decoupled description of tools, thereby promoting the reuse of the models of didactic units. Although nowadays there are some recommendations to promote the reuse of didactic units, they focus only on educational contents and ignore the tools used to manipulate them.
The use of Web Services in e-learning environments is a promising approach to complement such initiatives. Firstly, software developers can specialize and focus their efforts either on the LMS or on the external tools. This implies lower development costs and a shorter period between the releases of new versions. Secondly, it is possible to develop ad-hoc tools for a concrete didactic unit and use them in different LMSs. Thirdly, teachers could choose the most suitable tools for their didactic units from a broad set of tools, as they would not be exclusive to a concrete LMS. Finally, it would be possible to build LMSs supporting a larger number of users, as the computational load would be spread across the servers of the LMS and the tools.
In our opinion, the existing specifications on this subject (mainly IMS-TI and CCSI) are still in an early stage, or they don’t fully support the control and management of the tools.
**REFERENCES**
[12] CopperCore Project official site. Last accessed May 2008 at: http://coppercore.sourceforge.net/
AUTHORS
J. Fontenla is a Telecommunications Engineer and a PhD Student in the University of Vigo. He is currently Assistant Teacher at the Department of Telematic Engineering, University of Vigo.
M. Caeiro received his PhD in Telecommunications Engineering from the University of Vigo in 2007. He is currently Assistant Teacher at the Department of Telematic Engineering, University of Vigo. He has received several awards from the W3C, NAE CASEE New Faculty Fellows and the IEEE Spanish Chapter of the Education Society.
M. Llamas received his Eng. degree (1986) and his Ph.D. degree (1994) from the Polytechnic University of Madrid. From 1994 to 1997 he was Vice-dean of the Higher Technical School of Telecommunication Engineers. From 1999 to 2003 he was the head of the ICT Area of the University of Vigo. He is a member of ACM, IEEE and IFIP WG3.6 (Distance Education). He has received several awards from the W3C and IEEE.
This work has been funded by the Spanish Ministerio de Educación y Ciencia under grant TIN2007-68125-C02-02, and by the Galician Consellería de Innovación e Industria under grant PGIDIT06PXIB322270PR.
This article was modified from a presentation at X International Symposium on Computers in Education (SIIE2008) 1st-3rd October 2008, Salamanca, Spain. Manuscript received 08 January 2009. Published as submitted by the authors.
A brief introduction
• Joe Davis
• Lead Developer Support Engineer, PowerVR Graphics
• With Imagination’s PowerVR Developer Technology team for 6 years
• PowerVR Developer Technology
• SDK, tools, documentation and developer support/relations (e.g. this session 😊)
PowerVR Rogue Hardware
PowerVR Rogue
Recap
- **Tile-based deferred renderer**
- Building on technology proven over 5 previous generations
- **Formally announced at CES 2012**
- **USC - Universal Shading Cluster**
- New scalar SIMD shader core
- General purpose compute is a first class citizen in the core …
- … while not forgetting what makes a shader core great for graphics
### TBDR
*Tile-based*
- **Tile-based**
- Split each render up into small tiles (32x32 for the most part)
- Bin geometry after vertex shading into those tiles (sketched below)
- Tile-based rasterisation and pixel shading
- Keep all data access for pixel shading on chip
- Deferred rasterisation
- Don’t actually get the GPU to do any pixel shading straight away
- HW support for fully deferred rasterisation and then pixel shading
- Rasterisation is pixel accurate
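Purely as a conceptual illustration of the binning step listed above (this is not how the hardware is implemented; the tile size, types and data structures are simplifying assumptions for the example):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Conceptual illustration of binning in a tile-based renderer: after vertex
 * shading, each triangle is assigned to every 32x32 tile its bounding box
 * overlaps, so rasterisation and pixel shading can later run one tile at a
 * time with all data access kept on chip.
 */
public final class TileBinner {

    public static final int TILE_SIZE = 32;

    /** Screen-space bounding box of a post-vertex-shading triangle. */
    public record Triangle(float minX, float minY, float maxX, float maxY) {}

    /** bins[tileY][tileX] holds the triangles overlapping that tile. */
    public static List<Triangle>[][] bin(List<Triangle> triangles, int width, int height) {
        int tilesX = (width + TILE_SIZE - 1) / TILE_SIZE;
        int tilesY = (height + TILE_SIZE - 1) / TILE_SIZE;

        @SuppressWarnings("unchecked")
        List<Triangle>[][] bins = new List[tilesY][tilesX];
        for (int ty = 0; ty < tilesY; ty++)
            for (int tx = 0; tx < tilesX; tx++)
                bins[ty][tx] = new ArrayList<>();

        for (Triangle t : triangles) {
            int x0 = Math.max(0, (int) t.minX() / TILE_SIZE);
            int y0 = Math.max(0, (int) t.minY() / TILE_SIZE);
            int x1 = Math.min(tilesX - 1, (int) t.maxX() / TILE_SIZE);
            int y1 = Math.min(tilesY - 1, (int) t.maxY() / TILE_SIZE);
            for (int ty = y0; ty <= y1; ty++)
                for (int tx = x0; tx <= x1; tx++)
                    bins[ty][tx].add(t);  // only this geometry is fetched when the tile is shaded
        }
        return bins;
    }
}
```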
TBDR
Bandwidth savings
- Bandwidth savings across all phases of rendering
- Only fetch the geometry needed for the tile
- Only process the visible pixels in the tile
- Efficient processing
- Maximize available computational resources
- Do the best the hardware can with bandwidth
TBDR
Power savings
- Maximizing core efficiency
- Lighting up the USC less often is always going to be a saving
- Minimizing bandwidth
- Texturing less is a fantastic way to save power
- Geometry fetch and binning is often more than 10% of per-frame bandwidth
- Saves bandwidth for other parts of your render
Rogue USC
Architectural Building Block
- **Unified Shading Cluster**
- Basic building block of the Rogue architecture
- Laid out in pairs, with a shared TPU
- **1, 0.5 and 0.25 USC designs are special**
- Different balance in the design
- Tend to find their way into non-gaming applications
Rogue USC
Shader Architecture
- 16-wide in hardware
- 32-wide branch granularity
- We run half a task/warp per clock
- Scalar SIMD
- Optimized ALU pipeline
- Mix of F32, F16, integer, floating point specials, logic ops
Rogue USC
Pipeline datapaths
- **Configurable in the IP core**
- F16 paths were sometimes optional, thankfully not any more
- F16 paths performance increased significantly after the first generation
- **Performance in your shader**
- F32 paths are dual FMAD
- F16 paths can do different things per cycle depending on shader
- ISA is available for you to interrogate though, with disassembling compilers
Rogue USC
Scalar
- Scalar ALUs
- Hard to overstate what a benefit this is
- Seems obvious to do, right?
- Vector architectures are just hard to program well
- Scalar isn’t a free lunch
- Makes performance a lot more predictable for you
Rogue USC
Programmable output registers
- The pixel output registers in the ISA are read/write
- One per pixel
- Width depends on IP core
- We expose it programmatically with Pixel Local Storage
- Worked closely with ARM
Evolution
Health Warning: Really Bad Diagrams™
Rogue Evolution
- Architecture has changed quite a bit over time
- Rogue in 2010 still mostly looks like a Rogue today
- Significant evolutionary changes across the architecture
- Lots of it driven by developers before the IP is baked
- Lots of it driven by also analysing your stuff anyway
PowerVR Series6XT Rogue
[Block diagram: Host CPU Interface; Vertex, Pixel and Compute Data Masters; Coarse Grain Scheduler; Core Management Unit; Unified Shading Cluster Array (USC0, USC1 … USCn) with shared Texture Unit; Multi-level Memory Cache Unit (MCU); Tiling and Pixel Co-Processors; 2D Core (TLA); Control and Register Bus; System Memory Interface and Bus. Annotations: extra low power GFLOPS; supports both LDR and HDR ASTC formats.]
PowerVR Series6XT Unified Shading Cluster Array
PowerVR Series6XT USC
[Diagram: a Series6XT USC pipeline contains 2x FP32 ALU cores (2 FLOPs each), 4x FP16 ALU cores (2 FLOPs each) and a special-function unit (1 FLOP); 16 pipelines per USC, 8 clusters.]
Series6 to Series6XT
Lots of lessons learned
- Improved scheduler
- Streamlined ISA
- Improved compute task efficiency
- Completely new F16 datapath
- Improved front-end for sustained geometry performance
- ASTC
PowerVR Series7XT Unified Shading Cluster Array
PowerVR Series7XT USC
[Diagram: a Series7XT USC pipeline contains an FP32 ALU core (2 FLOPs), an FP16 ALU core (2 FLOPs), an optional FP64 ALU core (2 FLOPs) and a special-function unit (1 FLOP); 16 pipelines per USC, 2-16 clusters.]
Series6XT to Series7XT
Adding features and smoothing off rough edges
- Changed how the architecture scales
- Improved USC
- Streamlined ISA
- Features
- Hardware tessellation
- DX11-compliant USC (precision mainly)
- FP64
Into the future
- Exciting changes being worked on across the architecture
- USC
- Front-end
- Scaling
- Stuff you want!
- You can help
- We love feedback about the architecture and how it could best fit what you’re doing
- Don’t be shy
PowerVR Wizard
Ray Tracing Update
What is Ray Tracing?
Ray tracing is the ability for the shader program for one object to be aware of the geometry of other objects.
PowerVR Architecture
PowerVR Series 6XT
[Block diagram: Host CPU Interface; Vertex, Pixel and Compute Data Masters; Coarse Grain Scheduler; Unified Shading Cluster Array (USCs with shared Texture Unit); Tiling and Pixel Coprocessors; Core Management Unit; Multi-level Memory Cache Unit (MCU); 2D Core (TLA); Control and Register Bus; System Memory Interface.]
3 Unique Features of Wizard
- Fixed-function Ray-Box and Ray-Triangle testers
- Coherence-Driven Task-Forming and Scheduling
- Streaming Scene Hierarchy Generator
Fixed-Function Ray-Box and Ray-Triangle Testers
44x Less Area for Box Testing
The Coherency Engine lets us process all these rays at the same time.
Streaming Scene Hierarchy Generator
Just a few use cases
- Hybrid Shadows, Reflections, etc.
- Augmented Reality
- Production-Quality Renders
- Order-Independent Transparency
- Ambient Occlusion
- Asset creation / compression
- Global Illumination
- Physics & Collision Detection
- Virtual Reality
- Lens correction, Ultra-low latency rendering, Lenticular Displays
- A.I. & Line of Sight Calculations
- Rapid photo-quality output
Ray Tracing Requirements
Sustained Ray Throughput at 1080p, 60fps
Technique vs Ray throughput
<table>
<thead>
<tr>
<th>Technique</th>
<th>Ray throughput (GRays/s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Physics / AI / etc.</td>
<td>0</td>
</tr>
<tr>
<td>In-Engine Lightmap baking</td>
<td>0.15</td>
</tr>
<tr>
<td>Hybrid, Reflections</td>
<td>0.5</td>
</tr>
<tr>
<td>Hybrid, Soft Shadows, 1 light</td>
<td>1</td>
</tr>
<tr>
<td>Dynamic AO</td>
<td>1.5</td>
</tr>
<tr>
<td>Interactive GI, (Light Probes)</td>
<td>2.5</td>
</tr>
<tr>
<td>GI, Lens Effects, etc.</td>
<td>3</td>
</tr>
<tr>
<td>Fully ray traced game</td>
<td>3.5</td>
</tr>
</tbody>
</table>
Note: The chart shows the sustained ray throughput for different techniques at 1080p, 60fps.
Session – 11:00-12:00
Enhancing Traditional Rasterization Graphics with Ray Tracing
- James Rumble
- Developer Technology Engineer, Imagination Technologies
- Session includes:
- Ray tracing fundamentals
- Ray tracing pipeline & API key concepts
- Applications for PowerVR raytracing, e.g. efficient soft-shadows in deferred lighting renderers
PowerVR developer tools
PowerVR Tools
*Release schedule*
• **PowerVR Tools release process**
• Minor revision roughly every 6 months
• **Recent/upcoming releases**
• 3.5 SDK (April 2015)
• 4.0 SDK (due October/November 2015)
PVRTrace
What is PVRTrace?
OpenGL ES API tracer
- OpenGL ES 1.x, 2.0 and 3.x recording libraries
- GUI for analysis
Features
- Inspect, analyse and playback captured data
# New shader analysis
[Screenshot: PVRTrace shader analysis view, showing the render history of a selected pixel (488, 729) (the glClear and glDrawElements calls that wrote to it, with the programs involved), a frame summary (52,823 vertices; 4,535,852 total vertex cost; 4,782,532 shaded fragments; 27,113,399 total fragment cost; average overdraw 1.62), per-program, per-shader and per-fragment cycle/cost breakdowns, and a per-draw-call table listing vertices, vertex cost, fragments, fragment cost and total cost for each glDrawElements/glDrawArrays call in the frame.]
PVRShaderEditor
What is PVRShaderEditor?
Shader profiler & editor
- OpenGL ES shader & OpenCL kernel support
Features
- Syntax highlighting
- As you type performance stats
- Shader disassembly
Session: 14:55-15:40
Optimizing Games for PowerVR
- Paul Sobek
- Developer Support – Android & Web Browsers
- Session includes:
- Overview of the new features in the 4.0 SDK developer tools
- Live demos!
PowerVR SDK & Framework
PowerVR SDK
Release schedule
• PowerVR SDK release process
• Minor revision roughly every 6 months
• Recent/upcoming releases
• 3.5 SDK (April 2015)
• 4.0 SDK (due October/November 2015)
PowerVR SDK
What’s new?
• SDK 4.0: Framework
• New, modular framework written from the ground up
• Designed for explicit APIs (e.g. Vulkan). Heavily optimized for OpenGL ES too
• SDK 4.0: Examples
• New artwork
• New examples – more to come in the 4.1 release!
Session: 13:30-14:15
Introducing the new PowerVR SDK Framework
■ Gerry Raptis
■ Leading Developer Technology Engineer, Imagination Technologies
■ Session includes:
■ Intro to the new framework and why we have created it
■ In-depth overview of the features and how we have abstracted explicit APIs
■ Source code examples
Rogue graphics driver
Rogue graphics driver
_Release schedule_
- **DDK (Driver Development Kit) release process**
- Reference driver source code released to PowerVR IP licensees
- Minor revision roughly every 6 months
- Top-tier customers engage early. Drivers in products shortly after official DDK release
Rogue graphics driver
1.4 DDK
- **Release date**
- Q4 2014 (release 1)
- Q1 2015 (release 2)
- **OpenGL ES: Key features (release 1)**
- OpenGL ES 3.1
- Compute shaders, shader storage buffer objects, draw indirect and more
- **OpenGL ES: Key features (release 2)**
- Android Lollipop support
Rogue graphics driver
1.5 DDK
• Release date
• Q2/Q3 2015
• OpenGL ES: Key features
• Android Extension Pack (AEP)
• ASTC, blend equation advanced, GPU shader model 5 and more
• sRGB PVRTC
• Pixel local storage
• 128/256 bits per-pixel on-chip
Rogue graphics driver
1.6 DDK
• **Release date**
• Q4 2015
• **OpenGL ES: Key features**
• Bicubic texture filtering
• Shader group vote
• Polygon offset clamp
• Pixel local storage 2
• Simultaneously write to pixel local storage and a framebuffer attachment
Session: 13:00-13:00
The Golden Rules of Mobile Graphics Performance
- Paul Ly
- Developer Support Engineer, Imagination Technologies
- Session includes:
- Our “Golden Rules” for using OpenGL ES efficiently
Session: 15:50-16:20
Optimizing OpenGL ES Games for Android (Google)
- Shanee Nishry
- Developer Advocate, Google
**Session includes:**
- OpenGL ES performance tips for Android
- OpenGL ES vs. Vulkan
- Source code examples!
Vulkan
About
• **What is Vulkan?**
• New open standard API developed by the Khronos group
• Designed for high-efficiency access to graphics and compute on modern GPUs
• **Key features**
• Minimizes driver overhead and enables multi-threaded GPU command preparation
• Designed for mobile, desktop, console and embedded platforms
• Designed for all GPUs - tile based GPUs are first-class citizens!
• SPIR-V – binary intermediate language for shaders
Vulkan
PowerVR driver status
- **PowerVR Vulkan driver**
- Driver development on-going
- Working with key partners on initial content bring up
- Gnome Horde SIGGRAPH demo
Vulkan
Gnome Horde
Session: 14:25-14:55
Great Looking Graphics on Modern PowerVR GPUs
- Ashley Smith
- Leading Applications Engineer, Demo Engineering
- Session includes:
- Overview of the latest PowerVR marketing demos
- Introduction to the rendering techniques used and how they were optimized for PowerVR
- Initial thoughts on the Vulkan API after writing the Gnome Horde demo
PowerVR Graphics
*Future roadmaps*
- What drives our roadmaps?
- Market analysis
- Customer feedback
- Developer feedback
Questions?
www.imgtec.com/idc
What’s in CX Commerce Cloud?
20A release detail
PURPOSE STATEMENT
Oracle CX Commerce is a cloud-native, fully featured, extensible SaaS commerce solution, delivered in the Oracle Cloud, supporting B2C and B2B models in a single platform. CX Commerce grants greater agility and cost savings, with the extensibility and control required in the ultra-competitive digital commerce market.
- **SIMPLIFY** your technology footprint.
- **INNOVATE** to stay ahead of demands and competitors in a low-risk way.
- **DELIVER** to every customer, every time to increase loyalty and revenue.
DISCLAIMER
**CX Commerce has frequent releases. Please ensure you have the latest documentation**
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. Your access to and use of this confidential material is subject to the terms and conditions of your Oracle software license and service agreement, which has been executed and with which you agree to comply. This document and information contained herein may not be disclosed, copied, reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole discretion of Oracle.
Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without risking significant destabilization of the code.
TABLE OF CONTENTS & PRODUCT FEATURES
- Purpose Statement
- Disclaimer (CX Commerce has frequent releases. Please ensure you have the latest documentation)
- Unified Admin
- Core Platforms and APIs
- Modular, Headless Options
- Responsive Storefront
- Guided Search
- SEO
- Drag and Drop Experience Creation
- Support for Multiple Catalogs
- Catalog Management
- Promotions
- Multi-site
- Buy Online, Pick-Up in Store
- Personalization with *Audiences*
- A/B Testing with *Experiments*
- Product Recommendations
- Loyalty Framework
- Content
- Transactional and Registration Emails
- Social Wishlist and Plug-ins
- Payments and Tax Integrations
- B2C and B2B in a Single Platform
- Agent Console Call Center Application
- Reporting
- Leverage the Oracle Cloud Marketplace to Reduce Integration Cost & Complexity
- Integrations with Oracle Applications
- Purchasing and using CX Commerce
  - What’s included with the Subscription Service?
  - Simple Purchasing
  - Working with CX Commerce
  - Create Beyond the Boundaries of Traditional SaaS
  - Leverage Oracle Cloud Services to Drive Down IT Complexity and Cost
  - See How Commerce Cloud Can Transform Your Business
UNIFIED ADMIN
Oracle CX Commerce unifies all admin tools in a single interface to simplify management and consolidate activities in a single location. CX Commerce features different “studios” for developers and business users, with drag-and-drop UIs that streamline daily tasks. These intuitive admin UIs are responsive and are supported in 35 languages.
- **Design Studio:** Merchants can leverage optimized UIs to easily create and personalize experiences with total creative control via drag-and-drop tools.
- **Merchant Studio:** Offers all of the tools needed to manage and merchandise the site experience for shoppers.
- **Developer Studio:** Provides capabilities for developers to build and manage their configurations and customizations for any device.
CORE PLATFORMS AND APIs
CX Commerce was built from the ground-up with an API-first architecture and a complete REST web services framework for agile, standards-based development and simplified integrations.
- **API-first:** All functionality is accessible through easy-to-use REST web services. Oracle-built, partner-built and customer-built storefront and applications all use the same APIs. And the API documentation is publicly available.
- **Standards-based, flexible:** There is nothing proprietary about working with CX Commerce. CX Commerce leverages standards-based skills, allowing for fast development and scalability. The storefront is built in HTML5, CSS3, JavaScript, and NodeJS. Extensions can be built client-side and server-side depending on the requirement.
- **Simplified integrations:** The API and Webhooks framework allow for faster, cheaper, less complex integrations to Oracle, third party, and homegrown solutions. Additionally, CX Commerce features an adapter for Oracle Integration Cloud Service (ICS) for ‘drag-and-drop’ integrations and data mapping between Oracle and third-party applications. Another benefit is being able to leverage the Oracle Cloud Marketplace to access pre-built extensions and connectors with various technology partners to reduce costs and accelerate integrations.
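As a minimal illustration of working with an API-first platform from code, the sketch below issues a single REST call using Java's built-in HTTP client. The host name, endpoint path and token here are placeholders for the example only; the real endpoints and authentication flow should be taken from the publicly available CX Commerce API documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Illustrative REST call; the URL and token below are placeholders, not documented endpoints. */
public final class CommerceApiExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://my-instance.example.com/ccstore/v1/products")) // placeholder URL
                .header("Authorization", "Bearer <access-token>")                       // placeholder token
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```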
MODULAR, HEADLESS DEPLOYMENT OPTIONS
Because CX Commerce is a flexible application with an API-first architecture and a complete REST web services framework, merchants can implement CX Commerce to best suit their business needs.
- **Fully integrated**: Leverage everything that comes with CX Commerce, fully integrated with a curated storefront. The UI layer and the commerce services layer are connected for a tight integration between commerce tooling and experience management.
- **Headless, non-integrated**: With this approach, the UI layer is separate from the set of backend commerce services, and communication is based on web services. Leverage CX Commerce’s services, and an external UI tool for managing the user experience in this headless, non-integrated model.
- **Headless, integrated**: With this hybrid approach, the UI layer is separate from the backend services but, the UI layer is still integrated with the CX Commerce application, so CX Commerce capabilities like site design tools, personalization, and A/B Testing, can be leveraged.
RESPONSIVE STOREFRONT
A customizable, out-of-the-box responsive storefront helps merchants get live quickly with fully featured experiences. Storefronts are supported in 40 languages and 60 global currencies. The storefront, which is easily configured by business users, can be customized and extended to meet branding and experience needs.
The storefront has pre-integrated features providing customers extra value while accelerating time to market. Sample storefront features included with the subscription include:
- Catalog, pricing, inventory management
- Promotions
- Social commerce
- Integrated tax solutions
- Responsive & adaptive design
- Content Management
- Transactional emails
- Multisite
- Drag & drop experience management
- Image scaling
- PCI & GDPR compliant
- Personalization
- Guided search & navigation
- A/B testing
- Payment gateway integrations
- B2B-specific support
- SEO management
- Product recommendations
- Edge Caching (CDN)
- Server-side extension framework
GUIDED SEARCH
CX Commerce features leading Search and Guided Navigation capabilities, with streamlined admin tools for more efficient and scalable management of search within the shopping experience. CX Commerce includes:
- Pre-integrated, intelligent storefront search and navigation features, like the search type-ahead mega-menu, did you mean?, spell correction, auto-correct, keyword redirects, thesaurus, and more.
- Access to many back-end configuration options via Search Application Configuration API, enabling customization for advanced search functionality, Boost and Bury, etc.
- Order search facets by statistical significance, or control the simplicity and granularity of the navigation menu for different collections.
- International language support.
- API support to control the order of navigation facets.
SEO
SEO is critical to any commerce program and CX Commerce simplifies how a business user can optimize their site(s) for SEO gains. Features include:
- Delivering a full HTTPS site, which Google favors.
- Integrated Edge Caching (CDN) for faster load times, also favored by Google.
- Mobile-friendly support for both responsive and adaptive models, also favored by Google.
- Streamlined ability to customize, optimize and configure URLs, tags, and metadata to impact search ranking.
- Auto-generation of a sitemap.
- Pre-render based snapshot generation service including the ability to configure web crawlers to receive snapshot.
- Ability to manage robots.txt.
- API 1:1 301 / 302 redirects allowing merchants to create 1:1 redirects per site or globally.
- Automatic application of canonical tags and rel attribute.
- Supports Open Graph social meta tags, and schema.org microdata.
DRAG AND DROP EXPERIENCE CREATION
Oracle CX Commerce Design Studio features UIs to create experiences with a full drag-and-drop interface easily. A layout and widget framework delivers dynamic experiences based on unique needs. Widgets are modular pieces of functionality with business rules that fit into layouts. CX Commerce ships with 20+ out-of-the-box page layouts and 70+ prebuilt widgets and elements. Merchants can also create their own templates, layouts and reusable widgets.
Functionality includes:
- Out-of-the-box libraries for widgets (70+), page layouts (20+), and themes.
- Ability to create and re-use new widgets, layouts, and Storefront themes.
- Ability to drag-and-drop widgets onto layouts and resize / organize them.
- Admin support for role-specific restrictions, role-specific access to Storefront layouts and widgets.
- Schedule full publishing events, and selectively publishing.
- Widget configuration including the ability to edit HTML, JavaScript and CSS.
- Business-user-friendly configurations for widget behavior.
- Associate Page Layouts to Products, Collections, and Product Types.
- Layout management for different viewports.
SUPPORT FOR MULTIPLE CATALOGS
Manage independent catalogs for brand sites or other sites that have unique catalog hierarchies. Includes support for truly unassigned products and collections, linking collections across multiple catalogs, and allowing a collection to exist multiple times in the same catalog hierarchy.
- With support for multiple catalogs, merchants can now perform a catalog-specific export, as well as manage each separate catalog within the Admin UI.
- The business user can also now edit any catalog directly, including adding unique collections and products to those catalogs.
- The enhanced multi-catalog model is useful when a merchant has unique catalog structures that need to be merchandised independently.
- It is also beneficial for B2C or B2B customers that have smaller catalogs, but that need uniqueness in the catalog contents.
CATALOG MANAGEMENT
Oracle CX Commerce delivers robust catalog management capabilities that give merchants total control over their products, pricing, and inventory. Business users have full control over their products with an intuitive UI and can simplify SKU management, associated media, custom properties, and search. With CX Commerce, business users can:
- Import and export catalog data.
- Use embedded search to find what you need in the catalog easily.
- Curate catalog and organize products into Collections (categories).
- Manage product types, custom attributes, variants, child SKU definition.
- Use product properties to drive Collections and search faceting.
- Create SKU properties at the base or custom product type level.
- Create SKU bundles.
- Easily manage inventory, support for location-based inventory.
- Support for Add-on Products: additional features shoppers can select and add to cart (i.e., monogramming, product customization, gift wrap).
- Support for Pre-Order and Back Order.
- Manage list, sale, and VAT-inclusive pricing.
- Support to leverage external pricing, if desired.
- Support to leverage externally priced shipping methods.
- Media Library to manage Collection, Product, and General media assets; upload and assign product images to support different image sizes.
- Select a subset of items for publishing to production.
- Include dynamic properties on Collections.
- Search for a SKU within inventory.
Additional B2B-specific catalog features include support for customer-specific, account-based catalogs, pricing and orders (B2B page).
**PROMOTIONS**
Oracle CX Commerce has out-of-the-box promotion templates and a streamlined UI for simplified setup and management. In addition to out-of-the-box templates, an open promotions API framework allows merchants to create custom promotions of their choice. Out-of-the-box promotions templates include:
- Order, item, shipping levels.
- Get item discount.
- Spend Y in X, get item discount.
- Buy One, Get One.
- Buy X, get discount.
- Buy X, get Y.
- Buy X, get free shipping.
- Spend Y in X, get discount.
- Spend Y in X, get shipping discount.
- Tiered order discounts.
- Batch coupons.
- Gift with purchase.
- Shipping discounts on shipping groups.
- Create a discount by catalog property.
- Discount by SKU.
- Support for tiered offers.
- Support for stacking rules.
- Support for single-use coupons.
- Support for multiple promotions per coupon.
- Support for promotions by audience (CX Audiences)
- Support promotions from external source.
- Support promotions by credit card type.
- Open API to create custom promotions.
- Ability to clone promotions.
- Ability to assign promotions to folders.
- Ability to add promotional upsell message.
**MULTI-SITE**
Deliver multiple websites on the same scalable infrastructure with a single subscription of CX Commerce. CX Commerce multisite enables merchants to quickly add country-specific, branded and microsites - with the flexibility to make each site consistent, or unique. With a single admin tool, central (or distributed) teams can deliver sites that engage their target audience, without starting from scratch.
- Share or customize catalogs, pricing, content, layouts, settings, and promotions.
- Localize languages, shipping methods, and payments by site.
- Manage personalization, search, and SEO strategies.
- Preview by site.
- Filter reports by site.
- Manage shopper settings by site.
- B2B multisite account management.
- Manage global email settings by / across sites.
- Manage extensions by site.
- Storefront supports 40 languages and 60+ currencies; UI supports 35 languages.
- Agent Console call center support for multiple sites (page 16).
**BUY ONLINE, PICK-UP IN STORE**
We now support Buy Online, Pick-Up in Store (BOPIS) via API and out-of-the-box Storefront widgets. This allows a shopper to order online and choose a store location to pick up the order from. Payment can be made online or in-store. An "online-only" flag for products and SKUs differentiates items that can be picked up in-store from those that cannot. API support is also included to associate sites with lists of store locations, as well as to distinguish store locations that are inventory locations from ones that are pickup locations.
This feature provides a mechanism to capture BOPIS-specific data, including pickup location, inventory location, contact details of the person picking up the order, available pickup dates and time, and shopper preferred pickup dates and times. Admin API and webhooks have been updated to support all BOPIS data.
**PERSONALIZATION WITH AUDIENCES**
CX Commerce introduces the concept of Audiences – a new way to manage and scale personalization in a user and site-friendly way. Personalization can be used for both registered and anonymous users.
Audiences includes:
- Ability to build audiences using standard and custom shopper profile attributes. Standard samples include:
  - Spend: lifetime spend, lifetime average order value, last purchase amount.
  - Visitor: visitor birthdate or visitor type.
  - Frequency: number of orders, registration date, first purchase date, last visit date.
- Ability to use custom query parameters to trigger audiences.
- Support for rule building based on standard or custom date properties.
- Ability to use “slots” to show different content to shoppers in different audiences.
- Ability to create promotions by audience.
- Integrated with Experiments to allow for A/B testing by audience.
- Manage sizes of audiences.
- Get reports by audience.
- Support for custom account properties that allows merchants to show tailored content to different B2B accounts.
- Support for Geolocation with Audiences so you can personalize around specific geolocations and regions
- Ability to personalize by UTM Query Parameters
- Ability to personalize by Landing Page and Referring Site
- Ability to personalize using AddThis Interests
- Ability to target promotions to specific Audience segments
- Ability to preview by Audience
**A/B TESTING WITH EXPERIMENTS**
CX Commerce delivers integrated *Experiments* A/B testing for site optimization while reducing spend and eliminating the need for integration. Native A/B testing gives merchants greater insight, more control over what can be tested, and the ability to update sites to focus on high-value optimization immediately.
At a high-level, *Experiments*:
- Grants flexibility to support simple and advanced page modifications.
- Can be associated with layouts, widgets, collections, and product types.
- Updates results dynamically to show the impact of in-progress experiments.
- Allows business users to leverage out-of-the box or set up custom goals.
- Can A/B test on cart and checkout flows.
- Integrated reporting gives merchants visibility to core KPIs, including site metrics and monetary metrics (measured for each currency on the website).
- Allows merchants to schedule tests in advance and allocate traffic percentages for each variation.
- Enables business users to experiment on variations of the same widget, or even compare different widgets.
- Is integrated with Audiences personalization for A/B testing capabilities by Audience.
PRODUCT RECOMMENDATIONS
CX Commerce has embedded product recommendations to expose more products via tailored suggestions. Merchants can automatically deliver contextually relevant upsells and cross-sells to promote more of their catalog and to drive higher order values. Because recommendations come out-of-the-box and can be placed with page layouts using widgets, the cost of having a third party product recommendations engine is eliminated, and complexity of integration and management is greatly reduced. With CX Commerce products recommendations, business users can:
- Deliver dynamic or curated recommendations for suggested or related products.
- Surface-related upsells and cross-sells to increase order values.
- Surface most-recently-viewed.
- Deliver in-session or cross-session (multiple sessions) recommendations.
- Enable in-category restrictions.
- Include recommendations in Abandoned Order and New Account emails.
LOYALTY FRAMEWORK
To boost engagement and customer lifetime value, CX Commerce features a loyalty framework that integrates with an Oracle or external loyalty program to support enrollment, accrual, and redemption. Merchants can:
- Leverage integration with Oracle, or a third-party.
- Configure programs against a site.
- Leverage out-of-the-box widget to pay with points (points as currency).
- Create separate tax settings for points.
- Set up the conversion rate for converting currency to points.
- Set up a secondary currency for converting taxes/shipping to points.
- Set up a new payment method to handle loyalty points.
- Support for zero-value orders (i.e., special coupons, samples, free merchandise).
- Leverage web-hooks and APIs to send the loyalty details against profile and order details to external systems.
CONTENT
CX Commerce has native content creation and management capabilities to create rich, non-catalog Article pages, and integration with Oracle CX Content to streamline content creation and publishing within the commerce experience.
- Easily create non-catalog content pages with drag-and-drop tools.
- CX Commerce gets automatic alerts of new assets from CX Content.
- Manage product and editorial content within a single layout.
- Schedule full publishing events, and selectively publish.
- Vary content presented to Audiences, or by date.
- If desired, leverage external content creation systems and repositories via API, or integration with Oracle CX Content.
TRANSACTIONAL AND REGISTRATION EMAILS
CX Commerce can be configured to send emails to shoppers based on site-related activities. Business users can control the branding and timing of email communications. Sample transactional emails include:
- Shopper profile registration email with secure link.
- Thank you for your order / order completed email.
- Idle cart email reminder / abandoned cart.
- Back in stock notification.
- Return request.
- Refund issued.
- Scheduled order.
- B2B-specific emails for account or order updates.
Additional benefits of transactional emails include the ability to:
- Manage global email settings across sites (if leveraging multisite).
- Support marketing orchestration emails via native integration with Oracle CX Marketing or via Oracle Marketplace connectors to non-Oracle email systems.
SOCIAL WISHLIST AND PLUG-INS
Empower shoppers to share your products across their social networks with out-of-the-box plug-ins and shareable wish lists.
Merchants can:
- Allow shoppers to create, manage and share any product wish list to email or social channels such as Facebook, Twitter, and Pinterest.
- Allow wish lists to be set to ‘Private,’ ‘Shared,’ or ‘Group’ mode to allow for collaborative shopping.
- Allow users to create wish lists per site (if leveraging multi-site).
- Post comments on any product in a wish list.
- Allow shoppers to have unlimited wish lists.
- Support sharing products from any product detail page on Facebook, Twitter, Pinterest or email.
- Leverage the Social Metatag Widget, which includes Open Graph and schema.org microdata support to enable better discovery of site, brand, and products.
PAYMENTS AND TAX INTEGRATIONS
CX Commerce reduces the complexity of integrating to payment gateways. CX Commerce has out-of-the-box integrations that only require entering credentials to get started and enable the merchant to configure custom payment types and tax processors of their choice. CX Commerce payment features include:
- Out-of-the-box integrations with PayPal, Cybersource, Chase, and PayU LATAM.
- Out-of-the-box integrations with tax providers Avalara and Vertex.
- An open payment and tax framework to integrate with payment providers of choice.
- Connectors with global payments partners available in the Oracle Cloud Marketplace (page 18).
- Pay by invoice.
- Pay by gift card.
- Pay with points.
- Pay with points and currency.
- Ability to support deferred payments (i.e., cash on delivery).
- Ability to support split payments.
- External tax Webhook.
- Support for VAT-inclusive pricing.
- Support for tax exemption management.
- Tax Included/Excluded by Price Group.
- Support for zero-value orders (i.e., coupons, samples, free merchandise).
- Support for stored credit cards.
- Support for integrated payment fraud solutions.
- Support for Payment Services Directive 2 (PSD2).
B2C AND B2B IN A SINGLE PLATFORM
CX Commerce simplifies how companies with multiple business models manage their sites and operations. In addition to delivering superior consumer shopping experiences, CX Commerce is designed to meet the complex needs of organizations selling to other businesses. It is the only enterprise SaaS commerce solution on the market that can support B2C and B2B selling natively, with a single platform and single UI.
Sample B2B-specific functionality includes:
- Account management: contacts, contracts, roles, and permissions.
- Account-specific catalogs & price groups.
- Volume-based pricing.
- Custom payment terms and pay by invoice.
- Purchase lists.
- Recurring (scheduled) orders.
- Support for Punch-out (integrating to the buyer’s procurement system).
- Support for SSO (create contact and account in external system).
- Support for ‘Quick Order’.
- Enhanced search for accounts and contacts.
- Support for account hierarchies and account hierarchy reporting.
- Access Control for account (Storefront) and users (Buyer and Admin).
- Delegated administration.
- Support for custom order approvals.
- B2B-specific email communications for orders and account updates.
- Support for B2B in the ‘Agent Console Call Center Application.'
AGENT CONSOLE CALL CENTER APPLICATION
(10 seats included with subscription)
Commerce Cloud features an integrated call center application that enables service representatives to deliver informed, consistent experiences to shoppers with a complete view of cross-channel behavior and history. Customer Service Representatives (CSRs) can use Agent Console to deliver superior customer experiences – and uncover additional sales opportunities.
Agent Console capabilities include:
- Access to customer shopping carts and profiles.
- Create, edit, and delete orders, and initiate returns and refunds.
- Assist with completion of orders initiated in other channels.
- Initiate and complete new orders.
- Support for coupon usage, price groups, tiered discounts, custom order properties, or custom shopper profile properties.
- Ability to reset customer passwords.
- This interface can be customized to better fit your team’s workflow or branding needs.
REPORTING
Commerce Cloud has integrated near real-time reporting dashboards to help you continually monitor and measure your site performance and to put insight into action.
Sample capabilities include:
- Near real-time reporting based on core commerce KPIs.
- Sales reports: By time, products, and other attributes.
- Site traffic reports: By key traffic indicators such as page views, visits, or conversions.
- Reports by audience.
- Embedded Experiments A/B testing reporting.
- Ability to export reports for further analysis.
- Account support.
- Ability to filter reports by site (if leveraging multi-site).
ORACLE CLOUD MARKETPLACE: REDUCE INTEGRATION COST & COMPLEXITY
The Oracle Cloud Marketplace allows merchants to access pre-built extensions and connectors with these technology partners for use within their storefront:
- Payments: CyberSource, Chase Paymentech, PayPal, PayU, AliPay/WePay, SnapPay, Stripe, Integra Payments for Tokenization
- Tax: Avalara, Vertex
- Ratings and Reviews: Verified Reviews, PowerReviews
- Marketplace and Channel Management: Mirakl, GoDataFeed, Yami
- Visual Search and Merchandising: Snaptech, Macty, Duel
- Social and Chatbot: AddShoppers, ChatCom, Roobot, Annex Cloud
- Performance: Yottaa
- Order Management and Logistics: Jagged Peak, Freestyle, Intelipost
- Marketing: Bluecore, SmarterHQ, Infinite Analytics
A full list is available at [https://cloudmarketplace.oracle.com/marketplace/product/commerce](https://cloudmarketplace.oracle.com/marketplace/product/commerce).
INTEGRATIONS WITH ORACLE APPLICATIONS
Oracle CX Commerce has out-of-the-box connectors and integrations with other Oracle applications to reduce cost and time to market, while improving the customer experience.
These include:
- **Oracle Integration Cloud (OIC):** OIC is a drag-and-drop environment for mapping and integrating multiple Oracle and third-party applications. OIC can help merchants dramatically reduce the time and cost of integrating applications and mapping / passing data.
- **Oracle CX Marketing – Responsys:** Connect commerce with orchestrated marketing communications to send abandoned cart emails, make personalized suggestions, and complete user profile data.
- **Oracle CPQ:** Configure, price, and quote engine for custom product configuration.
- **Oracle Retail Cloud Order Management System:** Leverage customer information more effectively throughout the purchasing transaction and as part of marketing, merchandising, and customer service efforts.
- **Oracle CX Content:** Oracle Content Cloud allows for enhanced content collaboration and streamlining of content creation and publication for commerce.
PURCHASING AND USING CX COMMERCE
What’s included with the Subscription Service?
- **Modern SaaS, Hosted in the Oracle Cloud:** Oracle deploys all CX Commerce sites in the Oracle Cloud. Oracle manages the service, guarantees SLA and uptime, and offers scaling for peak periods.
- **Security and compliance:** Oracle adheres to and manages all compliance standards including PCI, and GDPR requirements.
- **Access to three environments:** Subscription includes three environments - production, development and staging – with preview capability.
- **Regular, automatic push upgrades:** CX Commerce pushes automatic updates on a regular cadence to customer’s pre-production environments. Customers can access the most modern technology faster and don’t need to invest heavily in order to deliver innovation, or manage upgrades.
- **Simplified integrations to other cloud and on-premise technologies:** CX Commerce’s API and Webhook architecture reduces the time, cost, and complexity of integrations to other Oracle, third-party, or homegrown solutions critical to our customers’ businesses. Additionally, CX Commerce customers can leverage the Oracle Cloud Marketplace to access prebuilt extensions and connectors with various technology partners to reduce costs and accelerate integrations.
Simple Purchasing
- **Predictable, transparent pricing:** Fees can be based on annual page views (consumption model) or revenue share, if desired. The ‘Page View’ model sizes customers accordingly, helping them know exactly what they will pay within the page count; thus there are no hidden fees or minimums.
- **Flexible subscription model:** CX Commerce is sold as a subscription service, which moves many merchants from a CapEx to an OpEx model. Fees can be paid on a monthly, quarterly, or annual basis, per the contract.
- **Service model:** Oracle hosts and handles the infrastructure to eliminate the need for customers to purchase and manage additional systems (e.g., database, app server, hardware, software, etc.). Oracle offers a variety of cloud-based services to assist with any virtual infrastructure, integration, or platform needs.
Working with CX Commerce
- **Business users:** A non-technical business user can easily manage many daily tasks. Drag-and-drop tools and other intuitive UIs make traditionally complex IT tasks streamlined and accessible to the business user.
- **Developers:** There is nothing proprietary about working with Oracle CX Commerce. CX Commerce leverages standards-based languages: HTML5, CSS3, JavaScript, and NodeJS for client-side and server-side extensions. Functionality can be further extended while maintaining upgradability in the Oracle Cloud. This makes finding developers easier and more affordable, and developers can build and extend experiences with modern, scalable technology.
Create Beyond the Boundaries of Traditional SaaS
- **Customize** the look and feel of your site(s) without vendor boundaries using HTML5, CSS3, and JavaScript.
- **Extend** functionality with a unique, modern server-side extension model using NodeJS, and leverage other components of the Oracle Cloud without impacting upgradability.
- **Maintain upgradability and compatibility** with client-side extension and customization models that allow merchants to take new push upgrade releases without disrupting previous customizations or the site(s).
**Leverage Oracle Cloud Services to Drive Down IT Complexity and Cost**
Oracle Cloud helps organizations drive innovation and business transformation by increasing business agility, lowering costs, and reducing IT complexity. The Oracle Cloud allows merchants to meet their goals faster, providing a platform for fast development and innovation, while substantially reducing infrastructure footprint and simplifying integrations. Some of the Oracle Cloud Services that complement CX Commerce are:
- **Oracle Integration Cloud Services (ICS):** Maximize the value of your investments in SaaS and on-premises applications through a simple and powerful integration platform in the cloud that enables simplified data passing.
- **Oracle Data as a Service (DaaS):** Leverage a myriad of data sources to connect you to the right customers, making every interaction personal and effective.
- **Oracle Infrastructure as a Service (IaaS):** Provides a set of core capabilities, such as elastic compute, storage, networking, bare metal, migration tools, and containers, to help you quickly increase business value and performance.
- **Oracle Platform as a Service (PaaS):** Develop, test, and deploy the next generation of applications in the cloud in a secure, cost-effective manner that speeds time to market and increases competitive advantage.
- **Oracle Developer Cloud Service:** Allows developers, IT professionals, and business leaders to quickly develop, test, and deploy the next generation of extensions and custom applications in any language in a secure, cost-effective manner. Development can be done in popular IDEs using Oracle Cloud environments provisioned in seconds.
- **Oracle Mobile Cloud Service:** Makes mobile app development and integration quick, secure, and easy to deploy.
**SEE HOW CX COMMERCE CAN TRANSFORM YOUR BUSINESS**
Website: [oracle.com/commerce](http://oracle.com/commerce)
Control-Flow Analysis
Chapter 8, Section 8.4
Chapter 9, Section 9.6
Phases of the Compilation Process
Front end
- Lexical analysis
- Syntax analysis
- Semantic analysis (e.g., type checking)
- Generation of three-address code
Middle/Back end
- Code optimization: machine-independent optimization of three-address code
- Code generation: target code (e.g., assembly)
Control-Flow Graphs
Control-flow graph (CFG) for a procedure/method
- A node is a basic block: a single-entry-single-exit sequence of three-address instructions
- An edge represents the potential flow of control from one basic block to another
Uses of a control-flow graph
- Inside a basic block: local code optimizations; done as part of the code generation phase (e.g., Section 8.5)
- Across basic blocks: global code optimizations; done as part of the code optimization phase
- Other aspects of code generation: e.g., global register allocation
Control-Flow Analysis
Part 1: Constructing a CFG
Part 2: Finding dominators and post-dominators
Part 3: Finding loops in a CFG
- What exactly is a loop? Cannot simply say “whatever CFG subgraph is generated by while, do-while, and for statements” – need a general graph-theoretic definition
Part 4: Finding control dependences in a CFG
- Needed for optimizations: cannot violate dependences
- Needed for analyses in software tools: e.g., slicing
Part 1: Constructing a CFG
Nodes: basic blocks; edges: possible control flow
**Basic block**: maximal sequence of consecutive three-address instructions such that
– The flow of control can enter only through the first instruction (i.e., no jumps to the middle of the block)
– Can exit only at the last instruction
Advantages of using basic blocks
– Reduces the cost and complexity of compile-time analysis
– Intra-BB optimizations are relatively easy
CFG Construction
Given: the entire sequence of instructions
First, find the leaders (starting instructions of all basic blocks)
– The first instruction
– The target of any conditional/unconditional jump
– Any instruction that immediately follows a conditional or unconditional jump
Next, find the basic blocks: for each leader, its basic block contains itself and all instructions up to (but not including) the next leader
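As an illustration (not from the slides), here is a minimal Python sketch of this construction; it assumes instructions are objects with two hypothetical attributes, `is_jump` and `jump_target`:

```python
def find_leaders(instrs):
    """Return the set of indices at which a basic block starts.

    `instrs` is a list of hypothetical instruction objects with two assumed
    attributes: `is_jump` (True for conditional/unconditional jumps) and
    `jump_target` (index of the jump target, or None).
    """
    leaders = {0}  # rule 1: the first instruction is a leader
    for i, ins in enumerate(instrs):
        if ins.is_jump:
            if ins.jump_target is not None:
                leaders.add(ins.jump_target)   # rule 2: any jump target
            if i + 1 < len(instrs):
                leaders.add(i + 1)             # rule 3: instruction right after a jump
    return leaders


def basic_blocks(instrs):
    """Each block runs from a leader up to (but not including) the next leader."""
    leaders = sorted(find_leaders(instrs))
    return [
        instrs[start:(leaders[k + 1] if k + 1 < len(leaders) else len(instrs))]
        for k, start in enumerate(leaders)
    ]
```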
Note: this example sets array elements \(a[i][j]\) to 0.0, for \(1 \leq i,j \leq 10\) (instructions 1-11). It then sets \(a[i][i]\) to 1.0, for \(1 \leq i \leq 10\) (instructions 12-17). The array accesses in instructions 7 and 15 are done with offsets computed as described in Section 6.4.3, assuming row-major order, 8-byte array elements, and array indexing that starts from 1, not from 0.
1. \(i = 1\)
2. \(j = 1\)
3. \(t1 = 10 \times i\)
4. \(t2 = t1 + j\)
5. \(t3 = 8 \times t2\)
6. \(t4 = t3 - 88\)
7. \(a[t4] = 0.0\)
8. \(j = j + 1\)
9. if \(j \leq 10\) goto 3
10. \(i = i + 1\)
11. if \(i \leq 10\) goto 2
12. \(i = 1\)
13. \(t5 = i - 1\)
14. \(t6 = 88 \times t5\)
15. \(a[t6] = 1.0\)
16. \(i = i + 1\)
17. if \(i \leq 10\) goto 13
Artificial ENTRY and EXIT nodes are often added for convenience.
There is an edge from $B_p$ to $B_q$ if it is possible for the first instruction of $B_q$ to be executed immediately after the last instruction of $B_p$. This is conservative: e.g., if $(3.14 > 2.78)$ still generates two edges.
Single Exit Node
Single-exit CFG
– If there are multiple exits (e.g., multiple return statements), redirect them to the artificial EXIT node
– Use an artificial compiler-created return variable `ret`
– `return expr;` becomes `ret = expr; goto exit;`
It gets ugly with exceptions
– Java: `throw`, and uncaught exceptions (e.g., a null pointer exception, or an exception thrown by a callee)
– C: `setjmp` and `longjmp`
– We will ignore these
Common assumption
– Every node is reachable from the entry node
– The exit node is reachable from every node
• Not always true: e.g., a server thread could be `while (true) { ... }`
– A number of techniques depend on having a single exit and on the reachability assumption
Practical Considerations
The usual data structures for graphs can be used
- The graphs are sparse (i.e., have relatively few edges), so an adjacency list representation is the usual choice
- Number of edges is at most $2 \times$ number of nodes
Nodes are basic blocks; edges are between basic blocks, not between instructions
- Inside each node, some additional data structures for the sequence of instructions in the block (e.g., a linked list of instructions)
- Often convenient to maintain both a list of successors (i.e., outgoing edges) and a list of predecessors (i.e., incoming edges) for each basic block
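As a concrete illustration (not part of the slides), a minimal adjacency-list representation might look like this; the class and field names are our own:

```python
class BasicBlock:
    """A single-entry-single-exit sequence of three-address instructions."""

    def __init__(self, instructions):
        self.instructions = list(instructions)  # e.g., a list of instruction strings
        self.successors = []    # outgoing edges: other BasicBlock objects
        self.predecessors = []  # incoming edges


def add_edge(src, dst):
    """Add a CFG edge, keeping successor and predecessor lists in sync."""
    src.successors.append(dst)
    dst.predecessors.append(src)
```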
Part 2: Dominance
• A CFG node $d$ dominates another node $n$ if every path from ENTRY to $n$ goes through $d$
– Implicit assumption: every node is reachable from ENTRY (i.e., there is no dead code)
– A dominance relation $\text{dom} \subseteq \text{Nodes} \times \text{Nodes}$: $d \text{ dom } n$
– The relation is trivially reflexive: $d \text{ dom } d$
• Node $m$ is the immediate dominator of $n$ if
– $m \neq n$
– $m \text{ dom } n$
– For any $d \neq n$ such that $d \text{ dom } n$, we have $d \text{ dom } m$
• Every node has a unique immediate dominator
– Except ENTRY, which is dominated only by itself
ENTRY $dom\ n$ for any $n$
1 $dom\ n$ for any $n$ except ENTRY
2 does not dominate any other node
3 $dom\ 3, 4, 5, 6, 7, 8, 9, 10, \text{EXIT}$
4 $dom\ 4, 5, 6, 7, 8, 9, 10, \text{EXIT}$
5 does not dominate any other node
6 does not dominate any other node
7 $dom\ 7, 8, 9, 10, \text{EXIT}$
8 $dom\ 8, 9, 10, \text{EXIT}$
9 does not dominate any other node
10 $dom\ 10, \text{EXIT}$
Immediate dominators:
1 $\rightarrow$ ENTRY 2 $\rightarrow$ 1
3 $\rightarrow$ 1 4 $\rightarrow$ 3
5 $\rightarrow$ 4 6 $\rightarrow$ 4
7 $\rightarrow$ 4 8 $\rightarrow$ 7
9 $\rightarrow$ 8 10 $\rightarrow$ 8
EXIT $\rightarrow$ 10
A Few Observations
• Dominance is a transitive relation: $a \ dom \ b$ and $b \ dom \ c$ means $a \ dom \ c$
• Dominance is an anti-symmetric relation: $a \ dom \ b$ and $b \ dom \ a$ means that $a$ and $b$ must be the same
– Reflexive, anti-symmetric, transitive: partial order
• If $a$ and $b$ are two dominators of some $n$, either $a \ dom \ b$ or $b \ dom \ a$
– Therefore, $dom$ is a total order for $n$’s dominator set
– Corollary: for any acyclic path from ENTRY to $n$, all dominators of $n$ appear along the path, always in the same order; the last one is the immediate dominator
Dominator Tree
The parent of $n$ is its immediate dominator
The path from $n$ to the root contains all and only dominators of $n$
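For completeness, here is a minimal sketch (not covered on the slides) of the classic iterative fixed-point computation of dominator sets; production compilers typically use faster algorithms such as Lengauer–Tarjan. It assumes `preds` maps every node, including ENTRY, to its list of predecessors, and that every node is reachable from ENTRY.

```python
def dominators(preds, entry):
    """Iteratively compute dom[n]: the set of nodes that dominate n.

    Fixed point of: dom[entry] = {entry};
    dom[n] = {n} | intersection of dom[p] over all predecessors p of n.
    """
    nodes = set(preds)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates n"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            ps = [dom[p] for p in preds[n]]
            new = {n} | (set.intersection(*ps) if ps else set())
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom
```

The immediate dominator of $n$ can then be read off as the element of $dom[n] \setminus \{n\}$ that is dominated by every other dominator of $n$.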
Post-Dominance
• A CFG node \( d \) post-dominates another node \( n \) if every path from \( n \) to EXIT goes through \( d \)
– Implicit assumption: EXIT is reachable from every node
– A relation \( pdom \subseteq \text{Nodes} \times \text{Nodes}: d pdom n \)
– The relation is trivially reflexive: \( d pdom d \)
• Node \( m \) is the immediate post-dominator of \( n \) if
– \( m \neq n \); \( m \text{ pdom } n \); for any \( d \neq n \) such that \( d \text{ pdom } n \), we have \( d \text{ pdom } m \)
– Every \( n \) has a unique immediate post-dominator
• Post-dominance on a CFG is equivalent to dominance on the reverse CFG (all edges reversed)
• Post-dominator tree: the parent of \( n \) is its immediate post-dominator; root is EXIT
ENTRY does not post-dominate any other $n$
1 $pdom$ ENTRY, 1, 9
2 does not post-dominate any other $n$
3 $pdom$ ENTRY, 1, 2, 3, 9
4 $pdom$ ENTRY, 1, 2, 3, 4, 9
5 does not post-dominate any other $n$
6 does not post-dominate any other $n$
7 $pdom$ ENTRY, 1, 2, 3, 4, 5, 6, 7, 9
8 $pdom$ ENTRY, 1, 2, 3, 4, 5, 6, 7, 8, 9
9 does not post-dominate any other $n$
10 $pdom$ ENTRY, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
EXIT $pdom$ n for any n
Immediate post-dominators:
ENTRY $\rightarrow$ 1 1 $\rightarrow$ 3
2 $\rightarrow$ 3 3 $\rightarrow$ 4
4 $\rightarrow$ 7 5 $\rightarrow$ 7
6 $\rightarrow$ 7 7 $\rightarrow$ 8
8 $\rightarrow$ 10 9 $\rightarrow$ 1
10 $\rightarrow$ EXIT
The path from $n$ to the root contains all and only post-dominators of $n$.
Constructing the post-dominator tree: use any algorithm for constructing the dominator tree; just “pretend” that the edges are reversed.
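A minimal sketch of that idea, reusing the `dominators` helper from the earlier sketch; `succs` maps every node to its list of successors:

```python
def post_dominators(succs, exit_node):
    """Post-dominators of a CFG = dominators of the reverse CFG rooted at EXIT.

    Reversing each edge turns successor lists into predecessor lists, and
    EXIT plays the role of ENTRY.
    """
    reversed_preds = {n: [] for n in succs}
    for n, outs in succs.items():
        for m in outs:
            reversed_preds.setdefault(m, []).append(n)
    return dominators(reversed_preds, exit_node)
```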
Part 3: Loops in CFGs
- **Cycle**: sequence of edges that starts and ends at the same node
- Example:
- **Strongly-connected (induced) subgraph**: each node in the subgraph is reachable from every other node in the subgraph
- Example:
- **Loop**: informally, a strongly-connected subgraph with a single entry point
- Not a loop:
Back Edges and Natural Loops
• Back edge: a CFG edge \((n,h)\) where \(h\) dominates \(n\)
• Natural loop for a back edge \((n,h)\)
– The set of all nodes \(m\) that can reach node \(n\) without going through node \(h\) (trivially, this set includes \(h\))
– Easy to see that \(h\) dominates all such nodes \(m\)
– Node \(h\) is the header of the natural loop
• Trivial algorithm to find the natural loop of \((n,h)\)
– Mark \(h\) as visited
– Perform depth-first search (or breadth-first) starting from \(n\), but follow the CFG edges in reverse direction
– All and only visited nodes are in the natural loop
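A minimal sketch of this trivial algorithm, assuming `preds` maps each node to its list of predecessors:

```python
def natural_loop(preds, n, h):
    """Natural loop of back edge (n, h): h plus every node that can reach n
    without going through h."""
    loop = {h}          # mark h as visited so the search never crosses it
    stack = [n]
    while stack:
        m = stack.pop()
        if m not in loop:
            loop.add(m)
            stack.extend(preds.get(m, []))  # follow CFG edges in reverse
    return loop
```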
Immediate dominators:
1 → ENTRY 2 → 1 3 → 1
4 → 3 5 → 4 6 → 4
7 → 4 8 → 7 9 → 8
10 → 8 EXIT → 10
Back edges: 4 → 3, 7 → 4, 8 → 3, 9 → 1, 10 → 7
Loop(10 → 7) = { 7, 8, 10 }
Loop(7 → 4) = { 4, 5, 6, 7, 8, 10 }
Note: Loop(10 → 7) ⊆ Loop(7 → 4)
Loop(4 → 3) = { 3, 4, 5, 6, 7, 8, 10 }
Note: Loop(7 → 4) ⊆ Loop(4 → 3)
Loop(8 → 3) = { 3, 4, 5, 6, 7, 8, 10 }
Note: Loop(8 → 3) = Loop(4 → 3)
Loop(9 → 1) = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }
Note: Loop(4 → 3) ⊆ Loop(9 → 1)
Loops in the CFG
• Find all back edges; each target $h$ of at least one back edge defines a loop $L$ with $header(L) = h$
• $body(L)$ is the union of the natural loops of all back edges whose target is $header(L)$
– Note that $header(L) \in body(L)$
• Example: this is a single loop with header node 1
• For any two CFG loops $L_1$ and $L_2$
– $header(L_1)$ is different from $header(L_2)$
– $body(L_1)$ and $body(L_2)$ are either disjoint, or one is a proper subset of the other (nesting – inner/outer)
Flashback to Graph Algorithms
• Depth-first search in the CFG [Cormen et al. book]
– Set each node’s color as \textit{white}
– Call DFS(ENTRY)
– DFS($n$)
• Set the color of $n$ to \textit{gray}
• For each successor $m$: if color is \textit{white}, call DFS($m$)
• Set the color of $n$ to \textit{black}
• Inside DFS($n$), seeing a gray successor $m$ means that ($n,m$) is a \underline{retreating edge}
– Note: $m$ could be $n$ itself, if there is an edge ($n,n$)
• The order in which we consider the successors matters: the set of retreating edges depends on it
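A minimal sketch of this coloring DFS, assuming `succs` maps every node to its list of successors:

```python
WHITE, GRAY, BLACK = 0, 1, 2


def retreating_edges(succs, entry):
    """One DFS from ENTRY; an edge into a GRAY node is a retreating edge.

    Note that the result may depend on the order in which successors
    are visited.
    """
    color = {n: WHITE for n in succs}
    found = set()

    def dfs(n):
        color[n] = GRAY
        for m in succs[n]:
            if color[m] == GRAY:
                found.add((n, m))
            elif color[m] == WHITE:
                dfs(m)
        color[n] = BLACK

    dfs(entry)
    return found
```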
Reducible Control-Flow Graphs
• For **reducible** CFGs, the **retreating** edges discovered during DFS are all and only **back** edges
– The order during DFS traversal is irrelevant: all DFS traversals produce the same set of retreating edges
• For **irreducible** CFGs: a DFS traversal may produce retreating edges that are not back edges
– Each traversal may produce different retreating edges
– Example:
• No back edges
• One traversal produces the retreating edge $3 \rightarrow 2$
• The other one produces the retreating edge $2 \rightarrow 3$
Reducibility
- A number of equivalent definitions
- One of them is on the previous page
- Another definition: the graph can be reduced to a single node with the application of the following two rules
- Given a node $n$ with a single predecessor $m$, merge $n$ into $m$; all successors of $n$ become successors of $m$
- Remove an edge $n \rightarrow n$
- Try this on the graphs from the previous slides
- More details: p. 677 in the textbook
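A minimal sketch of this reduction test under the two rules above (our own, not from the slides); the graph is given as a dict from node to a set of successors:

```python
def is_reducible(succs):
    """Apply the two rules until nothing changes; reducible iff one node is left."""
    g = {n: set(ms) for n, ms in succs.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        # Remove self-loop edges n -> n
        for n in g:
            if n in g[n]:
                g[n].discard(n)
                changed = True
        # Merge a node n that has a single predecessor m into m
        preds = {n: set() for n in g}
        for m, outs in g.items():
            for n in outs:
                preds[n].add(m)
        for n in list(g):
            if len(preds[n]) == 1:
                (m,) = preds[n]
                g[m].discard(n)     # drop the edge m -> n
                g[m] |= g[n]        # successors of n become successors of m
                del g[n]
                changed = True
                break               # predecessor sets are stale; redo the pass
    return len(g) == 1
```

Running this on the irreducible example above (two nodes entering a cycle) leaves more than one node, while the graphs built from structured `while`/`if` constructs collapse to a single node.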
Reducibility
• The essence of irreducibility: a strongly-connected subgraph with multiple possible entry points
– If the original program was written using `if-then`, `if-then-else`, `while-do`, `do-while`, `break`, and `continue`, the resulting CFG is always reducible
– If `goto` was used by the programmer, the CFG could be irreducible (but, in practice, it typically is reducible)
• Optimizations of the intermediate code, done by the compiler, could introduce irreducibility
• Code obfuscation: e.g., Java bytecode can be transformed to be irreducible, making it impossible to reverse-engineer a valid Java source program
Part 4: Control Dependence: Informally
• The decision made at branch node \( c \) affects whether node \( n \) gets executed
– Thus, \( n \) is control dependent on \( c \) – the control-flow leading to \( n \) depends on what \( c \) does
• A node \( n \) is control dependent on a node \( c \) if
– There exists an edge \( e_1 \) coming out of \( c \) that definitely causes \( n \) to execute
– There exists some edge \( e_2 \) coming out of \( c \) that is the start of some path that avoids the execution of \( n \)
• Informally: \( n \) postdominates some successor of \( c \), but does not postdominate \( c \) itself
Control Dependence: Formally
• (part 1) $n$ is control dependent on $c$ if
– $n \neq c$
– $n$ does not post-dominate $c$
– there is an edge $c \rightarrow m$ such that $n$ post-dominates $m$
• (part 2) $n$ is control dependent on $n$ if
– there exists a path from $n$ to $n$ such that $n$ post-dominates every node on the path
• this happens in the presence of loops; $n$ is the source node of a loop exit edge
Consider all branch nodes \( c: 1, 4, 7, 8, 10 \)
ENTRY does not post-dominate any other \( n \)
1 \( pdom \) ENTRY, 1, 9
2 does not post-dominate any other \( n \)
3 \( pdom \) ENTRY, 1, 2, 3, 9
4 \( pdom \) ENTRY, 1, 2, 3, 4, 9
5 does not post-dominate any other \( n \)
6 does not post-dominate any other \( n \)
7 \( pdom \) ENTRY, 1, 2, 3, 4, 5, 6, 7, 9
8 \( pdom \) ENTRY, 1, 2, 3, 4, 5, 6, 7, 8, 9
9 does not post-dominate any other \( n \)
10 \( pdom \) ENTRY, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
EXIT \( pdom \) \( n \) for any \( n \)
2 is control dependent on 1
3, 4, 5, 6 are control dependent on 4
4, 7 are control dependent on 7
9, 1, 3, 4, 7, 8 are control dependent on 8
7, 8, 10 are control dependent on 10
Finding All Control Dependences
• Consider all CFG edges \((c, x)\) such that \(x\) does not post-dominate \(c\) (therefore, \(c\) is a branch node)
• Traverse the post-dominator tree bottom-up
– \(n = x\)
– while \((n \neq \text{parent of } c \text{ in the post-dominator tree})\)
• report that \(n\) is control dependent on \(c\)
• \(n = \text{parent of } n\) in the post-dominator tree
– Example: for CFG edge \((8, 9)\) from the previous slide, traverse and report 9, 1, 3, 4, 7, 8 (stop before 10)
• Other algorithms exist, but this one is simple and works quite well
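A minimal sketch of this traversal; `ipdom` maps each node to its immediate post-dominator (its parent in the post-dominator tree), and `pdom[x]` is the set of nodes that `x` post-dominates — both names are ours:

```python
def control_dependences(succs, ipdom, pdom):
    """Return pairs (n, c) meaning 'n is control dependent on c'."""
    deps = []
    for c, outs in succs.items():
        for x in outs:
            if c in pdom[x]:
                continue              # x post-dominates c: skip this edge
            n = x
            while n != ipdom[c]:      # walk bottom-up, stop before c's parent
                deps.append((n, c))
                n = ipdom[n]
    return deps
```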
Why Does This Work?
- Given: edge \((c, x)\) such that \(x\) does not post-dominate \(c\)
- For any traversed node \(n \neq c\), we know that
- \(n\) does not post-dominate \(c\)
- This is why we stop before the parent of \(c\)
- \(n\) does post-dominate \(x\): thus, if we follow the \((c, x)\) edge, we are guaranteed to execute \(n\)
- Easy to show that this is equivalent to part 1 of the definition of control dependence given earlier
- If we traverse \(c\) itself, this means that \(c\) post-dominates \(x\) (thus, part 2 of the definition holds)
Chase Termination for Guarded Existential Rules
Marco Calautti¹, Georg Gottlob², and Andreas Pieris³
¹ DIMES, University of Calabria, Italy calautti@dimes.unical.it
² Department of Computer Science, University of Oxford, UK georg.gottlob@cs.ox.ac.uk
³ Institute of Information Systems, Vienna University of Technology, Austria pieris@dbai.tuwien.ac.at
Abstract. The chase procedure is considered as one of the most fundamental algorithmic tools in database theory. It has been successfully applied to different database problems such as data exchange, and query answering and containment under constraints, to name a few. One of the central problems regarding the chase procedure is all-instance termination, that is, given a set of tuple-generating dependencies (TGDs) (a.k.a. existential rules), decide whether the chase under that set terminates, for every input database. It is well-known that this problem is undecidable, no matter which version of the chase we consider. The crucial question that comes up is whether existing restricted classes of TGDs, proposed in different contexts such as ontological reasoning, make the above problem decidable. In this work, we focus our attention on the oblivious and the semi-oblivious versions of the chase procedure, and we give a positive answer for classes of TGDs that are based on the notion of guardedness.
1 Introduction
The chase procedure (or simply chase) is considered as one of the most fundamental algorithmic tools in databases — it accepts as input a database $D$ and a set $\Sigma$ of constraints and, if it terminates (which is not guaranteed), its result is a finite instance $D_\Sigma$ that enjoys two crucial properties:
1. $D_\Sigma$ is a model of $D$ and $\Sigma$, i.e., it contains $D$ and satisfies the constraints of $\Sigma$;
2. $D_\Sigma$ is universal, i.e., it can be homomorphically embedded into every other model of $D$ and $\Sigma$.
In other words, the chase is an algorithmic tool for computing universal models of $D$ and $\Sigma$, which can be conceived as representatives of all the other models of $D$ and $\Sigma$. This is precisely the reason for the ubiquity of the chase in database theory. Indeed, many key database problems can be solved by simply exhibiting a universal model.
A central class of constraints, which can be treated by the chase procedure and is of special interest for this work, are the well-known tuple-generating dependencies (TGDs) (a.k.a. existential rules) of the form $\forall X \forall Y (\varphi(X, Y) \rightarrow \exists Z (\psi(Y, Z)))$, where $\varphi$ and $\psi$ are conjunctions of atoms. Given a database $D$ and a set $\Sigma$ of TGDs, the chase adds new atoms to $D$ (possibly involving nulls) until the final result satisfies $\Sigma$.
Example 1. Consider the database $D = \{ \text{person}(Bob) \}$, and the TGD
$$\forall X (\text{person}(X) \rightarrow \exists Y \text{ hasFather}(X, Y) \land \text{person}(Y)),$$
which asserts that each person has a father who is also a person. The database atom triggers the TGD, and the chase will add in $D$ the atoms $\text{hasFather}(Bob, z_1)$ and $\text{person}(z_1)$ in order to satisfy it, where $z_1$ is a (labeled) null representing some unknown value. However, the new atom $\text{person}(z_1)$ triggers again the TGD, and the chase is forced to add the atoms $\text{hasFather}(z_1, z_2), \text{person}(z_2)$, where $z_2$ is a new null. The result of the chase is the instance
$$\{ \text{person}(Bob), \text{hasFather}(Bob, z_1) \} \cup \bigcup_{i>0} \{ \text{person}(z_i), \text{hasFather}(z_i, z_{i+1}) \},$$
where $z_1, z_2, \ldots$ are nulls.
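To make the behaviour concrete, the following small Python sketch (ours, not from the paper) runs the oblivious chase for the TGD of Example 1 and cuts it off after a fixed number of trigger applications, since the chase does not terminate here:

```python
from itertools import count


def chase_example(max_steps=3):
    """Oblivious chase for person(X) -> exists Y. hasFather(X, Y), person(Y),
    started on D = {person(Bob)} and cut off after `max_steps` trigger
    applications, since this chase never terminates."""
    instance = {("person", "Bob")}
    fresh = (f"z{i}" for i in count(1))  # supply of labeled nulls z1, z2, ...
    applied = set()                      # triggers (here: person atoms) already used
    for _ in range(max_steps):
        trigger = next(
            (a for a in instance if a[0] == "person" and a not in applied), None
        )
        if trigger is None:
            return instance              # would mean termination (never happens here)
        _, x = trigger
        z = next(fresh)
        instance |= {("hasFather", x, z), ("person", z)}
        applied.add(trigger)
    return instance


print(sorted(chase_example()))
# [('hasFather', 'Bob', 'z1'), ('hasFather', 'z1', 'z2'), ('hasFather', 'z2', 'z3'),
#  ('person', 'Bob'), ('person', 'z1'), ('person', 'z2'), ('person', 'z3')]
```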
As shown by the above example, the chase procedure may run forever, even for extremely simple databases and constraints. In the light of this fact, there has been a long line of research on identifying syntactic properties on TGDs such that, for every input database, the termination of the chase is guaranteed; see, e.g., [4, 8, 10, 12, 13] — this list is by no means exhaustive, and we refer to [9] for a comprehensive survey. With so much effort spent on identifying sufficient conditions for the termination of the chase procedure, the question that comes up is whether a sufficient condition that is also necessary exists. In other words, given a set $\Sigma$ of TGDs, is it possible to determine whether, for every database $D$, the chase on $D$ and $\Sigma$ terminates? This interesting question has been recently addressed in [6], and unfortunately the answer is negative for all the versions of the chase that are usually used in database applications, namely the oblivious, semi-oblivious and restricted chase. In fact, the problem remains undecidable even if the database is known. This has been established in [4] for the restricted chase, and it was observed in [12] that the same proof shows undecidability also for the oblivious and the semi-oblivious chase.
Although the chase termination problem is undecidable in general, the proof given in [6] does not show the undecidability of the problem for TGDs that enjoy some structural conditions, which in turn guarantee favorable model-theoretic properties. Such a key condition is guardedness, a well-accepted paradigm that gives rise to robust rule-based languages that capture important database constraints and lightweight description logics. A TGD is guarded if it has an atom in the left-hand side that contains (or guards) all the universally quantified variables [2]. Guardedness guarantees the tree-likeness of the underlying models, and thus the decidability of central database problems. The question that comes up is whether guardedness has the same positive impact on chase termination.
We focus on the (semi-)oblivious versions of the chase, and we show that the problem of deciding the termination of the chase for guarded TGDs is decidable, and we establish precise complexity results. Surprisingly, the present work is to our knowledge the first one that establishes positive results for the (semi-)oblivious chase termination problem. For more details, we refer the reader to [1].
2 The Chase Termination Problem
The TGD chase procedure (or simply chase) takes as input an instance $I$ and a set $\Sigma$ of TGDs, and constructs a universal model of $I$ and $\Sigma$. The chase works on $I$ by repeatedly applying so-called triggers for $\Sigma$ on $I$. A trigger for a set $\Sigma$ of TGDs on an instance $I$ is a pair $(\sigma, h)$, where $\sigma = \varphi \rightarrow \psi \in \Sigma$ and $h$ is a homomorphism that maps $\varphi$ to $I$. An application of $(\sigma, h)$ to $I$ returns $J = I \cup h'(\psi)$, where $h' \supseteq h$ maps each existentially quantified variable in $\psi$ to a new null value. Such a trigger application is written $I\langle\sigma, h\rangle J$. The choice of the next trigger to be applied is crucial since it gives rise to different versions of the chase procedure. In this work, we focus on the oblivious [2] and semi-oblivious [7, 12] chase.
A finite sequence $I_0, I_1, \ldots, I_n$, where $n \geq 0$, is said to be a terminating oblivious chase sequence of $I_0$ w.r.t. a set $\Sigma$ of TGDs if: (i) for each $0 \leq i < n$, there exists a trigger $(\sigma, h)$ for $\Sigma$ on $I_i$ such that $I_i\langle\sigma, h\rangle I_{i+1}$; (ii) for each $0 \leq i < j < n$, assuming that $I_i\langle\sigma_i, h_i\rangle I_{i+1}$ and $I_j\langle\sigma_j, h_j\rangle I_{j+1}$, $\sigma_i = \sigma_j = \sigma$ implies $h_i \neq h_j$, i.e., $h_i$ and $h_j$ are different homomorphisms; and (iii) there is no trigger $(\sigma, h)$ for $\Sigma$ on $I_n$ such that $(\sigma, h) \not\in \{(\sigma_i, h_i)\}_{0 \leq i \leq n-1}$. In this case, the result of the chase is the (finite) instance $I_n$. An infinite sequence $I_0, I_1, \ldots$ of instances is said to be a non-terminating oblivious chase sequence of $I_0$ w.r.t. $\Sigma$ if: (i) for each $i \geq 0$, there exists a trigger $(\sigma, h)$ for $\Sigma$ on $I_i$ such that $I_i\langle\sigma, h\rangle I_{i+1}$; (ii) for each $i, j > 0$ such that $i \neq j$, assuming that $I_i\langle\sigma_i, h_i\rangle I_{i+1}$ and $I_j\langle\sigma_j, h_j\rangle I_{j+1}$, $\sigma_i = \sigma_j = \sigma$ implies $h_i \neq h_j$; and (iii) for each $i \geq 0$, and for every trigger $(\sigma, h)$ for $\Sigma$ on $I_i$, there exists $j \geq i$ such that $I_j\langle\sigma, h\rangle I_{j+1}$; this is known as the fairness condition, and guarantees that all the triggers eventually will be applied. The result of the chase is defined as the infinite instance $\cup_{i \geq 0} I_i$.
The semi-oblivious chase is a refined version of the oblivious chase, which avoids the application of some superfluous triggers. Roughly speaking, given a TGD $\sigma$ of the form $\varphi \rightarrow \psi$, for the semi-oblivious chase, two homomorphisms $h$ and $g$ that agree on the universally quantified variables of $\varphi$ occurring in $\psi$ are indistinguishable.
Henceforth, we write $o$-chase and so-chase for oblivious and semi-oblivious chase, respectively. A $\ast$-chase sequence, where $\ast \in \{o, so\}$, may be infinite.
Example 2. Let $D = \{p(a, b)\}$, and $\Sigma = \{\forall X \forall Y (p(X, Y) \rightarrow \exists Z (p(Y, Z)))\}$. There exists only one $\ast$-chase sequence of D w.r.t. $\Sigma$, where $\ast \in \{o, so\}$, which is non-terminating, i.e., $I_0, I_1, \ldots$ with
$I_0 = \{p(a, b)\}$ $I_1 = \{p(a, b), p(b, z_1)\}$ $I_i = I_{i-1} \cup \{p(z_{i-1}, z_i)\}$, for $i \geq 2$,
where $z_1, z_2, \ldots$ are nulls of $\mathbb{N}$.
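The difference between the two chase variants can be illustrated on the TGD of Example 2: the only universally quantified variable that also occurs in the head is $Y$, so the semi-oblivious chase treats two homomorphisms that agree on $Y$ as the same trigger, while the oblivious chase keeps them apart. The sketch below uses an invented encoding of homomorphisms as Python dictionaries; it is only meant to illustrate the bookkeeping, not any algorithm from the paper.

```
def trigger_key(hom, head_vars, mode):
    # `hom` maps body variables to constants/nulls; `head_vars` are the body
    # variables that also occur in the head of the TGD.
    if mode == "o":
        return tuple(sorted(hom.items()))                       # full homomorphism
    return tuple(sorted((v, hom[v]) for v in head_vars))        # restriction to head vars

h1 = {"X": "a", "Y": "b"}
h2 = {"X": "c", "Y": "b"}   # agrees with h1 on Y only
print(trigger_key(h1, {"Y"}, "o")  == trigger_key(h2, {"Y"}, "o"))   # False: distinct o-triggers
print(trigger_key(h1, {"Y"}, "so") == trigger_key(h2, {"Y"}, "so"))  # True: one so-trigger
```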
For a set of TGDs, a key question is whether all or some $\ast$-chase sequences are terminating on all databases. Before formalizing the above decision problems, let us recall the following key classes of TGDs:
$CT_{\forall}^\ast = \{ \Sigma \mid \forall D, \text{ all } \ast\text{-chase sequences of } D \text{ w.r.t. } \Sigma \text{ are terminating} \}$
$CT_{\exists}^\ast = \{ \Sigma \mid \forall D, \text{ there exists a terminating } \ast\text{-chase sequence of } D \text{ w.r.t. } \Sigma \}.$
The decision problems tackled in this work are as follows: for $q \in \{\forall, \exists\}$:
Instance: A set $\Sigma$ of TGDs.
Question: Does $\Sigma \in CT_q^*$?
We recall that $CT_{\forall}^{o} = CT_{\exists}^{o} \subset CT_{\forall}^{so} = CT_{\exists}^{so}$ [7]. This implies that the preceding decision problems coincide for the (semi-)oblivious chase. Henceforth, we refer to the $*$-chase termination problem, and we write $CT^*$ for $CT_{\forall}^* = CT_{\exists}^*$, where $* \in \{o, so\}$.
3 The Complexity of Chase Termination
We focus on the class of guarded TGDs [2], and two key subclasses of it, namely simple linear and linear TGDs [3], and we investigate the complexity of the (semi-)oblivious chase termination problem. Recall that linear TGDs are TGDs with just one atom in the body, while simple linear TGDs forbid the repetition of variables in the body. Notice that, despite their simplicity, simple linear TGDs are powerful enough for capturing prominent database dependencies, and in particular inclusion dependencies, as well as key description logics such as DL-Lite. In the sequel, we denote by $G$ the class of guarded TGDs, which is defined as the family of all possible sets of guarded TGDs. Analogously, we denote by $SL$ and $L$ the classes of simple linear and linear TGDs, respectively; clearly, $SL \subset L \subset G$. Let us first consider the less expressive classes.
3.1 Linearity
By exploiting syntactic conditions that ensure the termination of each (semi-)oblivious chase sequence on all databases, we syntactically characterize the classes $(CT^* \cap SL)$ and $(CT^* \cap L)$, where $* \in \{o, so\}$. We rely on weak-acyclicity [5] and rich-acyclicity [11]. Both weak- and rich-acyclicity are defined by posing an acyclicity condition on a graph, which encodes how terms are propagated among the positions of the underlying schema during the chase. In fact, weak-acyclicity forbids the existence of dangerous cycles (which involve the generation of new null values) in the dependency graph [5], while rich-acyclicity poses the same condition on the so-called extended dependency graph [11]. Let $WA$ and $RA$ be the classes of weakly- and richly-acyclic TGDs, respectively; notice that $RA \subset WA$. For simple linear TGDs we show that:
**Theorem 1.** $(CT^o \cap SL) = (RA \cap SL)$ and $(CT^{so} \cap SL) = (WA \cap SL)$.
In simple words, the above theorem states that, given a set $\Sigma \in SL$: $\Sigma \in CT^o$ iff $\Sigma$ is richly-acyclic, and $\Sigma \in CT^{so}$ iff $\Sigma$ is weakly-acyclic. This result is established by showing that a dangerous cycle in the extended dependency graph (resp., dependency graph) necessarily gives rise to a non-terminating $o$-chase (resp., $so$-chase) sequence.
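For readers who want to experiment with the acyclicity conditions mentioned above, here is a compact sketch of the standard weak-acyclicity test: build the dependency graph over positions (predicate, argument index), add a special edge whenever a propagated value reaches a position where a new null is invented, and reject if some special edge lies on a cycle. The encoding of TGDs as (body, head, existential variables) triples is an assumption made for illustration; rich-acyclicity would be checked analogously on the extended dependency graph.

```
from collections import defaultdict

def positions_of(var, atoms):
    return [(pred, i) for pred, args in atoms for i, v in enumerate(args) if v == var]

def weakly_acyclic(tgds):
    regular, special = defaultdict(set), defaultdict(set)
    for body, head, ex_vars in tgds:
        body_vars = {v for _, args in body for v in args}
        for x in body_vars:
            head_positions = positions_of(x, head)
            if not head_positions:
                continue                              # x is not propagated to the head
            for p in positions_of(x, body):
                for q in head_positions:              # x is copied into position q
                    regular[p].add(q)
                for z in ex_vars:
                    for q in positions_of(z, head):   # a fresh null is invented at q
                        special[p].add(q)
    special_edges = [(u, v) for u, vs in special.items() for v in vs]
    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            u = stack.pop()
            for v in regular.get(u, set()) | special.get(u, set()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    # Weakly acyclic iff no special edge (u, v) lies on a cycle, i.e. u is never
    # reached again when starting from v.
    return all(u not in reachable(v) for u, v in special_edges)

# person(X) -> exists Y. hasFather(X, Y) and person(Y)  (Example 1): not weakly acyclic.
tgd = ([("person", ["X"])],
       [("hasFather", ["X", "Y"]), ("person", ["Y"])],
       {"Y"})
print(weakly_acyclic([tgd]))   # False
```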
Let us now focus on (non-simple) linear TGDs. It is possible to show, by exhibiting a counterexample, that a dangerous cycle does not necessarily correspond to an infinite chase derivation. Thus, rich- and weak-acyclicity are not powerful enough to syntactically characterize the fragments of linear TGDs that guarantee the termination of the oblivious and semi-oblivious chase, respectively. Interestingly, it is possible to extend rich- and weak-acyclicity, focussing on linear TGDs, in such a way that the above key property holds. The obtained formalisms are dubbed *critical-rich-acyclicity* and *critical-weak-acyclicity*, and the corresponding classes are denoted as $LCriticalRA$ and $LCriticalWA$, respectively. We show that:
**Theorem 2.** $(CT^o \cap L) = LCriticalRA$ and $(CT^{so} \cap L) = LCriticalWA$.
The above syntactic characterizations, apart from being interesting in their own right, allow us to obtain optimal upper bounds for the $\star$-chase termination problem for $(S)L$ — we simply need to analyze the complexity of deciding whether a set of (simple) linear TGDs enjoys the above acyclicity-based conditions, which can be formulated as a reachability problem on a graph. In particular, we obtain the following results:
**Theorem 3.** Consider a set $\Sigma$ of TGDs. The problem of deciding whether $\Sigma \in CT^\star$, where $\star \in \{o, so\}$, is
1. $\text{NL}$-complete, even for unary and binary predicates, if $\Sigma \in SL$; and
2. $\text{PSPACE}$-complete, and $\text{NL}$-complete for predicates of bounded arity, if $\Sigma \in L$.
For the hardness results, a generic technique, called the *looping operator*, is proposed, which allows us to obtain lower bounds for the chase termination problem in a uniform way. In fact, the goal of the looping operator is to provide a generic reduction from propositional atom entailment to the complement of chase termination.
3.2 Guardedness
We proceed to investigate the (semi-)oblivious chase termination problem for guarded TGDs. Although there is no way (at least no obvious one) to syntactically characterize the classes $(CT^\star \cap G)$, where $\star \in \{o, so\}$, via rich- and weak-acyclicity, as we did for (simple) linear TGDs, it is possible to show that the problem of recognizing the above classes is decidable. For technical reasons, we focus on *standard databases*, that is, databases that have two constants, say $0$ and $1$, that are available via the unary predicates $0(\cdot)$ and $1(\cdot)$, respectively. In particular, we show the following:
**Theorem 4.** Consider a set $\Sigma \in G$. The problem of deciding whether $\Sigma \in CT^\star$, where $\star \in \{o, so\}$, focussing on standard databases, is $2\text{EXPTIME}$-complete, and $\text{EXPTIME}$-complete for predicates of bounded arity.
The upper bounds are obtained by exhibiting an alternating algorithm that runs in exponential space, in general, and in polynomial space in case of predicates of bounded arity. The lower bounds are obtained by reductions from the acceptance problem of alternating exponential (resp., polynomial) space clocked Turing machines, i.e., Turing machines equipped with a counter. These reductions are obtained by significantly modifying existing reductions for the problem of propositional atom entailment under guarded TGDs, and then exploiting the looping operator mentioned above. The fact that the database is standard is crucial for establishing the above lower bounds; the upper bounds hold even for non-standard databases.
4 Future Work
Our next step is to perform similar analysis focussing on the restricted version of the chase. We already have some preliminary positive results. In particular, if we focus on single-head linear TGDs, where each predicate appears in the head of at most one TGD, then we can syntactically characterize, via a careful extension of weak-acyclicity, the fragment that guarantees the termination of the restricted chase, and obtain a polynomial time upper bound. We are currently working towards the full settlement of the problem.
Acknowledgements. M. Calautti was supported by the European Commission, European Social Fund and Region Calabria. G. Gottlob was supported by the EPSRC Programme Grant EP/M025268/ “VADA: Value Added Data Systems – Principles and Architecture”, and the Grant ERC-POC-2014 Nr. 641222 “ExtraLytics: Big Data for Real Estate”. A. Pieris was supported by the Austrian Science Fund (FWF), projects P25207-N23 and Y698, and Vienna Science and Technology Fund (WWTF), project ICT12-015.
References
PROGRAMMING PROCESSOR INTERCONNECTION STRUCTURES*
Lawrence Snyder
Department of Computer Sciences
Purdue University
West Lafayette, IN
47907
CSD-TR-381
October, 1981
ABSTRACT
Parallel computer architecture complicates the already difficult task of parallel programming in many ways, e.g., by a rigid interconnection structure, addressing complexity, and shape and size mismatches. The CHiP computer is a new architecture that reduces these complications by permitting the processor interconnection structure to be programmed. This new kind of programming is explained. Algorithms are presented for several interconnection patterns including the torus and the complete binary tree and general embedding strategies are identified.
*The research described herein is part of the Blue CHiP Project. Funding is provided in part by the Office of Naval Research under Contract No. N00014-80-K-0618 and Contract No. N00014-81-K-0360, Special Research Opportunities Program, Task SRO-100.
Introduction
Although it is a difficult task to design a sequential computer architecture that efficiently hosts sequential algorithms, it is perhaps even more challenging to design a parallel architecture that efficiently hosts parallel algorithms. The aspects of parallel computation that frustrate the harmonious match between algorithm and architecture are many:
Rigid interconnection structure: Parallel architectures tend to provide a fixed interconnection structure between processing elements (PE's). For example, ILLIAC IV is mesh connected; the Massively Parallel Processor [1] has a toroidal structure. But recently developed parallel algorithms use a variety of PE interconnection structures. For example, there are tree algorithms for everything from sorting to graph coloring [2] as well as applicative language expression evaluation [3], hexagonally connected pipelined algorithms for numeric problems [4], "double trees" for searching and data base operations [5], and many nonstandard interconnection graphs. (See Figure 1.) The problem is that the rigid interconnection structure biases the architecture towards a particular class of algorithms and makes it difficult to use for any other class of algorithms.
Problem shape and size mismatch: Parallel algorithms tend to require a particular number of PE's in a particular shape that is determined by the problem's input, but the architecture provides only one fixed size and shape. For example, an algorithm requiring an n/2 x 2n array of PE's does not "fit" on an n x n mesh connected architecture even though there are enough processors.
Addressing complexity: Certain parallel architectures, e.g., the Ultra Computer [6] and the Cube connected cycles [7], provide a "universal" interconnection structure in which a logical interconnection structure is implemented on the physical structure by means of packet routing operations. Time is wasted in unproductive packet switching. More seriously, the programs stored in the PE's are complicated by the need to compute target addresses.
Paucity of programming languages: Although languages such as APL and Concurrent Pascal have "parallel semantics," most parallel algorithms are specified in an ad hoc manner. Thus there is little guidance from the programming language as to what features to optimize for.
These and other complications explain in large measure why highly parallel computers have been difficult to program.
Figure 1. Interconnection patterns for parallel algorithms (a) mesh, (b) hexagonally connected mesh, (c) torus, (d) binary tree, (e) double tree.
We report on a new family of architectures, the Configurable, Highly Parallel (CHiP) computers, that respond to the demands of parallel algorithms, especially the need for locality and flexibility. The central concept is this:
The processing elements are embedded into a programmable switch lattice that permits not only the programming of the PE's but also the direct programming of their interconnection structure.
This second kind of programming not only ameliorates the difficulties mentioned above, it also permits the convenient composition of parallel algorithms. It has even led to the development of entirely new parallel algorithms [8]. In this paper we give a synopsis of the CHiP architecture and then explore the consequences of this new kind of programming, interconnection structure programming. The main results are algorithms for programming various interconnection structures.
Synopsis of the CHiP Computer
[Readers familiar with the CHiP Computer may wish to omit this section.]
A CHiP Computer is composed of a set of homogeneous microprocessor elements connected at regular intervals to the switches of the switch lattice. The lattice is composed of programmable switches connected by data paths to each other or to the PE's. Perimeter switches are attached to external storage devices. Figure 2 illustrates two examples of this structure.* Each PE has its own local program and data memory and each switch contains enough local memory to store several configuration settings.

*Notice that the pictures are not drawn to scale. The PE's are much larger than the switches.
Figure 2. Two lattices. Circles represent switches; squares represent processing elements; lines represent data paths.
A configuration setting is an instruction which, when invoked, causes the switch to form a passive connection between any combination of its incident data paths. Notice that this is circuit switching rather than packet switching and that fan out is possible at the switches. Figure 3(a) shows the configuration settings for a mesh pattern for the lattice of Figure 2(a); Figure 3(b) shows the same lattice configured as a binary tree. To implement an interconnection pattern, the switches are loaded with configuration settings by an external control processor via a "skeleton" that is transparent to this discussion. This activity is usually performed in parallel with the controller's loading of the PE programs.
Figure 3. Two configurations of the lattice in Figure 2(a).
A parallel program is viewed as the composition of several parallel algorithms, each with its own processor interconnection pattern. Each of these interconnection patterns and the associated PE code is called a "phase." The controller loads the PE's and switches with the instructions for several phases. Processing begins with a broadcast command from the controller to the switches to invoke a particular stored interconnection pattern. This also causes the PE's to begin synchronously executing their local programs. The interconnection structure remains static throughout the execution of the phase. When the phase completes, another broadcast command causes a different interconnection pattern to be invoked and a new phase to be initiated. The action continues in this manner from phase to phase.
Several points are worthy of special emphasis. First, to implement an interconnection pattern requires that all configuration settings be stored in the same location in all of the switches. This is so that the broadcast command can take the simple form "invoke the setting in location x," thus making possible one step phase transitions. Second, switches can provide the ability for data paths to "crossover" one another, i.e., a setting can implement multiple data path interconnections. Third, the PE's need not know to whom they are connected; they simply execute instructions of the form READ EAST, WRITE NORTHWEST, etc. The interconnection pattern explicitly implements the routing. Fourth, the data paths are bidirectional.
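A toy model of the mechanism just described may help: each switch stores several configuration settings at fixed locations, and a single broadcast command of the form "invoke the setting in location x" flips the whole lattice into a new phase in one step. The class and method names below are invented for illustration and do not come from the report.

```
class Switch:
    def __init__(self):
        self.stored = {}          # location -> setting, e.g. "EW", "NS"
        self.active = None
    def load(self, location, setting):
        self.stored[location] = setting
    def invoke(self, location):
        self.active = self.stored.get(location)

class Controller:
    def __init__(self, switches):
        self.switches = switches
    def broadcast(self, location):
        for s in self.switches:
            s.invoke(location)    # one-step phase transition for the whole lattice

switches = [Switch() for _ in range(4)]
for s in switches:
    s.load(0, "EW")               # phase 0: one stored pattern
    s.load(1, "NS")               # phase 1: a different stored pattern
Controller(switches).broadcast(1)
print([s.active for s in switches])   # ['NS', 'NS', 'NS', 'NS']
```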
Example: Consider the problem of finding the solution to a system of linear equations, \( Ax = b \), where \( A \) is an \( n \times n \) band matrix of width \( p \) and \( b \) is an \( n \) vector. To solve the problem we use the Kung-Leiserson LU decomposition pipelined (systolic) algorithm [4] and their lower triangular system (LTS) solver algorithm. The interconnection pattern (for \( p = 4 \)) is shown in Figure 4. The exact operation of the algorithms is unimportant except to say that they are pipelined and the data moves in the direction of the arrows. Phase 1 decomposes \( A \) into lower and upper triangular matrices, \( A = LU \), and at the same time solves the lower triangular system, \( Ly = b \). Figure 5 shows the embedding into the lattice of Figure 2(a) of these two algorithms -- the \( L \) matrix is transferred directly from the decomposition processor to the LTS solver. The \( x \) vector result can be formed by solving \( Ux = y \), which is done by rewriting \( U \) as a lower triangular matrix and using the LTS algorithm, but \( U \) must be completely generated before being rewritten. Thus, phase 1 saves the \( U \) matrix and \( y \) vector values in preparation for phase 2 by threading them through the lattice. (See Figure 6.) In phase 2 the values are threaded back through the lattice in the opposite direction, which effects the rewriting operation, and they are input to another LTS solver. (See Figure 7.) The result exits from the array at the left end of the LTS solver.

Figure 4. Kung-Leiserson systolic arrays [4]. (a) LU-Decomposition; (b) Lower triangular systems solver.

Figure 5. The embedding of the LU-Decomposition processors (1-16) and the lower triangular system solver (A-D) of Figure 4 in the lattice of Figure 2(a). The embedding appears in the North-West corner of the lattice.
The example is specialized to a band matrix of width $p=4$. A general procedure that solves this problem for arbitrary width bands would differ only in the interconnection structure; the various PE programs required for an arbitrary width solution are all represented in this $p=4$ case. Thus, it is the programming of interconnection patterns that is of central importance.
Programming Interconnection Patterns
We will emphasize the specification of uniform rather than ad hoc interconnection patterns because they are of interest in their own right and they are often the building blocks that are used by the less regular patterns. First, we must consider the lattice that is to host the interconnection pattern.
As indicated in Figure 2, a variety of different lattices are possible, although any particular architecture will use only one. Lattices differ in complexity in several ways: corridor width, degree, and crossover capability. The corridor width, \( w \), is the number of switches separating two adjacent PE's, e.g., the lattice of Figure 2(a) has \( w=1 \) and that of Figure 2(b) has \( w=2 \). Any lattice can embed an arbitrary graph, but to do so may require leaving some PE's unused [9]. A wider corridor width uses PE's more efficiently when embedding complex graphs. The degree, \( d \), of a lattice is the number of data paths incident on a PE or a switch. (If these two numbers are different, \( d \) is the minimum.) For example, Figure 2(a) has \( d=3 \) while Figure 2(b) has \( d=4 \). Finally, the amount of crossover capability \( c \) is the number of distinct data paths that can intersect at one switch. A crossover capability \( c=2 \) permits a crossover while \( c=1 \) does not. In the interest of generality, we will assume the "simplest" lattice suitable for an interconnection pattern.
Programming an interconnection pattern requires that the configuration setting of each relevant switch be defined. For the present discussion it suffices that we give a logical specification of the setting since the actual bit configurations are irrelevant. Accordingly, we will code the compass points with single letters and assign settings as pairs of these letters. For example, EW is a horizontal connection while ME is a 45° angle. The lattice will always be \( n \times n \) where \( n \) is the number of processors on a side. We name the switches and PE's with a two-value index corresponding to their matrix position. See Figure 8. We will name the lattice "L".

**Figure 8.** The two index coding scheme for a lattice.
As an example of this specification method, we observe that the mesh interconnection pattern (Figure 3(a)) can be defined by the two conditions:
(i) \( i \) is odd and \( j \) is even implies \( L[i,j] = NS \)
(ii) \( i \) is even and \( j \) is odd implies \( L[i,j] = EW \)
\*In our presentation of interconnection patterns, we will use a simple declarative specification. We are presently developing a configuration programming language, but until it is completed, we prefer the neutral declarative approach.
provided that the lattice is initially unconfigured. A hexagonally connected interconnection pattern requires the further condition

(iii) \( i \) is odd and \( j \) is odd implies \( L[i,j] = OF \)

and requires a lattice of degree \( d=6 \) or (for symmetry) \( d=8 \). Notice that this specification is somewhat more general than that used in Figure 5.
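As an illustration of how such declarative conditions can be turned into concrete switch settings, the sketch below fills a lattice table according to conditions (i)-(iii). The 1-based indexing convention and the index range are assumptions made for the example; the report's lattice naming is followed only loosely.

```
# A small sketch, assuming switches occupy the positions picked out by the parity
# conditions above; this is illustrative, not the report's configuration language.
def configure_hex_mesh(n):
    L = {}
    for i in range(1, 2 * n + 2):
        for j in range(1, 2 * n + 2):
            if i % 2 == 1 and j % 2 == 0:
                L[(i, j)] = "NS"   # condition (i): vertical mesh links
            elif i % 2 == 0 and j % 2 == 1:
                L[(i, j)] = "EW"   # condition (ii): horizontal mesh links
            elif i % 2 == 1 and j % 2 == 1:
                L[(i, j)] = "OF"   # condition (iii): extra hexagonal links
    return L

settings = configure_hex_mesh(4)
print(settings[(1, 2)], settings[(2, 1)], settings[(3, 3)])   # NS EW OF
```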
**Torus Interconnection Patterns**
Since the \( n \times n \) torus interconnection pattern is simply an \( n \times n \) mesh with the top row and bottom row PE's connected and the left column and right column PE's connected (see Figure 1), one might expect a one corridor, degree 4, crossover capable (\( c=2 \)) lattice to suffice to host this pattern. Surprisingly, it does not.

**Theorem.** Let \( L \) be a \( w=1, d=4, c=2 \) \( n \times n \) lattice. \( L \) cannot be set to connect the PE's into an \( n \times n \) torus.

The proof involves arguing that the perimeter corridors must be used for two purposes - to support both the vertical and horizontal "wrap around" - and thus cannot lead to an edge disjoint graph embedding.
**Direct Torus Representation.** Even when \( d=8 \), embedding the torus is not trivial if we are to avoid multiple use of data paths.

**Lattice.** \( w=1, d=8, c=2 \).

**Settings for Crossover Level 1.**

First we connect the PE's in the rows. Then we run a data path from the Northeast port of the first PE through the corridor above the row and finally down into the Northeast port of the last PE in the row. For example, a small inline figure (omitted here) shows the construction for conditions (i) through (iii).
(i) [PE row connections] $1 < i, j < 2n + 1$ and $i$ is even and $j$ is odd imply $L[i,j] = EW$.
(ii) [Northeast ports] $i < 2n + 1$ and $i$ is odd imply $L[i,3] = AE$ and $L[i,2n+1] = AW$.
(iii) [Corridor above rows] $i < 2n + 1$ and $i$ is odd and $3 < j < 2n + 1$ imply $L[i,j] = EW$.
Settings for Crossover Level 2. A similar strategy is used for the columns.
(iv) [PE column connections] $1 < i, j < 2n + 1$ and $i$ is odd and $j$ is even imply $L[i,j] = NS$.
(v) [Southwest ports] $j < 2n + 1$ and $j$ is odd imply $L[3,j] = HS$ and $L[2n+1,j] = NM$.
(vi) [Corridor left of columns] $j < 2n + 1$ and $j$ is odd and $3 < i < 2n + 1$ imply $L[i,j] = NS$.
Figure 9 illustrates the entire construction.
The difficulty with this interconnection pattern, of course, is that it has long data paths that are subject to propagation delay. Some algorithms can accept such a delay, but generally we would like to reduce it. Accordingly, we prefer the following more intricate pattern that interleaves the row and column processing elements so that there is a fixed bound on the distance a signal must travel.
Figure 9. Direct embedding of the torus into the lattice of Figure 2(a). Edges of like color intersecting at a switch are connected.
Figure 10. Interleaved embedding of the torus into the lattice of Figure 2(a).
Interleaved Torus Representation

Lattice. \( w = 1, d = 8, c = 2 \).

Settings for Crossover Level 1.

First we connect alternate PE's in rows (the small figure illustrating this is omitted here).
The end connections are specified by
(i) [East port, end PE's] \( i \) is even implies \( L[i, 3] = EW \) and \( L[i, 2n+1] = NO \).
The westerly port connections of each PE are given by
(ii) [West port] \( i \) is even and \( 3 < j < 2n+1 \) and \( j \) is odd imply \( L[i, j] = OE \).
The connections in the corridor above the row are given by
(iii) [Northeast port] \( i < 2n+1 \) and \( i \) is odd and \( 3 < j < 2n+1 \) and \( j \) is odd imply \( L[i, j] = NE \).
(iv) \( i < 2n+1 \) and \( i \) is odd and \( 3 < j \) and \( j \) is even imply \( L[i, j] = WF \).
Settings for Crossover Level 2. The columns are connected in a manner analogous to the rows.
(i) [South port, end PE's] \( j \) is even implies \( L[3, j] = NS \) and \( L[2n+1, j] = ON \).
(ii) [North port] \( j \) is even and \( 3 < i < 2n+1 \) and \( i \) is odd imply \( L[i, j] = OS \).
(iii) [Southwest port] \( j < 2n+1 \) and \( j \) is odd and \( 3 < i < 2n+1 \) and \( i \) is odd imply \( L[i, j] = SW \).
(iv) $j < 2n + 1$ and $j$ is odd and $3 < i$ and $i$ is even imply $L[i, j] = NF$.
The entire construction is shown in Figure 10.
Clearly the maximum number of switches that any data item must pass through is three. *We have increased the locality of the torus embedding.* It is, therefore, more amenable to VLSI implementation and can be used in an arbitrarily large lattice with only a constant delay.
**Complete Binary Trees**
Although an efficient embedding of complete binary trees into the plane is known [10], its direct application to interconnection pattern programming is very wasteful. (See Figure 11.) In fact, since a complete binary tree of depth $m$ has $2^m - 1$ nodes, we can expect a lattice with $2^k \times 2^k$ PE's to host a complete binary tree of depth $2k$ with one unused node. Call this node a "spare." We can expect that the simplest lattice hosting this pattern will not require crossover capability, since trees are planar, and will require only degree $d = 4$, since trees have at most degree 3 connections. (The lattice then is given by $w = 1, d = 4, c = 1$.) But if the reader attempts to develop an interconnection with these conditions, he will find it to be unexpectedly difficult.
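The counting behind the single spare PE is easy to check directly: under the convention above, a \(2^k \times 2^k\) array supplies \(2^{2k}\) PE's, while a complete binary tree of depth \(2k\) needs \(2^{2k} - 1\) nodes. A two-line check:

```
# Plain arithmetic, no claims beyond the text: the lattice always has exactly one spare PE.
for k in range(1, 6):
    pes = (2 ** k) ** 2            # PE's in the 2^k x 2^k lattice
    nodes = 2 ** (2 * k) - 1       # nodes in a complete binary tree of depth 2k
    print(k, pes - nodes)          # always 1
```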
The overall strategy is to begin with small, complete binary trees embedded in square regions of the lattice. To reduce propagation delay the root will be placed in the center of the block. Each block will contain a spare PE. We compose four such square blocks together to form a larger binary tree in a larger square block. Three of the four spare PE's will be used as nodes in the composed tree; the fourth spare will become the spare of the new block. The goal is to place the spares so that they will be conveniently located for the composition.

Figure 11. Hyper-H tree (Figure 1(d)) embedding [10]. Filled PE's are unused.
Define three types of tree embeddings:
*Type A* blocks have their spare PE midway along one side adjacent to the exiting edge from the block’s root.
*Type B* blocks have their spare PE in the corner on the same side as the exiting edge from the block’s root.
*Type C* blocks have their spare PE in the corner on the opposite side of the exiting edge from the root.
Figure 12 illustrates the three types of blocks and demonstrates that they can be inductively produced using blocks of these types.
Notice that, as part of the inductive hypothesis, we must argue that the perimeter switches are available for routing the new edges. This is obviously true if they are available in the basis blocks. The smallest blocks that we have been able to find with this property are 4x4 blocks embedding 15-node binary trees. These are illustrated in Figure 13.
The conceptual algorithm is clear. Refer to Figure 14. Begin with an objective block type, e.g., Type B, and a lattice of size \(2^k \times 2^k\) PE's. Recursively embed the four subtrees in lattices of size \(2^{k-1} \times 2^{k-1}\) such that the proper block types are selected. In the basis cases \((2^2 \times 2^2)\), use an explicit embedding. Notice that the results may require reflection. Connect the three spares by appropriate switch settings. This latter operation is always possible based on an inductive argument that depends upon two facts:
Figure 13. Basis blocks for planar binary tree embedding.
(a) After the basis connection, all spares have their origin as Type C basis block elements, and
(b) None of the switches surrounding a Type C basis block spare is used and so there are three directions of access.
This guarantees that the three data paths can always be assigned. The detailed program is omitted.
Clearly, we have achieved our goal of complete PE usage of this simple lattice. If the available lattice were more complex, e.g., had degree 8 or multiple corridors, then the same embedding would work and some minor optimizations would be possible.
Lacing a Corridor
Although we could present many more of our embeddings - a broadcast tree, a double tree, leaves on a line tree, shuffle exchange, etc. - it is perhaps more instructive to illustrate a technique that gives unexpected power for programming complex graphs. It is called "lacing a corridor," and it takes optimum advantage of a fixed architectural resource, the corridor width.
Suppose one is embedding an interconnection pattern and must move a large number of distinct data paths across a region of the lattice. By definition, the corridor width, $w$, is the number of switches separating adjacent PE's. Thus, if the degree $d=4$, then $w$ distinct data paths can be routed between a pair of PE's. It would appear that for the degree $d=8$ lattice, $w$ distinct data paths are still the maximum that can be routed down a corridor. But we can do much better.
The idea behind lacing is to begin with straight data paths down a corridor and then to add zig-zag paths that exploit the higher degree and the crossover capability of the switches. For example, Figure 15 shows a $w=4$, $d=8$, $c=3$ lattice in which ten distinct data paths have been squeezed through the four available switches! This is the maximum possible since the bisection width of this portion of the lattice is ten. (Bisection width is a concept introduced by Thompson [11] referring to the minimum number of wires cut by a line bisecting a VLSI layout.) If we expand our scope somewhat and include the switches that bound the corridor, then we can increase the number of distinct paths by two. (We will ignore this optimization in the lacing definition below.)
\textit{Lattice}. \(w \geq 1, d = 8, c = 3.\)
The construction is limited to a region bounded by four PE's. The upper left hand corner PE is \(L[r,s].\)
Settings for crossover level 1. [Horizontal Path]
(i) \(1 \leq i \leq w \text{ and } 0 \leq j \leq w + 1 \text{ imply } L[r+i,s+j] = EW.\)
Settings for crossover level 2. [Dotted Path]
(ii) \(1 \leq i \leq w - 1 \text{ and } 0 \leq j \leq w + 1 \text{ and } j \text{ is even imply } L[r+i,s+j] = AF.\)
(iii) \(1 \leq i \leq w - 1 \text{ and } 0 \leq j \leq w + 1 \text{ and } j \text{ is odd imply } L[r+i+1,s+j] = OM.\)
Settings for crossover level 3. [Dashed Path]
(iv) \(1 \leq i \leq w - 1 \text{ and } 0 \leq j \leq w + 1 \text{ and } j \text{ is even imply } L[r+i+1,s+j] = OM.\)
(v) \(1 \leq i \leq w - 1 \text{ and } 0 \leq j \leq w + 1 \text{ and } j \text{ is odd imply } L[r+i,s+j] = AF.\)
Notice that if the switches had even higher crossover capability \(c = 4,\) which is the maximum for degree 8 switches, then we could even route vertical wires across the laces if they were needed.
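Reading conditions (i)-(v) above, crossover level 1 contributes \(w\) straight paths and levels 2 and 3 contribute \(w-1\) zig-zag paths each, so the laced corridor appears to carry \(3w - 2\) distinct paths; for \(w = 4\) this matches the ten paths quoted for Figure 15 and the bisection-width bound. A one-line check:

```
# Sanity check of the path count implied by conditions (i)-(v); the formula is read
# off the index ranges above, not stated explicitly in the report.
def laced_paths(w):
    return w + (w - 1) + (w - 1)   # level 1 + level 2 + level 3

print(laced_paths(4))   # 10
```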
\textit{Conclusions}
We have introduced the CHiP architecture and argued that its provision for interconnection pattern programming alleviates many of the difficulties encountered in parallel program development. This simplification is achieved in two ways. First, the rigidity of a fixed interconnection structure is no longer an obstacle when one wants to program an algorithm that uses a different interconnection pattern. And secondly, there is a clean separation between routing the data and programming the activity of the PE's.
Additionally we have demonstrated that interconnection programming is an interesting and challenging activity. We have shown that locality can be increased by careful study of the torus. We have shown that it is possible to embed the complete binary tree to achieve essentially complete PE utilization. The result involves an interesting assignment of spare PE's. And we have shown that there are general techniques (e.g., corridor lacing) to be found.
Acknowledgments
It is a pleasure to thank Ching C. Hsiao for his original use of lacing and Paul McNabb for developing the software to produce these embeddings and for stimulating discussions of the binary tree embedding. Thanks are due to Paul Morrisett for programming the torus and lacing figures and to Julie Hanover for excellent manuscript preparation.
Compound octagon-square lattice
Chengtu, Szechwan, 1825 A.D.
References

[1] Massively Parallel Processor. Technical Report GER-16684, Goodyear Aerospace Corporation, July 1979.

[2] Sally A. Browning. The Tree Machine: A Highly Concurrent Programming Environment.

[3] Bart Locanthi. The Homogeneous Machine.

[4] H. T. Kung and C. E. Leiserson. Systolic Arrays (for VLSI). Technical Report CS-79-103, Carnegie-Mellon University, December 1979 (also in [10]).

[5] A Tree Machine for Searching Problems. In Proceedings of the 5th International Conference on Parallel Processing, IEEE, pp. 257-266, 1979.

[6] Ultracomputers. Transactions on Programming Languages and Systems, ACM, 1980.

[7] The Cube Connected Cycles: A Versatile Network for Parallel Computation. In Proceedings of the 20th Annual Symposium on the Foundations of Computer Science, IEEE, October 1979.

[8] D. B. Gannon and Lawrence Snyder. Linear Recurrence Algorithms for VLSI: The Configurable, Highly Parallel Approach. In Proceedings of the 10th International Conference on Parallel Processing, IEEE, 1981.

[9] L. Snyder. Overview of the CHiP Computer. In VLSI 81, Academic Press, 1981.

[10] Carver Mead and Lynn Conway. Introduction to VLSI Systems. Addison Wesley, 1980.

[11] C. D. Thompson. A Complexity Theory for VLSI.
Research Statement
Wei Yang
I enjoy doing research in Computer Security and Software Engineering and specifically in mobile security and adversarial machine learning. A primary goal of my research is to build adversarial-resilient intelligent security systems. I have been developing such security systems for the mobile device ecosystem that serves billions of users, millions of apps, and hundreds of thousands of app developers. For an ecosystem of this magnitude, manual inspection or rule-based security systems are costly and error-prone. There is a strong need for intelligent security systems that can learn from experiences, solve problems, and use knowledge to adapt to new situations.
However, achieving intelligence in security systems is challenging. In the cat-and-mouse game between security analysts and adversaries, the intelligence of adversaries also increases. In this never-ending game, the adversaries continuously evolve their attacks to be specifically adversarial to newly proposed intelligent security techniques. To address this challenge, I have been pursuing two lines of research: (1) **enhancing intelligence of existing security systems** to automate the security-decision making by techniques such as program analysis [11, 8, 10, 6, U3], natural language processing (NLP) [9, 7, U7, 1], and machine learning [8, 4, 3, 2]; (2) **guarding against emerging attacks** specifically adversarial to these newly-proposed intelligent security techniques by developing corresponding defenses [13, U1, U2] and testing methodologies [12, 5].
Throughout these research efforts, my general research methodology is to extract insightful data for security systems (through program analysis and NLP techniques), to enable intelligent decision making in security systems (through machine learning techniques to learn from the extracted data), and to strengthen robustness of the security systems by generating adversarial-testing inputs to check these intelligent security techniques and building defense to prevent the adversarial attacks.
With this methodology, my research has derived solutions that have high impact on real-world systems. For instance, my work on analysis and testing of mobile applications (apps) [11, 10] in collaboration with Tencent Ltd. has been deployed and adopted in the daily testing of a mobile app named WeChat, a popular messenger app with over 900 million monthly active users. A number of tools that have grown out of my research have been adopted by companies such as Fujitsu [P1, P2, 13, 6], Samsung [P3, P2], and IBM.
1 Enhancing Intelligence of Security Systems
The wide adoption of personal digital devices such as mobile phones increases the need for security systems that can serve users without information technology expertise. Security systems need to evolve from just warning expert users about potential security threats to making security decisions for common users. Thus security systems are required not only to capture security-sensitive behaviors but also to infer the intentions of security-sensitive behaviors. For example, a benign app may send out a user's location to find nearby restaurants, while a malicious app may leak the user's location for its own benefit.
To address this issue, my dissertation research takes a different approach from previous security systems: enhance the intelligence of security systems by mimicking human decision-making process. I have designed and implemented security systems based on a key factor that is frequently overlooked by existing security systems: **user expectations**, i.e., did a user expect a certain functionality (e.g., sending user’s location) to occur? My research contributions range from automatic assessment of security risk [7] and privacy risk [9], through automatic synthesis of natural language security descriptions [U7], to automated malware detection [8, 13].
Of my research contributions, one highlight is **AppContext** [8]. On mobile app markets, I find a large amount of evasive malware that hides malicious intentions by mixing malicious behaviors with expected functionality. For example, a malicious app may present itself as a messaging app that sends SMS messages when the user clicks the send button. However, it also sends SMS messages containing the user’s contact information in the background without notifying the user. Since both behaviors use the same set of security-sensitive permissions and APIs, existing security systems such as information flow analysis are unlikely to distinguish between these cases.
To address this issue, I design and build AppContext by considering **user awareness of the security-sensitive behaviors**. AppContext reveals a key insight that mobile malware leverages two unique characteristics of mobile systems to maximize profits while prolonging its lifetime by deceiving users: frequent occurrences of imperceptible system events and indicative states of external environments. Following such insight, I develop static analysis in AppContext that analyzes the executable code of a mobile app to extract the contexts of security-sensitive behaviors that precisely reflect user awareness and the intention of the behaviors. These contexts include (1) system events that malware uses to trigger malicious payloads and (2) external-environment states that malware uses to keep the payloads from triggering so frequently that users would notice anomalies. For example, a malicious app will trigger its payload whenever phone signal strength changes, but only during 11PM to 5AM when users are usually sleeping. AppContext then trains a machine learning model based on the contexts to differentiate benign behaviors from malicious ones. Our evaluation results indicate that AppContext can detect malware (including unknown malware) with 87.7% precision and 95% recall.
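As a hedged sketch of the kind of pipeline this implies (not AppContext's actual implementation), one can encode each security-sensitive behavior's context as categorical features and train an off-the-shelf classifier. The feature names, the toy data, and the choice of a random forest below are all invented for illustration.

```
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

# Toy context features for security-sensitive behaviors (invented examples).
contexts = [
    {"api": "sendTextMessage", "event": "UI_click",        "env": "any"},
    {"api": "sendTextMessage", "event": "SIGNAL_STRENGTH", "env": "night_hours"},
    {"api": "getLastLocation", "event": "UI_click",        "env": "any"},
    {"api": "getLastLocation", "event": "BOOT_COMPLETED",  "env": "no_ui"},
]
labels = [0, 1, 0, 1]   # 0 = benign behavior, 1 = malicious behavior

vec = DictVectorizer()
X = vec.fit_transform(contexts)              # one-hot encode the categorical context
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

new_behavior = {"api": "sendTextMessage", "event": "SIGNAL_STRENGTH", "env": "night_hours"}
print(clf.predict(vec.transform([new_behavior])))   # likely [1] on this toy data
```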
In addition to bringing the concept of user expectation in program analysis, I also work on various types of textual artifacts to infer user expectations. WHYPER [7] is our pioneering and exemplary work in this space, which automates the risk assessment of Android apps by applying NLP techniques to app descriptions. App markets such as Google Play present a permission list to show what private data an app may access, rather than how and why the app uses the private data, causing users to make uninformed decisions on how to control their privacy. To address this issue, my collaborators and I develop an NLP technique based on semantic models extracted from Android API documents to determine which sentence (if any) in app descriptions indicates the use of a permission. WHYPER was the first to apply NLP to the mobile security domain to analyze the fidelity between app descriptions and permissions. Our results on 581 popular apps show that WHYPER effectively identified the permission-explaining sentences with 82.8% precision and 81.5% recall.
Following WHYPER, my collaborators and I develop a number of approaches that further bridge the semantic gap between user expectations and apps' security-sensitive behaviors. The most closely related ones are CLAP [U7] and Pluto [9]. CLAP [U7] naturally extends WHYPER by automatically generating meaningful explanations for unexplained permission uses. Prior approaches such as WHYPER rely on the availability of permission-explaining sentences in the app description. If an app does not provide a description, or provides an uninformative one, none of the previous approaches would work. To address this issue, we borrow the idea of collaborative filtering in recommendation systems (e.g., recommending movies to one user with the help of other users' ratings). CLAP uses information retrieval and text mining algorithms to identify explanatory sentences in similar apps and uses them to synthesize sentences that can explain permissions for the original app. Our results show that CLAP is effective for generating highly interpretable natural language explanations for unexplained permission requests with over 90% precision.
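A minimal sketch of the retrieval idea, under the assumption that candidate sentences from similar apps' descriptions have already been collected: rank them by textual similarity to a permission-related query and pick the best match. The query string, the sentences, and the use of TF-IDF with cosine similarity are illustrative choices, not CLAP's actual pipeline.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented candidate sentences gathered from descriptions of similar apps.
candidate_sentences = [
    "Share your photos with friends in one tap.",
    "We use your precise location to find restaurants near you.",
    "Record voice memos and sync them across devices.",
]
query = "location permission access your location"

vec = TfidfVectorizer().fit(candidate_sentences + [query])
scores = cosine_similarity(vec.transform([query]),
                           vec.transform(candidate_sentences))[0]
best = max(range(len(candidate_sentences)), key=lambda i: scores[i])
print(candidate_sentences[best])   # the location-explaining sentence
```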
Pluto [9] takes a step further from WHYPER to assess and quantify the risk of potential exposures of user data (data that users expect to remain private when provided to apps). Pluto leverages NLP, machine learning, and data mining techniques to reveal what private information can potentially be inferred from user inputs, files, and the names of apps installed on the phone. To validate Pluto, we conduct a user study, which establishes the ground truth for user input data and lists of installed apps for about 300 users. Our results show that Pluto achieves 75% recall and 80% precision for user data from app files and user inputs, and even better results for the names of installed apps.
2 Strengthening Intelligent Security Techniques
Although the above-mentioned intelligent techniques bring impressive capabilities to security systems, the robustness of these techniques in adversarial settings is still questionable. My research has explored the feasibility of developing adversarial-resilient techniques in two main areas: program analysis and machine learning.
Adversarial-resilient program analysis. To evade the detection of security systems, adversaries may obfuscate the malicious program code or change the malicious program structures to impede or misguide program analysis. To build more robust security systems, I design and implement program analysis techniques [13, U1, U2] that are resilient to obfuscation and evasive attacks. One of such techniques is EnMobile [13], a program analysis framework characterizing mobile-app behaviors by directly and comprehensively modeling an app’s interactions with its environment.
When developing AppContext, I observe that many evasive malware samples separate malicious behaviors into multiple phases (e.g., downloading, preprocessing), with intermediate computation results stored in temporary files or databases. Existing information flow analyses cannot "stitch together" the segmented flows punctuated with interactions with external entities (e.g., files or databases) to decipher malicious behaviors initiated and controlled by malicious servers, such as initiating spam or launching denial-of-service attacks.
To address this challenge, I propose the concept of entity-based program analysis to complement traditional program analysis based on implementation-dependent structures (e.g., methods, objects). To enable entity-based program analysis, I design two supporting components: an identity-propagation component that conducts a flow- and context-sensitive analysis to establish the correspondence between in-program objects and the external entities with which the object may interact in each execution context, and a stitching component that conducts a flow-sensitive analysis to connect segmented information flows that are feasible in actual executions. I implement EnMobile and provide a practical application of EnMobile in a signature-based scheme for detecting mobile malware. Our evaluation results on a set of 6,614 apps show that EnMobile detects malware with substantially higher precision and recall than state-of-the-art approaches.
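A toy illustration of the stitching idea, using an invented encoding of flows as (source, operation, external entity) triples: a flow that ends at an external entity (here a file) is joined with a flow that later starts from the same entity, yielding one end-to-end flow. This is only a schematic, not EnMobile's analysis.

```
# Segmented flows observed by an analysis: each triple is (endpoint, operation, endpoint).
flows = [
    ("getDeviceId()", "write", "file:/tmp/cache"),
    ("file:/tmp/cache", "read", "sendToServer()"),
    ("getContacts()", "write", "file:/tmp/other"),
]

# Map each external entity to the source that wrote into it.
writes = {dst: src for src, op, dst in flows if op == "write"}

# Stitch: a read from an entity continues the flow that wrote it.
stitched = [(writes[src], dst) for src, op, dst in flows
            if op == "read" and src in writes]
print(stitched)   # [('getDeviceId()', 'sendToServer()')]
```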
While developing EnMobile, I observe that malware further evolves to use program features that cannot be analyzed by static analysis, such as native code, dynamically loaded code, or dynamic programming language features (e.g., Java reflection). I address this issue from two different perspectives. On the one hand, I develop ModuDroid [U1] to enable partial installations. ModuDroid generates patches to separate the suspicious code (e.g., code unanalyzable by static analysis) from desired code (e.g., code that passes static checking) in an Android app. ModuDroid guarantees the separation to be impact-free at the component level, and our evaluation on 968 benign apps and 977 potentially unwanted apps shows that fewer than 5% of the separated apps can be unsafe. Our evaluation results also show that ModuDroid can successfully separate more than twice as many apps as related approaches.
On the other hand, I propose REINAM [U2] to model the unanalyzable code by automatically inferring grammars of the inputs that can be accepted by the unanalyzable program parts. Specifically, REINAM infers the input grammars based on observations from executions of the code. REINAM leverages reinforcement learning to diversify the seed inputs so that the dynamic executions will not narrowly focus on a certain area of a program. Our preliminary results suggest that the grammars generated by REINAM are at least four times more comprehensive (while maintaining the same precision) than grammars inferred by existing techniques such as active learning techniques.
Adversarial testing for machine learning. Recent research finds that machine learning algorithms can produce unexpected results in response to small, specially crafted perturbations. Such perturbations cause learning-based systems to misclassify these well-crafted examples. In security systems, such incorrect behaviors can lead to potentially disastrous consequences. Traditional testing techniques are not suitable for detecting these incorrect behaviors because the core logic of learning-based systems is embedded in the machine learning models (i.e., arithmetic operations of formulas) instead of the control-flow program structures (i.e., program branches/paths) that traditional testing is based on. An automated testing framework is needed to enable a learning-based security system to detect erroneous behaviors and correct them before adversaries launch attacks.
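The following toy sketch (mine, not MRV's) illustrates the kind of small, crafted perturbation described above on a linear scorer standing in for a learned detector; it is only meant to make the phenomenon concrete, and the feasibility constraints on real program code discussed next are exactly what it ignores.

```python
import numpy as np

def adversarial_perturbation(w, x, eps=0.2):
    """Shift each feature a small step against the sign of the score's
    gradient (which, for score(x) = w.x + b, is just w). The input barely
    changes, but the score can move enough to flip the decision."""
    return x - eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])   # stand-in detector weights
b = -0.2
x = np.array([0.4, 0.1, 0.3])    # original feature vector
x_adv = adversarial_perturbation(w, x)
print(w @ x + b, w @ x_adv + b)  # 0.15 vs. -0.55: the decision flips
```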
MRV is the first such system that generates adversarial-testing inputs for mobile malware detection systems. Existing approaches to adversarial input generation typically focus on image inputs. Some prior work applies its attack to malware but only manipulates the feature vectors of a malware sample, without considering the feasibility and impact of the mutation on the malware’s code. Our study shows that when the feature-vector changes made by prior work are applied to the malware’s code, they cause the malware to crash, cause undesired behaviors, or disable malicious functionalities (sometimes the modified code cannot even be compiled). To address these issues, I design and implement a systematic adversarial-testing input generation system, Malware Recomposition Variation (MRV). The test inputs generated by MRV satisfy three requirements: malicious (i.e., the generated malware maintains the original malicious purposes), robust (i.e., the generated malware does not crash), and adversarial (i.e., the generated malware can evade the detection of security systems under testing).
To generate such test inputs, I design two input generation strategies (i.e., feature confusion and feature evolution) that follow structures of existing evasive malware or existing malware evolution histories. Upon the given malware, MRV conducts semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. I build a program transplantation framework capable of inter-method, inter-component, and inter-app transplantation to automatically mutate malware bytecode based on synthesized mutation strategies to generate new malware variants (i.e., test inputs). Our evaluation on existing research and commercial malware detectors shows that MRV is effective to generate adversarial-testing inputs that can evaluate the differentiability of selected features and the robustness of a malware detection model. For these malware detectors, MRV produces 5 to 70 times more adversarial-testing inputs compared to existing adversarial learning approaches.
3 Future Directions
In general, I plan to continue and expand my research in software engineering and security, devising techniques that make security systems more intelligent and robust. I believe that the intersection between program analysis, machine learning, natural language processing, and security is a rich area for future research. New technologies will benefit from interdisciplinary research and advance our understanding of design principles of intelligent security systems.
Specifically, I am enthusiastic to continue my research in two main lines of future work.
Intelligent security techniques with little labeled data. We are living in a world that has many orders of magnitude more data than labels. Most existing intelligent security techniques, including those that I have developed, are based on labeled data. This poses barriers to adopting these techniques in practice, because the evasive nature of security subjects such as malware makes the labeling process erroneous and laborious. In the next few years, I plan to address the problem of scarce labeled data from three directions.
First, in the short term, I plan to leverage multiple sources of data to complement each other when labels in one or more data sources are missing. My existing work already extracts program behaviors from different perspectives by analyzing multiple types of artifacts, including static code information, dynamic execution information, textual information such as app descriptions and API documents, and graphical information such as user interfaces. My next goal is to address security decision-making in such security systems using techniques of multimodal representation learning. My existing work already builds modality-specific approaches for each data modality (i.e., each type of data). Based on these modality-specific approaches, I plan to learn joint representations that are shared across multiple modalities. The first essential step is to develop techniques of customized graph embedding to convert program structures such as control-flow graphs or call graphs into high-dimensional vectors.
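As a deliberately simple illustration of mapping a program graph to vectors, the sketch below computes a Laplacian-eigenmap embedding of a call graph's adjacency matrix; the customized embeddings mentioned above would be tailored to program semantics, so this is only a stand-in for the general idea, and all names are mine.

```python
import numpy as np

def spectral_embedding(adjacency, dim=2):
    """Embed each graph node as a `dim`-dimensional vector using the
    eigenvectors of the (symmetrized) graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    A = (A + A.T) / 2.0                    # symmetrize the call graph
    L = np.diag(A.sum(axis=1)) - A         # unnormalized Laplacian
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:dim + 1]           # skip the trivial constant eigenvector

# toy call graph on 4 functions: 0->1, 1->2, 2->3, 3->0
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])
vectors = spectral_embedding(A, dim=2)     # one 2-d vector per function
```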
Second, in the mid term, I plan to tackle the problem by considering the inverse problem of analyzing data: generating data. Many fields have achieved a certain level of success in generating data (e.g., generating images, synthesizing small programs) by using generative models, which do not require any labeled data. I believe that an even more practical use of generative models is to analyze data by learning to generate the data. Through learning to generate a type of data, generative models learn to understand the properties of the data. For example, by generating evasive malware, a generative model forms a relatively complete notion of what makes malware evasive. Missing any important characteristic of the malware would make the model generate non-evasive malware or even non-malicious apps. An analogy is the generative adversarial network (GAN): existing research typically aims to improve the performance of the generator, whereas our focus here is to improve the classifier as the generator becomes more effective.
Last, in the long term, I plan to explore the possibility of using transfer learning to store knowledge gained in solving tasks with labeled data and apply such knowledge to problems where labeled data are not available. For example, data on vulnerabilities and attacks are abundant for mobile apps, while such information is lacking for Internet of Things (IoT) apps. I plan to investigate ways to transfer the knowledge learned from mobile apps and apply it to IoT apps. We have already taken a first step in this direction by transferring knowledge about inter-component vulnerabilities in Android apps towards inter-rule vulnerabilities in IoT apps [U5].
Testing (deep) learning-based security systems. A natural extension of my current research is to test (deep) learning-based security systems. I plan to advance this direction from three different perspectives.
First, in the short term, I plan to continue working on adversarial testing of learning-based security systems. In addition to generating adversarial-testing inputs, I plan to exploit other potential vulnerabilities, such as the privacy vulnerabilities in learning-based security systems. In one of my ongoing projects, we develop a metamodel [U4] that can infer the private properties of the training data of certain black-box machine learning models. Although this issue is not unique to security systems, the consequences of exposing such private properties of the security data would be disastrous. For example, for an anti-virus system, if the configuration of the sandboxing can be inferred from the model and the training data, adversaries can devise malware to specifically evade the detection of the sandboxing. I would like to investigate the potential exposures of such private properties in the testing phase of security systems and develop techniques to mitigate them.
Second, in the mid term, I plan to propose and evaluate new testing metrics for learning-based security systems. Traditional testing metrics (e.g., statement, branch, and path coverage) do not work when testing learning-based systems because the core logic of learning-based systems is embedded in the machine learning models (i.e., arithmetic operations of formulas) instead of control-flow statements in traditional software. Prior work proposes neuron coverage (i.e., the ratio of the number of distinct activated neurons to the total number of neurons in the neural network) as the testing metric. There is one key limitation in this testing metric: it interprets neuron activation as a positive indicator of testing effectiveness, while it is only a status of the respective neuron. I plan to propose combinations of neuron activations and manifolds of training data as testing metrics [5]. Lower-dimensional manifolds are good models for many data-related tasks whose data points might lie in very high-dimensional spaces. Programs can reject inputs that are far away from natural manifolds because these inputs are well beyond the confident regions of the ML model under testing. This also indicates that manifolds can tell whether testing inputs are meaningful to the ML model under testing (i.e., reaching the deeper logic of the model).
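A minimal sketch of the neuron-coverage metric mentioned above, for a toy fully connected ReLU network (the network and threshold are stand-ins, not any particular security model):

```python
import numpy as np

def neuron_coverage(weights, biases, inputs, threshold=0.0):
    """Fraction of distinct neurons activated (output above `threshold`)
    by at least one of the given test inputs."""
    activated = []
    for x in inputs:
        h = np.asarray(x, dtype=float)
        per_layer = []
        for W, b in zip(weights, biases):
            h = np.maximum(W @ h + b, 0.0)      # ReLU layer
            per_layer.append(h > threshold)
        activated.append(np.concatenate(per_layer))
    covered = np.any(np.vstack(activated), axis=0)
    return float(covered.mean())

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]   # 2-layer toy net
biases = [np.zeros(4), np.zeros(2)]
inputs = rng.normal(size=(3, 3))                               # three test inputs
print(neuron_coverage(weights, biases, inputs))
```

The limitation discussed above is visible here: the metric records only whether a neuron ever crosses the threshold, not whether the test inputs lie anywhere near the training-data manifold.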
Last, in the long term, I plan to generate training/testing environments identical to the real-world environments for lab training/testing of safety- and security-critical systems. The training and testing of such systems usually involve techniques (such as reinforcement learning) that need to experience a huge number of failures before a correct model is learned. It is too expensive to develop such a model directly in real-world environments. I plan to develop a generative technique that can generate a model of environments capable of predicting several steps into the future (e.g., how other cars would respond to a stop sign). With this model, I can then perform fuzz testing to mutate various elements in the environments to see how the system under testing responds.
4 References
Refereed Conference Papers
Refereed Journal Articles & Workshops
Under Review / In Preparation
Patent
Declarative Process Mining for DCR Graphs
Debois, Søren; Hildebrandt, Thomas T.; Laursen, Paw Høvsgaard; Ulrik, Kenneth Ry
Published in:
Proceedings of the Symposium on Applied Computing
DOI:
10.1145/3019612.3019622
Publication date:
2017
Document version
Publisher’s PDF, also known as Version of record
Citation for published version (APA):
Declarative Process Mining for DCR Graphs
Søren Debois
IT University of Copenhagen
Copenhagen, Denmark
debois@itu.dk
Thomas T. Hildebrandt
IT University of Copenhagen
Copenhagen, Denmark
hilde@itu.dk
Paw Høvsgaard Laursen
IT University of Copenhagen
Copenhagen, Denmark
pawh@itu.dk
Kenneth Ry Ulrik
IT University of Copenhagen
Copenhagen, Denmark
kulr@itu.dk
ABSTRACT
We investigate process mining for the declarative Dynamic Condition Response (DCR) graphs process modelling language. We contribute (a) a process mining algorithm for DCR graphs, (b) a proposal for a set of metrics quantifying output model quality, and (c) a preliminary example-based comparison with the Declare Maps Miner. The algorithm takes a contradiction-based approach, that is, we initially assume that all possible constraints hold, subsequently removing constraints as they are observed to be violated by traces in the input log.
CCS Concepts
- Information systems → Data mining;
- Theory of computation → Logic;
- Computing methodologies → Knowledge representation and reasoning;
Keywords
Declarative process mining; DCR graphs
1. INTRODUCTION
Business process management (BPM) technologies [32] support the management and digitalisation of workflows and business processes by employing explicit process models, following a cycle of process (re)design, validation, execution and monitoring.
Process mining algorithms [31] have been proposed for the identification of process models from process logs, supporting both process design and compliance monitoring.
Most industrial BPM tools and process miners describe processes as imperative flow diagrams such as BPMN. However, flow diagrams tend to get either too rigid or too complex, in particular for knowledge work processes having a high degree of variation [27]. Moreover, flow diagrams only describe how to perform a process, leaving a gap to the legal regulations and guidelines, that are often more declarative in nature, describing why the process must be performed in certain ways, not how exactly it must be performed. For instance, a clinical guideline may state, that a patient must consent to a blood transfusion [13]. It does not state exactly when such consent should be obtained, only “prior to the transfusion”.
For this reason, it is recommended to use flow diagrams only for routine processes, or for describing common standard practices and allow deviations [27]. It has been advocated that declarative notations should be used as output of process mining (e.g. [17]) and for run-time process support (e.g. [24, 23, 28]). For the former, one hopes to extract from a process log the rules obeyed in practice (the “why”) as opposed to a flow-diagram describing the usual executions (the “how”). For the latter, one hopes to guide knowledge workers to activities in conformance with rules and regulations.
Implementation techniques for most declarative models such as Declare [26] and DecSerFlow [30], rely on translating the declarative constraints to an imperative model (e.g., an automaton [20]) to enable execution. Such translation usually entail a state-space explosion, and run-time adaptation of constraints becomes more difficult, because the automaton must be recomputed when constraints change.
A notable exception is the Dynamic Condition Response (DCR) graphs process language [11, 29]. DCR graphs can be executed without an intermediate transformation to an imperative model that creates the entire transition graph, and they more directly support run-time adaptive case management [23, 5]. DCR graphs are supported by industrial design and case management tools (see e.g. dcrgraphs.net and [5]).
In the present paper, we present the first process mining algorithm for DCR graphs.
2. DCR GRAPHS
In this Section, we briefly recall DCR graphs. For a formal introduction and applications, refer to [11, 29, 3, 5].
Dynamic Condition Response graphs is a declarative modelling notation describing at the same time a process and its run-time state. The core notation comprises activities, activity states, and four relations between activities. An activity state comprises three booleans, indicating respectively whether the activity has been executed, is included, and
is pending. Intuitively, activities that are not included are treated as temporarily absent from the workflow; activities that are pending must eventually be executed or excluded before the workflow may complete.
Relations between activities govern whether an activity can currently be executed and how executing one activity modifies the state of another. A condition \( A \rightarrow B \) means that the activity \( A \) cannot execute unless \( B \) was previously executed, i.e., the executed-state of \( B \) is true. Executing an activity clears its pending-state and sets its executed-state. The response \( A \leftarrow B \) means that whenever \( A \) executes, the pending-state of \( B \) is set. An inclusion \( A \rightarrow+ B \) means that whenever \( A \) executes, the inclusion-state of \( B \) is set, and conversely, an exclusion \( A \rightarrow% B \) means that whenever \( A \) is executed, the inclusion-state of \( B \) is cleared.
Note that excluding an activity voids it as both a condition and a response: If \( A \leftarrow B \) and \( B \) is not executed but also not included, \( A \) is free to execute. Conversely, an activity which is pending but also not included does not prevent the workflow from being completed.
While the condition and response relations have the same meaning as the corresponding relations in DECLARE [25] or DecSerFlow [30], the inclusion and exclusion relations provide the ability to dynamically include and remove conditions and response obligations. They have no direct counterpart in other declarative notations.
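To make the semantics above concrete, here is a minimal executable sketch (ours, in Python, with illustrative names) of a DCR graph state and its execution rules; it covers only the core notation recalled in this section.

```python
from dataclasses import dataclass, field

@dataclass
class DCRGraph:
    activities: set
    executed: set = field(default_factory=set)
    included: set = field(default_factory=set)
    pending: set = field(default_factory=set)
    conditions: dict = field(default_factory=dict)  # a -> {b}: a needs b executed (unless b is excluded)
    responses: dict = field(default_factory=dict)   # a -> {b}: executing a makes b pending
    includes: dict = field(default_factory=dict)    # a -> {b}: executing a includes b
    excludes: dict = field(default_factory=dict)    # a -> {b}: executing a excludes b

    def enabled(self, a):
        if a not in self.included:
            return False
        # every condition of a must be executed or currently excluded
        return all(b in self.executed or b not in self.included
                   for b in self.conditions.get(a, ()))

    def execute(self, a):
        assert self.enabled(a), f"{a} is not enabled"
        self.executed.add(a)
        self.pending.discard(a)
        self.pending |= self.responses.get(a, set())
        self.included |= self.includes.get(a, set())
        self.included -= self.excludes.get(a, set())

    def accepting(self):
        # a run may complete when no included activity is still pending
        return not (self.pending & self.included)

# toy graph: "pay" is a condition for "ship"; "cancel" excludes "ship"
g = DCRGraph(activities={"pay", "ship", "cancel"},
             included={"pay", "ship", "cancel"},
             pending={"ship"},
             conditions={"ship": {"pay"}},
             excludes={"cancel": {"ship"}})
g.execute("pay")
g.execute("ship")
print(g.accepting())   # True: nothing included is still pending
```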
3. MODEL METRICS
In this section, we present quality measures quantifying the appropriateness of a DCR graph \( G \) for a given log \( l \). We take as starting point the already established metrics of fitness, precision, generality, and simplicity introduced in [1] in the context of (internally binary) process trees.
3.1 Fitness
Replay fitness is defined in [1] as the normalised ratio of how an alignment between the input process tree and the event log differs over the maximum possible alignment for the model given an arbitrary event log. A variant of this approach was successfully applied to declarative models in [2].
However, within Adaptive Case Management, the core application area of DCR graphs [3], [5], [6], we use declarative models specifically to encompass all admissible behaviours. In this context, we take the view that the appropriate notion of "replay fitness" is simply the ability of the model to replay the traces of the input log exactly. As such, we define fitness to be simply the ratio of input traces in the log \( l \) replayable by the DCR model \( G \):
\[
\text{fitness}(G, l) = \frac{\#\text{ReplayableTraces}(G, l)}{\#\text{Traces}(l)}
\]
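Computed directly from this definition, fitness only needs a per-trace replay predicate (for instance, one built on an executor like the sketch in Section 2); the helper below is a small, self-contained illustration with names of our choosing.

```python
def fitness(replayable, log):
    """Fraction of traces in `log` that the mined model replays exactly.
    `replayable(trace)` should return True iff the model accepts the trace."""
    traces = list(log)
    if not traces:
        return 1.0
    return sum(1 for t in traces if replayable(t)) / len(traces)
```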
3.2 Precision
Precision is defined in [1] essentially as a tally of the amount of behavioural options unused by the log. This idea is straightforward to apply to DCR graphs: replay the log and record, for each reached state in the graph, the activities that are executable in that state as well as how many of these executable activities were actually executed at some point.
We transfer this idea directly to DCR graphs, measuring for each visited state the number of enabled activities actually executed in that state:
\[
\text{precision}(G, l) = \frac{\sum_{s \in \text{VisitedStates}(l)} \#\text{ActivitiesExecuted}(G, s, l)}{\sum_{s \in \text{VisitedStates}(l)} \#\text{ExecutableActivities}(G, s, l)}
\]
As a technical note, "#ActivitiesExecuted" is only counted the first time an activity is seen executed in a certain state. If an activity is observed to be executed from the same state multiple times, we count only one execution.

However, we question the usefulness of this measure in the context of Adaptive Case Management. One advantage of declarative models in this context is that they afford flexibility for case workers to handle infrequent outlier cases. By definition, these happen only seldom; we cannot expect all such cases to be represented in the input log. Encompassing them, then, entails supporting a very large number of potential outlier cases. So it would be the expectation and not the exception that a log uses only a tiny fragment of the options available in the model.
This thinking was confirmed in [9], where a commercial system based on DCR graphs supported at least five orders of magnitude more states than observed in actual logs.
3.3 Simplicity
Simplicity for process trees is defined in [4] (roughly) as the ratio of the size of the internal binary process tree to the number of activities in the input log. This notion of simplicity was partly motivated by previous findings that size is the main driver of errors in process models [21].

However, these findings have, to the best of our knowledge, not been replicated in the context of declarative process models [9], [10], [33], where key impediments to understandability appear to be the number of constraints as opposed to the number of activities. Moreover, the number of activities in a DCR graph is not a proxy for semantic complexity in the way that duplicate activity representation is in a process tree; large graphs are not necessarily complex.

Accordingly, we measure the simplicity of a DCR graph by (1) the number of pairs of related activities (Relation Pairs: RP); (2) the total number of relations. Note that (2) is greater than (1) when some activities are related by more than one relation. Under this measure, a simplest possible graph is any graph with no relations.
\[
\text{simplicity}(G) = \left(1 - \frac{\#\text{Relations}}{\#\text{PossibleRelations}}\right) + \left(1 - \frac{\#\text{RPs}}{\#\text{PossibleRPs}}\right)
\]
Note that because the number of activities in a declarative model is not necessarily correlated with its complexity, in contrast to [4], we can define simplicity without reference to the events in the particular log \( l \).
We have ignored in this measure (and in this paper) complexity of DCR graphs stemming from nesting [12]. While nesting generally enhance perceived understandability (see, e.g., [34], [35]), it may also implicitly introduce more relations. We leave open the question of exactly what a good measure of simplicity in the presence of nesting might be.
3.4 Generality
The notion of generality is defined in [4] for process trees as the frequency with which each node of the process tree must be visited in order to produce the given log. Infrequently visited nodes of the process tree decrease generality.
This notion is specific to the notion of process trees and, to a lesser extent, imperative models. DCR graphs have no notion resembling the “inner nodes” of a process tree that can be considered “visited” during executions.
Moreover, generalisation is intended to assess “the extent to which the resulting model will be able to reproduce future behaviour” [1, p. 2]. This is an extremely important quality for both declarative models in general and ACM models in particular. However, we contend that it cannot reasonably be measured without appeal to domain-knowledge: We cannot from the logs alone determine which are useful generalisations (e.g., swapping the order of obtaining authorisation signatures in a loan application) and which are not (e.g., swapping the order of granting the loan and obtaining authorisation).
Altogether, we leave the definition of a notion of generalisation for DCR graphs as future work.
4. DCR MINING
In this Section we present a mining algorithm for DCR graphs: given a log \( l \), produce a DCR graph \( G \). We take a "contradiction-based" approach to mining for constraint-based modelling languages: begin with the set of activities and all possible constraints, and remove a constraint whenever the input log has a trace violating it. This approach has proven successful for DECLARE [8, 16, 2, 18], although it requires non-trivial enhancements to curb the combinatorial explosion caused by the large number of possible DECLARE constraints, to avoid contradictory models [7], and to avoid unhelpful vacuously satisfied constraints [10]. DCR graphs have only 4 relations; checking those for each pair of activities across all input traces is a viable option.
Because include relations by definition trump exclude relations, we do not take as starting point a graph with every possible constraint. Rather, in the interest of beginning with the most restrictive possible graph, we retain exclusions and omit inclusions. Altogether, our initial, restricted over-approximation will have conditions, exclusions, and responses between any pair of events.
In DCR, we have to account not only for constraints, but also initial state. Following the principle behind contradiction-based mining, we opt for the most restrictive possible starting state: each activity is initially not executed, not included, and pending.
4.1 Algorithm
The core mining algorithm is given in Algorithm 1. We comment on specifics below. In the algorithm, for a given trace t, we write t_0, t_1, . . . for the sequence of activities in t.
Include- and exclude-relations. When we observe an activity at the start of a trace, we set the initial included-state of that activity to true. When an activity is observed after the start of a trace, we replace the exclude-relation from the preceding event with an include, again to allow the two activities to be executed in succession.
Response relations. At the completion of a trace, we check that for each activity execution in that trace whether all the activities that had response relations installed have been executed later in the trace. If not, we remove the offending response relations. Moreover, we clear the initial pending-state for all activities not seen in that trace.
The latter of these rules is an over-approximation; pending activities may be discharged either by execution or by being excluded. We make the present choice partly to make an initially-pending state signal that the activity has to be executed in any and all traces, not just excluded, partly to facilitate dynamic mining, see Section 4.2.
Condition relations. When we observe an activity execution in a trace, we remove conditions from non-executed activities in the trace in question.
Algorithm 1 Core DCR mining algorithm
```
 1: function Mine(log)
 2:   G := activities(log)
 3:   for all x where x activity of G do
 4:     set x excluded, pending, not executed in G
 5:   end for
 6:   for all (x, y) where x, y activities of G do
 7:     G := G ∪ { x → y, x ← y, x →% y }      ▷ condition, response, exclusion
 8:   end for
 9:   for all t ∈ traces(log) do
10:     set t_0 included in G
11:     remove all conditions to t_0 from G
12:   end for
13:   for all t ∈ traces(log) do
14:     p := t_0
15:     for i from 1 to |t| - 1 do
16:       remove p →% t_i from G
17:       add p →+ t_i to G
18:       for all x where t_i → x ∈ G do        ▷ conditions of t_i
19:         if x ∉ { t_j | j < i } then
20:           remove t_i → x from G
21:         end if
22:       end for
23:       for all x where t_i ← x ∈ G do        ▷ responses of t_i
24:         if x ∉ { t_j | j > i } then
25:           remove t_i ← x from G
26:         end if
27:       end for
28:       p := t_i
29:     end for
30:     for all a ∉ t do
31:       set a not pending in G
32:     end for
33:   end for
34:   return G
35: end function
```
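For illustration only, the snippet below is a heavily simplified, hedged Python sketch of the contradiction-based idea behind Algorithm 1, restricted to condition and response constraints; the include/exclude handling and the initial-state mining of the full algorithm are deliberately omitted, and all names are ours.

```python
def mine_simplified(log):
    """Start with every possible condition and response between distinct
    activities and drop a constraint as soon as some trace violates it."""
    activities = {a for trace in log for a in trace}
    # conditions[a] = activities executed before *every* occurrence of a
    conditions = {a: activities - {a} for a in activities}
    # responses[a] = activities occurring after *every* occurrence of a
    responses = {a: activities - {a} for a in activities}
    for trace in log:
        seen = set()
        for i, a in enumerate(trace):
            conditions[a] &= seen               # violated conditions are removed
            responses[a] &= set(trace[i + 1:])  # undischarged responses are removed
            seen.add(a)
    return conditions, responses

log = ["ABE", "ACE", "ABFABE"]
conds, resps = mine_simplified(log)
print(conds["B"])   # {'A'}: in this log, B is always preceded by A
```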
4.2 Correctness
Removal of a DCR constraint in general does not preserve admissibility of workflows. Here are two counterexamples:
1. The graph \( A \rightarrow+ B \) where B is initially excluded admits the traces \( A^* + A(A \mid B)^* \). Removing the inclusion relation reduces the set of admitted traces to \( A^* \).
2. The graph \( \{ B \leftarrow A,\; C \rightarrow% A \} \) admits the trace \( CB \); removing \( C \rightarrow% A \) makes that trace inadmissible.
This non-monotonicity is a central difference between DCR and DECLARE; it was studied in [4]. It follows that in a naive DCR-miner, whenever we remove a constraint, we
must re-check all previously processed traces to ensure that they are still admissible. Such a naïve approach would lead to practically unacceptable running-times.
Our mining algorithm rests on the observation that these two examples exemplify the only two ways removing a constraint from a DCR graph may reduce its set of accepted traces; Algorithm 1 does not remove such constraints when it is dangerous to do so.
We use this insight to prove Algorithm 1 correct. Write \( G \models t \) if a DCR graph \( G \) accepts a trace \( t \); write \( L(G) \) for the set \( \{ t \mid G \models t \} \).
**Proposition 4.1.** Let \( G \) be a label-deterministic DCR graph, and let \( G' \) be the DCR graph obtained by removing a single constraint \( \gamma \) from \( G \). Suppose \( t \) is a trace s.t. \( G \models t \). Then \( G' \not\models t \) implies either

1. \( \gamma = A \rightarrow+ B \) for some \( A, B \), or

2. \( \gamma = A \rightarrow% B \) and \( C \rightarrow B \) for some \( A, B, C \), and there exists \( i \) s.t. \( t_i = C \) but for no \( j < i \) do we have \( t_j = B \).

**Proof.** Suppose \( G' \not\models t \). We proceed by cases on \( \gamma \). If \( \gamma \) is a condition or a response, clearly \( L(G) \subseteq L(G') \); contradiction. If \( \gamma \) is an inclusion we are done. So suppose finally \( \gamma = A \rightarrow% B \). If for no \( C \) we have \( C \rightarrow B \), it follows easily that \( L(G) \subseteq L(G') \), so we must have \( C \rightarrow B \) for some \( C \). Suppose for a contradiction that for all such \( C \), we have for all \( t_i \) either \( t_i \neq C \) or \( t_i = C \) and for some \( j < i \) we have \( t_j = B \). In either case, it is straightforward to prove by induction on \( t \) that \( G' \models t \); contradiction.
**Lemma 4.2.** Let \( G \) be the value of \( G \) at line 16 and \( G' \) the value of \( G \) at line 28 in Algorithm 1 within the same iteration of the loop. Then for every trace \( t \), \( G \models t \Rightarrow G' \models t \).
**Proof.** For the removed relations, by Proposition 4.1, it is sufficient to verify that we remove no constraint satisfying Items 1 and 2 of that theorem. By inspection, Algorithm 1 does not remove inclusions, and so cannot violate Item 1. By inspection, we see that when the algorithm removes an exclusion (line 16) it also removes conditions that would violate Item 2 (lines 18-22).
For the added inclusion at line 17, it is sufficient to note that adding inclusion may only lead to inadmissible traces if it includes a left-hand side of a condition; however, by line 18-22 only conditions that were executed are retained.
**Theorem 4.3 (Correctness).** Let \( G \) be the output of Algorithm 1 on a log \( l \). Then \( G \models t \) for all \( t \in l \).
**Proof.** Using Lemma 4.2, it is straightforward to verify by induction on each \( t \in l \) that \( t_i \) was enabled after \( t_{i-1} \) in \( G \) at line 28, and that \( G \) is accepting for \( t_{[i-1]} \) at line 32.
### 4.3 Weighing of constraints
Algorithm 1 does not take into account noise in the log, since we remove every violated constraint. Moreover, in some applications, we may desire not a completely fitting model, but rather one that characterises the "common execution": We may want to trade off fitness for precision.
Following common approaches to process mining, we only remove a relation when our confidence in removing that constraint is above a certain threshold. Each constraint is therefore assigned two values: an invocation counter and a violation counter. The invocation counter tallies the number of traces in which the constraint was invoked, e.g., the number of traces where the source activity of an exclude-relation was executed. The violation counter simply tallies the number of traces in which the constraint was violated.
Exact criteria for invocations and violations are given in Table 1. The ratio of violations to invocations define our confidence in the removal of a constraint. A threshold below 0% will remove all constraints, resulting in a flower model. A threshold of exactly 0% retains only constraints satisfied in every trace. A threshold of 100% will remove no constraints; the output model will allow no runs.
Experimentally, the desired trade-off between precision and fitness occurs in the 0-15% range. A threshold larger than 20% would result in a large amount of the log being unsupported by the resulting graph.
### 5. EXPERIMENTAL RESULTS
An implementation of Algorithm 1 with rudimentary redundancy removal is available at [14]. For an experimental comparison with the Declare Maps Miner, consider the log in Table 2. For the sake of clarity, the log consists of only ten traces and is based on a relatively simple regular expression. For a larger log, see the on-line results at [15].
The test log represents a basic process flow; parallels may be drawn to a real-world process where A is registration for an exam, B, C and D are answers to a multiple-choice question, and the student either passes (E) or fails (F). Failed students may retry if they wish, but if they pass, they can no longer re-take the exam.
Given the sample log, our algorithm, with a constraint-violation threshold of 15%, returns the DCR graph depicted in Figure 1. Because the log contains only a single occurrence of A followed by D, the exclusion constraint between them remains intact: one violation in ten traces does not yield a sufficiently high violation percentage (10% < 15%). Thus, as no other activity includes D, it is removed entirely from the result graph as part of redundancy removal.

The removal of activity D means that the trace A→D→F is no longer allowed, leaving the fitness measure at 90%. This is, however, an acceptable trade-off: precision increases from 72.73% (the value if the threshold were set below 10%) to the final 78.57%, bringing the two measures closer to each other. The main cause of this effect on the precision measure is the observed state space that the execution of D involved. This, along with the fact that paths involving executions of B and C are quite well-traversed, results in a slightly higher final precision measure.
Table 1: Threshold-dependant constraint removal
<table>
<thead>
<tr>
<th>Constraint</th>
<th>Invocation</th>
<th>Violation</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Excluded-state</td>
<td>Each trace</td>
<td>A is first in a Trace</td>
<td>A is Included</td>
</tr>
<tr>
<td>Exclude-relation</td>
<td>A is executed in a trace</td>
<td>B executed immediately after A</td>
<td>A \rightarrow B exclude is changed to include</td>
</tr>
<tr>
<td>Condition-relation</td>
<td>B is executed in a trace</td>
<td>A is not executed before B</td>
<td>A \rightarrow B condition is removed</td>
</tr>
<tr>
<td>Response-relation</td>
<td>A is executed in a trace</td>
<td>B is not executed after A</td>
<td>A \rightarrow B response is removed</td>
</tr>
<tr>
<td>Pending-state</td>
<td>Each trace</td>
<td>A is not executed</td>
<td>A is not Pending</td>
</tr>
</tbody>
</table>
Table 2: Example log. Follows the regular expression (A(B+|C|D|F))*(A(B+|C|D|E))?
<table>
<thead>
<tr>
<th>#</th>
<th>Trace</th>
<th>#</th>
<th>Trace</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>ABE</td><td>6</td><td>ACF</td></tr>
<tr><td>2</td><td>ACFABBFF</td><td>7</td><td>ABFACFACE</td></tr>
<tr><td>3</td><td>ACE</td><td>8</td><td>ABBBF</td></tr>
<tr><td>4</td><td>ADF</td><td>9</td><td>ABBE</td></tr>
<tr><td>5</td><td>ABFABE</td><td>10</td><td>ACFACE</td></tr>
</tbody>
</table>
5.1 Result comparison: Declare Maps Miner
For comparison, we show the result graph of running the Declare Maps Miner [18] on the same log (Table 2) in Figure 2. The result is computed using a Declare Maps Miner support of 85%, i.e., any constraint must be supported by at least 85% of traces. This corresponds to the constraint-violation threshold of 15% used by our algorithm above, as the contradiction-based method uses the threshold to tell when to remove a constraint, while Declare uses support to tell when to include a constraint.
- In the Declare model a trace must begin with A, followed by either B or C and then possibly ending in E, after which it is not permitted to go back to A.
- If C is chosen after A, it is also possible to continue to F, instead of E, and then possible to return to A and start over.
- If B is chosen instead of C, it is then not possible to choose F, despite four instances of this succession in the log.
- The exclusive choice constraint between A and D, combined with A being the initial activity, means that it is not possible to ever execute D. This is similar to the DCR miner never including D.
- Additionally, the Declare model does not have a terminal state. If E is executed, A and F cannot subsequently occur, but the same does not seem to apply for B and C. Thus, these three can be executed arbitrarily after E, even though all traces in the log end in E.
This last point marks the primary difference between the two resulting models. Overall, the results seem to suggest that our miner is slightly better in terms of closely reflecting the underlying process of the test log (its regular expression). We emphasise that these results are only for this single, simple example, and may not necessarily generalise.
6. CONCLUSIONS
We have presented the first process mining algorithm for DCR graphs and a set of metrics quantifying output model quality. The algorithm has been implemented and a preliminary example-based comparison with the Declare Maps Miner has been carried out. We plan as future work to extend the evaluation and use of the algorithm to real-time distributed process mining.
7. REFERENCES
Lecture 8: Extensible Markup Language (XML)
Motivation
- Applications consume and transfer data
- Software libraries require files
- Communication between online services
- How to represent such data usefully?
- Option 1: every app defines its own syntax
- Done by, e.g., common UNIX programs
- Requires specialized language design and parsers
- Option 2: use a common "extensible" syntax
- But which one? The relational model?
- Allows for reuse, but often involves challenges: proper decomposition, nulls due to fixed attributes, etc.
- Translation into relations might be an issue
What is XML?
- Depending on who you’re asking
- **Answer 1:** Rich documents that enrich text with markup
- Markup captures mainly formatting, meta data (e.g., title) and links
- **Answer 2:** A hierarchical data model
- Elegantly generalizes the relational model, object model
- Most prominent model of semistructured data
XML Document: Relations vs. XML
<table>
<thead>
<tr>
<th>Number</th>
<th>Tag</th>
<th>Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>header</td>
<td>Haifa</td>
</tr>
<tr>
<td>2</td>
<td>normal</td>
<td>Technion, the...</td>
</tr>
<tr>
<td>3</td>
<td>link</td>
<td>Israel, Space</td>
</tr>
</tbody>
</table>
Objects: Relations vs. XML
```
<faculty name="CS" building="Taub">
<member name="Orna Grumberg">
<office>Taub 630</office>
<phone>4327</phone>
</member>
<member name="Irad Yavneh">
<office>Taub 618, Taub 537</office>
<phone>4261, 4262</phone>
</member>
</faculty>
```
Nesting Provides Flexibility
```
<person>
<name>Lisa Simpson</name>
<tel>02-828-1234</tel>
<tel>054-470-7777</tel>
<email>lisa@cs.huji.ac.il</email>
</person>
```
```
<addresses>
  <person> ... </person>
  ...
  <person> ... </person>
</addresses>
```
- A `<person>` element plays the role of a record (tuple)
- The `<addresses>` element plays the role of a list (relation)
Standardization Organizations
- **ISO**
- International Organization for Standardization
- Founded in 1947 to promote global commerce
- In fact, UN backed reform of the 1926 "ISA"
- Representatives from 162 countries
- **W3C**
- World Wide Web Consortium
- International standardization for the Web
- Founded in 1994, by Tim Berners-Lee, supported by European Commission, DARPA, MIT
- Berners-Lee is still heading W3C
- Sponsored by industrial companies
- Offices all around the world
XML History
- **1986**: SGML ISO standard for sharing documentation readable by machines
- Stands for Standard Generalized Markup Language
- Considered highly complicated, expensive to support
- Extensible data model
- Can be extended to many special cases using schemas
- **1991**: Tim Berners-Lee proposes the first version of HTML as an instantiation of SGML
- Much simpler than SGML; restricted to Web pages
- **1998**: XML 1.0 released by W3C
- Extensible and clean like SGML, but things stripped off to get the simplicity of HTML
XML vs. HTML
<table>
<thead>
<tr>
<th>SGML</th>
<th>XML</th>
<th>HTML</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fixed set of tags</td>
<td>Definable set of tags</td>
<td>Tags imply visual layout</td>
</tr>
<tr>
<td>No visual association</td>
<td>Rigid format</td>
<td>Uneven format</td>
</tr>
</tbody>
</table>
More XML-Based Technologies
- RDF (format for the Semantic Web)
- WSDL (Web-service protocol)
- SOAP (object communication)
- RSS (Web-feed format)
- SVG (graphics)
- MathML (format for math editing)
More XML-Based Formats
- Application Vulnerability Description Language (AVDL)
- Basic Identity Management System (BIMS)
- Biometric Identity Technology Secretariat (BITS)
- CXML (Content eXtensible Metadata Language)
- Commercial Markup Language (CML) for Modular Instructional Materials
- Commercial Product Data Markup Language (CPML)
- Computer-Based eXtensible Markup Language (eXeML)
- Critical Access Control Markup Language (CACML)
- Financial Exchange (FX)
- Financial Information exchange protocol (FIP)
- Financial Products Markup Language (FPML)
- Genealogical Data Communication (GDC)
- Geographical Markup Language (GML)
- Global Justice's Justice XML Data Dictionary (JXDD)
- Human Resources Background Check and Payroll Deductions Language (HRXML)
- Product Data Markup Language (PDMX)
- Schools Interoperability Framework (SIF)
- Telecommunications Interchange Markup (TIM)
- The Text Encoding Initiative (TEI)
- Windows Rights Management Services (WRMS) by Microsoft
- XML Common Business Format (xCBL)
- XML Process Definition Language (XPDL) for workflow management
- YANG data modeling language [http://www.yang-central.org/twiki/bin/view/Main/WebHome](http://www.yang-central.org/twiki/bin/view/Main/WebHome)
**Related Standards**
- XML Schemas strengthen typing & schema capabilities (compared to built-in DTDs)
- XPath is a language for querying and accessing XML elements
- XSLT is a language for transforming XML documents into other XML documents
- Including XHTML for displaying XML files
- XQuery is a query language for XML
- XLink and XPointer provide rich support for cross-references among XML docs/elements
**Outline**
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
**XML Components**
- XML declaration
- Document Type Definition (DTD)
- Defines a schema
- What sequences of elements can each element have as children?
- For a given element name, which attributes are required? allowed?
- We will study DTD in depth later
- Can be:
- Internal (inside the XML document) or
- External (in an external URL)
**XML Declaration**
```xml
<?xml version="1.0" standalone="yes/no" encoding="enc"?>
```
- With standalone="no" we mean that we allow an external DTD
  - Default is "no"
- Default encoding is UTF-8
  - Good for Arabic, Armenian, Cyrillic, Greek, Hebrew, Latin, ...
- The entire declaration is optional
  - But it is pretty conventional to include it
**Internal DTD Example (w3schools.com)**
```xml
<?xml version="1.0"?>
<!DOCTYPE note [
  <!ELEMENT note (to, from, heading, body)>
  <!ELEMENT to (#PCDATA)>
  <!ELEMENT from (#PCDATA)>
  <!ELEMENT heading (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
]>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don’t forget me this weekend</body>
</note>
```
### External DTD Example
```xml
<?xml version="1.0"?><!DOCTYPE countries SYSTEM "world.dtd">
<countries>
<country name="Israel">
<name>Israel</name>
<year>2001</year>
<population>6199008</population>
<city name="Ashdod">60424213</city>
</country>
<country name="France">
<name>France</name>
<year>2001</year>
<city name="Paris">21182785</city>
<city name="Nice">329800</city>
</country>
</countries>
```
### XML Elements
- **Structure:**
- Opening tag: `<name attr1="v1" ... attrn="vn">`
- Closing tag: `</name>`
- **Proper nesting is required**
- `proper-nesting := <tag ...> proper-nesting </tag>`
- Example of illegal XML: `<b><i>bob</b> hello</i>`
- (Web browsers will accept it as legal HTML)
- **Useful abbreviation for empty elements:**
- `<e .../>` as shorthand for `<e ...></e>`
- Examples in XHTML: `<br/>`
- **The entire document must be nested within a single element, called the root element**
### Attributes
- **Restriction:** An element cannot have two occurrences of the same attribute
- For example, this is not allowed:
```xml
<person name="bill" name="william"/>
```
- **Design:** not always clear whether an information item should be an element or an attribute
- `<country population="7M"/>`
- `<country><population>7M</population></country>`
- **An attribute should be an element if:**
- It has its own attributes (e.g., year)
- It has multiple values
### (Unparsed) CDATA
```xml
<message>
<head>
Entering a Kennel Club Member
</head>
<description>
Enter the member by the name on his or her paper. Use the NAME tag. The NAME tag has two attributes. Common (all in lowercase, please!) is the dog's call name. Breed (also in lowercase) is the dog's breed. Please see the breed reference guide for acceptable breeds. Your entry should look something like this:
</description>
<example>
<![CDATA[<NAME common="freddy" breed="springer-spaniel">Sir Fredrick of Ledyard's End</NAME>]]>
</example>
</message>
```
### XML Must be Well Formed
- An XML document is well-formed if
- Tags are syntactically correct
- Every start tag has an end tag
- Tags are properly nested
- There is a root tag
- A start tag does not have two occurrences of the same attribute
- No forbidden characters
- When a DTD is specified, a document must be both well formed and valid
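As an aside (not part of the original slides), well-formedness is exactly what a generic XML parser checks; a minimal sketch using Python's standard library:

```python
# A minimal sketch: checking well-formedness with Python's standard library.
# The two sample documents are made up for illustration.
from xml.etree import ElementTree

well_formed = "<b><i>bob</i> hello</b>"   # properly nested
ill_formed  = "<b><i>bob</b> hello</i>"   # improperly nested

for doc in (well_formed, ill_formed):
    try:
        ElementTree.fromstring(doc)
        print("well formed:", doc)
    except ElementTree.ParseError as err:
        print("not well formed:", doc, "--", err)
```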
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
Motivation
- A DTD adds syntactic requirements in addition to the well-formed requirement
- Why is it useful?
- The usual "why schema" arguments
- Helps avoiding errors when creating/editing XML
- Facilitates communication via XML
- Allows processing programs to make assumptions
- Default attribute values
- Macros for constants/includes (entities)
Example: An Address Book
```xml
<person>
<name> Homer Simpson </name>              <!-- Exactly one name per person -->
<greet> Dr. H. Simpson </greet>           <!-- At most one greeting -->
<addr> 1234 Springwater Road </addr>      <!-- As many address lines as needed (in order) -->
<tel> (321) 786 2543 </tel>
<fax> (321) 786 2544 </fax>
<email> homer@math.springfield.edu </email> <!-- As many as needed -->
</person>
```
The Address Book DTD
```xml
<!DOCTYPE addressbook [
<!ELEMENT addressbook (person*)>
<!ELEMENT person (name, greet?, addr*, (fax | tel)*, email*)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT greet (#PCDATA)>
<!ELEMENT addr (#PCDATA)>
<!ELEMENT tel (#PCDATA)>
<!ELEMENT fax (#PCDATA)>
<!ELEMENT email (#PCDATA)>
]>
<addressbook>
<person>
<name> Homer Simpson </name>
<greet> Dr. H. Simpson </greet>
<addr> 1234 Springwater Road </addr>
<tel> (321) 786 2543 </tel>
<fax> (321) 786 2544 </fax>
<email> homer@math.springfield.edu </email>
</person>
</addressbook>
```
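As an aside (not part of the original slides), validity against a DTD can be checked programmatically. A minimal sketch using Python with the lxml library (assumed installed), with a DTD mirroring the address-book declarations above:

```python
# A minimal sketch: validating documents against a DTD with lxml.
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""
<!ELEMENT person (name, greet?, addr*, (fax | tel)*, email*)>
<!ELEMENT name (#PCDATA)> <!ELEMENT greet (#PCDATA)> <!ELEMENT addr (#PCDATA)>
<!ELEMENT tel (#PCDATA)>  <!ELEMENT fax (#PCDATA)>   <!ELEMENT email (#PCDATA)>
"""))

good = etree.fromstring(
    "<person><name>Homer Simpson</name><tel>(321) 786 2543</tel>"
    "<email>homer@math.springfield.edu</email></person>")
bad = etree.fromstring("<person><tel>(321) 786 2543</tel></person>")  # no <name>

print(dtd.validate(good))   # True: the children match the content model
print(dtd.validate(bad))    # False: <name> is required by the regular expression
```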
Countries DTD
```xml
<!DOCTYPE countries SYSTEM "world.dtd">
<countries>
<country continent="Asia">
<name> Israel </name>
<population year="2001"> 6199008 </population>
<city capital="yes">
<name> Jerusalem </name>
</city>
<city>
<name> Ashdod </name>
</city>
</country>
<country continent="Europe">
<name> France </name>
<population year="2004"> 60424213 </population>
</country>
</countries>
```
Forms of Element Definitions
- A regular expression
- (name, greet?, address*, (fax | tel)*, email*)
- EMPTY
- The element has no content
- Example: `<!ELEMENT br EMPTY>` (in XML: `<br/>`)
- ANY
- Mixture of PCDATA and elements defined in the DTD
- Mixed content
- (#PCDATA)
- (#PCDATA | address | name)*
- (#PCDATA | italic | bold)*
DTD Regular Expressions
<table>
<thead>
<tr>
<th>Format</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>name, tel, ...</td>
<td>Element name</td>
</tr>
<tr>
<td>e₁,e₂</td>
<td>e₁ followed by e₂</td>
</tr>
<tr>
<td>e*</td>
<td>Zero or more occurrences of e</td>
</tr>
<tr>
<td>e?</td>
<td>Zero or one occurrence of e</td>
</tr>
<tr>
<td>e+</td>
<td>One or more occurrences of e</td>
</tr>
<tr>
<td>e₁|e₂</td>
<td>e₁ or e₂</td>
</tr>
<tr>
<td>(e)</td>
<td>Grouping</td>
</tr>
</tbody>
</table>
Restriction on Regular Expressions
- DTD standard does not allow every regular expression (regex); only ones that can be “efficiently verified” in the following sense:
- We can determine whether a string s matches the regex by scanning s left to right; on every symbol we will know which regex symbol it matches without looking ahead in the string
- Such regex is called 1-unambiguous
- Example:
- (a|b)*,a is not 1-unambiguous
- b*,a(b*,a)* is 1-unambiguous
- Note: the two express the same language (string set)
Slightly More Precisely: Glushkov Automata
- **Glushkov automaton** of a regex [1961]:
- Preprocessing: replace each a+ with aa*
- State = symbol occurrence + init state
- Transition a→b whenever b is a possible follower of a in the left-to-right parse
- Accepting states = possible last symbols
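As an aside (not part of the original notes), the construction can be tried in code. The sketch below builds the Glushkov first/last/follow sets for a tiny hand-written regex AST (no parser, and the a+ preprocessing step is omitted because the AST has no '+' node) and uses them to test 1-unambiguity of the two example expressions from the previous slide:

```python
from itertools import count

# AST nodes: ('sym', a), ('cat', l, r), ('alt', l, r), ('star', e)

def annotate(node, counter):
    """Attach a unique position to every symbol occurrence."""
    kind = node[0]
    if kind == 'sym':
        return ('sym', node[1], next(counter))
    if kind in ('cat', 'alt'):
        return (kind, annotate(node[1], counter), annotate(node[2], counter))
    return ('star', annotate(node[1], counter))

def nullable(n):
    k = n[0]
    if k == 'sym':  return False
    if k == 'star': return True
    if k == 'cat':  return nullable(n[1]) and nullable(n[2])
    return nullable(n[1]) or nullable(n[2])                           # alt

def first(n):
    k = n[0]
    if k == 'sym':  return {n[2]}
    if k == 'star': return first(n[1])
    if k == 'alt':  return first(n[1]) | first(n[2])
    return first(n[1]) | (first(n[2]) if nullable(n[1]) else set())   # cat

def last(n):
    k = n[0]
    if k == 'sym':  return {n[2]}
    if k == 'star': return last(n[1])
    if k == 'alt':  return last(n[1]) | last(n[2])
    return last(n[2]) | (last(n[1]) if nullable(n[2]) else set())     # cat

def follow(n, table):
    """Positions that may directly follow each position (the Glushkov transitions)."""
    k = n[0]
    if k == 'sym':
        return
    if k == 'star':
        follow(n[1], table)
        for p in last(n[1]):
            table.setdefault(p, set()).update(first(n[1]))
    elif k == 'cat':
        follow(n[1], table); follow(n[2], table)
        for p in last(n[1]):
            table.setdefault(p, set()).update(first(n[2]))
    else:                                                             # alt
        follow(n[1], table); follow(n[2], table)

def symbols(n, out):
    if n[0] == 'sym':
        out[n[2]] = n[1]
    elif n[0] == 'star':
        symbols(n[1], out)
    else:
        symbols(n[1], out); symbols(n[2], out)

def is_one_unambiguous(ast):
    node = annotate(ast, count())
    table = {}; follow(node, table)
    syms = {}; symbols(node, syms)
    def deterministic(positions):
        seen = set()
        for p in positions:
            if syms[p] in seen:        # two transitions on the same symbol
                return False
            seen.add(syms[p])
        return True
    return deterministic(first(node)) and all(deterministic(t) for t in table.values())

# (a|b)*,a  is not 1-unambiguous;  b*,a,(b*,a)*  is
bad  = ('cat', ('star', ('alt', ('sym', 'a'), ('sym', 'b'))), ('sym', 'a'))
good = ('cat', ('star', ('sym', 'b')),
        ('cat', ('sym', 'a'),
         ('star', ('cat', ('star', ('sym', 'b')), ('sym', 'a')))))
print(is_one_unambiguous(bad), is_one_unambiguous(good))              # False True
```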
Left-to-Right Scanning
| Regex | Example string | 1-unambiguous? |
|-------|----------------|----------------|
| (a\|b)*,a | a b b a a | no |
| b*,a,(b*,a)* | a b b a a | yes |
DTD Unambiguity Requirement
The requirement states (or can be formalized as):
*Every DTD regular expression has a deterministic Glushkov automaton*
Example of Violation
This is a violation of the DTD recommendation:
```xml
<!ELEMENT filming ((movie|director)*,(movie|director))>
```
Mixed Content
- #PCDATA can be mixed with tags in only a restricted form
- That is, not every regex is allowed
- Described by a repeatable OR group
- (#PCDATA | element₁ | ⋯ | elementₖ)*
- Rules:
- This is the only regular expression allowed
- #PCDATA must be first
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
Attribute Types
- CDATA: General text
- ID: Unique identifier
- At most one ID attribute per element
- No two elements can have the same identifying attribute values
- IDREF: ID value of an element in the document
- Can be any element (not typed)
- IDREFS: A list of IDREFs (separated by space)
- ENTITY: A declared entity (later)
- ENTITIES: A list of ENTITYs (separated by space)
- (value₁ | ⋯ | valueₖ): One of value₁, ⋯, valueₖ
Attributes
```xml
<ELEMENT height (#PCDATA)>
<!ATTLIST height unit CDATA "cm" accuracy CDATA #IMPLIED>
```
Attribute Behavior
- #REQUIRED: The attribute must occur
  - name CDATA #REQUIRED : <person name="Alma">...
- #IMPLIED: The attribute is optional
- #FIXED: Has a predefined value (in the DTD)
  - genus CDATA #FIXED "Panthera" : <lion genus="Panthera">...
- Default value: implied unless the attribute is given (with a different value)
  - unit CDATA "cm" : <length>... (the unit attribute defaults to "cm"; see the sketch below)
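As an aside (not part of the original slides), a DTD-aware parser fills in default attribute values. A minimal sketch with Python's lxml (assumed installed; the parser options shown are lxml-specific):

```python
# A minimal sketch: a DTD default attribute value supplied by the parser.
from lxml import etree

doc = b"""<?xml version="1.0"?>
<!DOCTYPE height [
<!ELEMENT height (#PCDATA)>
<!ATTLIST height unit CDATA "cm" accuracy CDATA #IMPLIED>
]>
<height>185</height>"""

parser = etree.XMLParser(load_dtd=True, attribute_defaults=True)
root = etree.fromstring(doc, parser)
print(root.get("unit"))      # "cm"  -- filled in from the DTD default
print(root.get("accuracy"))  # None  -- #IMPLIED and not given
```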
Example of Recursive XML
<!ELEMENT people (person*)>
<!ELEMENT person (name, dateOfBirth, person, person)>
Problem: not satisfiable by any finite XML document
<!ELEMENT people (person*)>
<!ELEMENT person (name, dateOfBirth, person?, person?)>
Problem: illegal (not 1-unambiguous)
Problem: if there is one parent, is it the mother or the father?
Problem: we need to replicate parents for siblings
Using References
<people>
<person id="lisa" mother="marge" father="homer">
<name>Lisa Simpson</name>
</person>
<person id="bart" mother="marge" father="homer">
<name>Bart Simpson</name>
</person>
<person id="marge" children="bart lisa">
<name>Marge Simpson</name>
</person>
<person id="homer" children="bart lisa">
<name>Homer Simpson</name>
</person>
</people>
XML Entities (Macros)
- Syntax: <!ENTITY name "value">
- Reference an entity by &name;
- Examples (illustrative entity names):
  - <!ENTITY first "Donald">
  - <!ENTITY last "Duck">
  - In XML: <name>Mr. &first; &last;</name>
  - <!ENTITY cont "Europe">
  - In XML: <country continent="&cont;">
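As an aside (not part of the original slides), parsers expand internal entities automatically. A minimal sketch with Python's lxml (assumed installed; the entity names are the illustrative ones used above):

```python
# A minimal sketch: internal entities are expanded by the parser.
from lxml import etree

doc = b"""<?xml version="1.0"?>
<!DOCTYPE name [
<!ENTITY first "Donald">
<!ENTITY last  "Duck">
]>
<name>Mr. &first; &last;</name>"""

root = etree.fromstring(doc)
print(root.text)   # Mr. Donald Duck
```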
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
Including External Files
<!DOCTYPE jokes [
<!ELEMENT jokes (joke*)>
<!ENTITY joke1 SYSTEM "http://j.com/joke1.txt">
<!ENTITY joke2 SYSTEM "http://j.com/joke2.txt">
<!ENTITY joke3 SYSTEM "http://j.com/joke3.txt">
]>
<jokes>
<joke>&joke1;</joke>
<joke>&joke2;</joke>
<joke>&joke3;</joke>
</jokes>
Even Better
```xml
<!DOCTYPE jokes [
<!ELEMENT jokes (joke*)>
<!ENTITY joke1 SYSTEM "http://j.com/joke1.txt">
<!ENTITY joke2 SYSTEM "http://j.com/joke2.txt">
<!ENTITY joke3 SYSTEM "http://j.com/joke3.txt">
]>
<jokes>
<joke><![CDATA[&joke1;]]></joke>
<joke><![CDATA[&joke2;]]></joke>
<joke><![CDATA[&joke3;]]></joke>
</jokes>
```
Why CDATA?
Valid Documents
A well-formed XML document is valid if it conforms to its DTD:
- The sequence of names of the children of each element \( e \) matches the regex of \( \text{name}(e) \)
- The root element is as declared
- The types and values of attributes are correct
- IDs are unique
- IDREF attributes point to identifier values
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
DTDs vs. Schemas
- DTDs are rather weak specifications by DB & PL standards
- Only one base type – PCDATA
- No numbers, Booleans, dates, etc.
- IDREFs are untyped
- That is, the type of the object referenced is not known
- No constraints beyond parent/child
- For example, child is inverse of parent
- No inheritance
- Context-independent element definitions
- For example, \(<\text{role}>\) in a \(<\text{movie}>\) or a \(<\text{play}>\)?
- A much richer notion of a schema is XML Schema, which we do not study here
The XPath Language
- XPath expressions are used for referencing elements (nodes) of an XML document
- Used as a QL, and embedded in more expressive QLs like XQuery and XSLT
– We will see examples at the end
- The syntax resembles that of the Unix file system
XPath Expressions
- An XPath expression (or just XPath for short) matches paths in the XML tree
- An absolute path begins at the root of the document
– Starts with / or //
– For example, /countries/country, //city
- A relative path begins with a context node that is defined by the application that uses the XPath
– For example, city/name, or ./name
The XML DOM Tree
DOM = Document Object Model
The root is implicit
(Does not appear in the text of the XML document)
Applying XPath to XML
• Applying an XPath expression \( e \) to a context node \( v \) results in the list of all nodes \( u \), such that \( e \) matches the path from \( v \) to \( u \)
• Applying an XPath expression \( e \) to a document \( d \) means applying \( e \) to \( \text{root}(d) \)
• The order in the list is the one induced by the preorder of the nodes in the DOM tree
XPath Steps and Axis
• An XPath describes a sequence of steps that together characterize a path
• A step is defined by an axis that specifies a binary relationship between nodes
– The axis describes how to get from the current node to the next one
– For example, parent-child, child-parent, ancestor-descendant, etc.
• Consecutive steps are separated by /
XPath Evaluation
• Applying axis₁/axis₂/…/axisₖ to a context node v:
  – U := { u | axis₁(v, u) }
  – If k = 1 then Result := U
  – Else: Result := ∅; for each u ∈ U, recursively apply axis₂/…/axisₖ with u as the context node and insert all resulting nodes into Result
  – Return Result
• If the XPath begins with “/” then the context node is the root
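As an aside (not part of the original slides), the evaluation scheme above can be sketched directly in code; the snippet below is limited to child-axis steps over Python ElementTree nodes:

```python
# A small illustration (not full XPath): the recursive evaluation scheme,
# restricted to child-axis steps.
from xml.etree import ElementTree

def apply_steps(context, steps):
    """Apply step1/step2/.../stepk, where each step is a child tag name or '*'."""
    head, *rest = steps
    matches = [child for child in context if head in ("*", child.tag)]
    if not rest:
        return matches
    result = []
    for node in matches:            # recurse with each matched node as the context
        result.extend(apply_steps(node, rest))
    return result

doc = ElementTree.fromstring(
    "<countries><country><name>Israel</name>"
    "<city><name>Jerusalem</name></city></country></countries>")

# relative path country/city/name with the <countries> element as the context node
for node in apply_steps(doc, ["country", "city", "name"]):
    print(node.text)                # Jerusalem
```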
Child Axis
• A child axis step has the simple form tagName
  – Go to an element child with the tag tagName
• For example,
  – /tagName matches the tagName child of the root
  – city/name
  – /countries/country/city
• The child name * matches every tag
  – For example: */city, */name
Outline
• Introduction
• XML Syntax
• DTD
• Element Declaration
• Attribute Declaration
• Entities
• Validity
• XPath
▶ Axes
▶ Predicates
▶ Examples of XPath Uses
• Namespaces
Child-Axis Examples
/countries
**Descendant Examples**
```
//countries/country/city
```
- [Figure: DOM tree of the countries document, with the nodes matched by the expression highlighted]
**Child-Axis Examples**
```
/*//country/*
```
- An attribute is not an element!
**Self and Descendant-or-Self**
- The **self** axis "." denotes the identity relationship
- That is, the step "remain in the current node"
- /countries/country/. is /countries/country
- country/./city = country/city
- The **descendant-or-self** axis means: either stay in the current node or go to some descendant of the current node
- descendant-or-self::node()
- Text is a node, an attribute is not!
- // is a shorthand for /descendant-or-self::node()/
- For example, country//name
**Descendant Examples**
```
//countries//name
```
- [Figure: DOM tree of the countries document, with the matched name nodes highlighted]
**Child-Axis Examples**
```
city/name
```
- [Figure: DOM tree of the countries document, with the matched name nodes highlighted]
### Other Axis Types
- The `parent` axis `..` denotes the parent relationship
- “Go to the parent of the current node”
- For example, `//name/..//population`
- XPath has more axis types (denoted by a different syntax from the ones shown earlier): examples:
- descendant
- ancestor
- ancestor-or-self
- following-sibling
- preceding-sibling
### Referring Attributes
- The `attribute` axis is written as `@attName`
- That is, “go to the attribute `attName` of the current node”
- The operator `@*` matches every attribute
### Attribute Examples
- `//country/@continent`
```xml
<country continent="Asia">
  <name>Israel</name>
  <population year="2001">6199008</population>
  <city capital="yes">
    <name>Jerusalem</name>
  </city>
</country>
```
- `@continent` (a relative path, with a country element as the context node)
XPath Predicates
- Predicates are used for filtering steps out
- For example, //city[@capital="yes"] will match only capital cities
- Formally, given a predicate [P]:
- P is evaluated over the target node, yielding true/false
- The step is taken if the value is true
- The node reached in the last step is the context node
- XPath has a rich logic for predicates; we demonstrate only the common ones
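As an aside (not part of the original slides), the predicate examples can be tried with an XPath engine such as the one in Python's lxml (assumed installed), using the countries document from earlier:

```python
# A minimal sketch: evaluating predicate XPaths over the countries document.
from lxml import etree

doc = etree.fromstring("""
<countries>
  <country continent="Asia"><name>Israel</name>
    <population year="2001">6199008</population>
    <city capital="yes"><name>Jerusalem</name></city>
    <city><name>Ashdod</name></city>
  </country>
  <country continent="Europe"><name>France</name>
    <population year="2004">60424213</population>
  </country>
</countries>""")

print(doc.xpath('//city[@capital="yes"]/name/text()'))            # ['Jerusalem']
print(doc.xpath('//country[population > 10000000]/name/text()'))  # ['France']
print(doc.xpath('count(//city)'))                                 # 2.0
```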
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
More XPath Examples
- //@*
- //country[@population>10000000]
- //population[../city/name="Jerusalem"]
- //country[.]//city
Functions
- Inside XPath predicates, you can use predefined functions.
- Examples:
- last() – returns the number of nodes obtained from the last axis step.
- position() – returns the position of the node in the list of nodes from the last axis step.
- name() – returns the tag of the current node.
- count(XPath) – returns the number of nodes satisfying XPath.
Final Remarks on XPath
- We presented the abbreviated (sugared) syntax of XPath
- For example, `country/@name` is an abbreviation of `child::country/attribute::name`
- More details on XPath:
- XPath tutorial in W3Schools
- XPath W3C Recommendation
XPath in XQuery
```xml
<catalog>
<cd country='UK'>
<artist>David Bowie</artist>
<title>Space Oddity</title>
<price>9.90</price>
</cd>
<cd country='UK'>
<artist>Aretha Franklin</artist>
<title>Lady Soul</title>
<price>11.90</price>
</cd>
</catalog>
```
FLWOR expressions:
- For
- Let
- Where
- Order by
- Return
XPath in XSLT Example
```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="html"/>
<xsl:template match="/catalog">
<table>
<tr><th>Title</th><th>Price</th><th>Country</th></tr>
<xsl:for-each select="cd">
<tr>
<td><xsl:value-of select="title"/></td>
<td><xsl:value-of select="price"/></td>
<td><xsl:value-of select="@country"/></td>
</tr>
</xsl:for-each>
</table>
</xsl:template>
</xsl:stylesheet>
```
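As an aside (not part of the original slides), the stylesheet can be applied programmatically; a minimal sketch with Python's lxml (assumed installed):

```python
# A minimal sketch: applying the XSLT stylesheet above to the catalog document.
from lxml import etree

catalog = etree.fromstring(
    "<catalog>"
    "<cd country='UK'><artist>David Bowie</artist>"
    "<title>Space Oddity</title><price>9.90</price></cd>"
    "<cd country='UK'><artist>Aretha Franklin</artist>"
    "<title>Lady Soul</title><price>11.90</price></cd>"
    "</catalog>")

stylesheet = etree.fromstring("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/catalog">
    <table>
      <tr><th>Title</th><th>Price</th><th>Country</th></tr>
      <xsl:for-each select="cd">
        <tr><td><xsl:value-of select="title"/></td>
            <td><xsl:value-of select="price"/></td>
            <td><xsl:value-of select="@country"/></td></tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(catalog)))   # an HTML table with one row per <cd>
```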
Outline
- Introduction
- XML Syntax
- DTD
- Element Declaration
- Attribute Declaration
- Entities
- Validity
- XPath
- Axes
- Predicates
- Examples of XPath Uses
- Namespaces
XML Namespaces
- A mechanism for creating intuitive unique names (for elements and attributes)
- Those can be used all over the Web, cf. RDF
- Semantically, a namespace is a collection of names that were created for a specific domain of applications
- We will see namespaces in action when we learn RDF
Terminology
<table>
<thead>
<tr>
<th>Term</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>prefix</td>
<td>h</td>
</tr>
<tr>
<td>local name</td>
<td>table</td>
</tr>
<tr>
<td>qualified name</td>
<td>h:table</td>
</tr>
<tr>
<td>namespace URI</td>
<td>http://www.w3.org/TR/html4/</td>
</tr>
<tr>
<td>expanded name</td>
<td>http://www.w3.org/TR/html4/table</td>
</tr>
</tbody>
</table>
Scope of Namespaces
- The scope of a namespace declaration is the element containing the declaration and all descendant elements
- More than one namespace can be declared in the same scope
- At most one can be the default namespace
- All others must have unique prefixes
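As an aside (not part of the original slides), namespace-aware XPath queries require binding prefixes explicitly; a minimal sketch with Python's lxml (assumed installed), reusing the URI from the terminology table:

```python
# A minimal sketch: declaring a namespace prefix and querying qualified names.
from lxml import etree

doc = etree.fromstring(
    '<h:html xmlns:h="http://www.w3.org/TR/html4/">'
    '<h:table><h:td>Apples</h:td></h:table></h:html>')

ns = {"h": "http://www.w3.org/TR/html4/"}
print(doc.xpath("//h:td/text()", namespaces=ns))   # ['Apples']
print(etree.QName(doc[0]).namespace)               # the expanded namespace URI
```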
Air Force Information Workflow Automation through Synchronized Air Power Management (SAPM)
Carl Benkley
cbenkley@mitre.org
The MITRE Corporation
202 Burlington Road
Bedford, MA 01730-1420
Irene Chang
inchang@mitre.org
The MITRE Corporation
John Crowley
jdcrowley@mitre.org
The MITRE Corporation
Lt. Thomas Oristian, USAF
Thomas.Oristian@hanscom.af.mil
Electronic Systems Center (ESC/ACU)
50 Griffiss Street
Hanscom AFB, MA 01731-1625
Abstract
Utilizing emerging information technologies, the Synchronized Air Power Management (SAPM) initiative presents an automated business process to integrate the Air Force (AF) Command and Control (C2) systems at the Wing level. The SAPM Phase I proof-of-concept effort has demonstrated a significant reduction in the time to plan unit taskings, evaluate missions, and execute decisions. SAPM is a joint venture among ESC/AC, AFC2ISRC/DO, MITRE, Microsoft, Lockheed Martin, Northrop Grumman, and DSRC. Implementing Extensible Markup Language (XML) messages, web services, and workflow automation, SAPM expands existing web-based capabilities, enables machine-to-machine interfaces, and streamlines the war fighter kill chain process. SAPM Phase I was successfully demonstrated to senior AF officers and representatives of DoD. Phase II is being developed at the MITRE facility in Bedford, Massachusetts.
Background
The AF war fighter kill chain is an operational process that cuts across most of the existing Battle Management Command, Control, and Communications (BMC3) systems. The traditional focus on building and acquiring individual systems has made it difficult to fully capture and implement this process as an end-to-end information workflow. The result has been stove-piped systems where data has to be manually entered or reentered, where status of the ongoing process is not easily visible, and where reports such as Wing commanders’ briefings and mission reports have to be constructed off-line. To address those issues, the AF ESC has been advocating C2 enterprise integration. The goal is to integrate C2 systems, components, and services into the broader view of the operational C2 entities using appropriate architectural frameworks and business processes to ensure affordability, military utility, efficiency, and timeliness. With the same objective in mind, the SAPM initiative applies information workflow automation as a modern technology path toward C2 enterprise integration.
SAPM Concept
In January 2003 the SAPM initiative began as a collaborative prototype effort of ESC/AC, AFC2ISRC/DO, MITRE, Microsoft, Lockheed Martin, Northrop Grumman, and DSRC. Exploiting XML, web service, and workflow automation technologies as well as the Universal Description, Discovery and Integration (UDDI) registry, the SAPM Phase I proof-of-concept was completed in May 2003. It has demonstrated a drastic reduction in the time it takes to plan, evaluate, and execute decisions, as well as a decrease in associated manpower needs in a lab environment.
1 Other SAPM Phase I development team members included D. Hebert, D. Konstantopoulos, T. McDevitt, J. Sexton (MITRE); E. Rosenkranz, D. Stampfli (Microsoft); S. Allen (AFC2ISRC); B. Reed (Northrop Grumman), B. Donohue (LMMS), R. Guerrero (DSRC); S. Taylor, R. Raymond (DRC); D. Mirra (Quantec)
The concept was to build web services to automate the information flows and improve interoperability between TBMCS Force Level (FL), TBMCS Unit Level (UL), Joint Mission Planning System (JMPS), and Joint Weather Impacts System (JWIS). SAPM Phase I implemented workflow management capabilities within and among the above mentioned C2 systems to provide commanders status and control from the generation of the air battle plan all the way through the creation of detailed mission planning routes. These systems are either J2EE or .NET enabled running within UNIX and Windows environments. The SAPM architecture is illustrated in Figure 1.
All SAPM web services are Simple Object Access Protocol (SOAP) based, and the information will flow as XML files. SAPM web services are defined by the World Wide Web Consortium (W3C) Web Services Definition Language (WSDL). For example, a UL scheduling web service provides scheduling information such as tail number, pilot name, and Standard Configuration Load (SCL). A mission planning route web service provides detailed route information in Common Route Definition (CRD) format. The vision is that all of these web services will be consumed by the AF C2 systems to automate existing interfaces that are either “hand-jammed” or partially automated.
Another capability demonstrated by SAPM is the ability to manage the workflow across multiple systems. An example of workflow control would be to provide automated control of the generation and delivery of the Air Operations Database (AODB) Oracle Database Exchange (ORDBEX) file from TBMCS Force Level to TBMCS Unit Level. This would allow a commander to obtain status as to the availability of the updated AODB on both the FL and UL sides as well as providing more automation of this complex transfer of information. Detailed implementation of SAPM web services and workflow automation is described in the sections below.
**Air Force C2 Programs Overview**
TBMCS FL is used by the Air Operations Center (AOC) to plan and execute theater-level air campaigns in support of joint operations. TBMCS supports all phases of the C2 cycle, provides the strategic planning through target analysis, defensive planning, airspace planning, and task analysis. The output of this planning phase provides detailed air battle plans constructed to provide tasking for the air battle forces. Once the tasking has been accomplished, the unit level systems begin the scheduling and mission preparation activities. After flight scheduling and mission planning are completed, detailed flight missions will be executed. Finally the flight missions will be assessed and the report will be generated and fed back to the AOC.
The TBMCS UL scheduler application assigns crew members, tail numbers, and SCLs to mission sorties in support of tasking and publishes daily flying schedules. During peacetime, Wing commanders use the Unit Level system as a resource management tool. During contingencies, TBMCS Unit Level is used to support Wing level operations, planning, logistics, and intelligence activities.
JMPS provides the mission planner with unit level mission planning support for every phase of a mission, ranging from the preflight planning, departure, attack/cargo delivery, deconfliction, recovery, and post-mission debrief. JMPS information management tools allow the user to access data and develop a flight plan for the combat mission. JMPS supports collaborative planning of all mission elements including attack, bomber, cruise missiles, fighter, assault, airborne early warning, command, control and communications, etc. JMPS can transmit, accept, and process large amounts of near real-time data on a frequent basis. It has the capacity to rapidly process these data to create a situation updated threat (or weather, terrain, routes etc.) picture as well as to display new information to the planner when requested.
**Workflow Automation Overview**
The goals of workflow automation are to provide flexibility, visibility, auditability and extensibility to process management. Workflow automation has often been solely viewed as an information technology solution. To be a success, it needs to involve all segments of an organization and requires a thorough understanding of their roles. In addition to the benefits identified above, a successful workflow automation project helps organizations achieve a clearer understanding of the relationships that exist between different systems and frequently leads them to discover more robust processes and solutions.
Developers and implementers of workflow automation systems need to perform a critical analysis of how the solution will add organizational value and determine the potential return on investment prior to investing the time and resources into the process analysis and system integration. Often the core areas that provide the greatest value to the organization have heuristic traits that cannot be fully captured by an automated process, for example, mensurating a target. These processes can still greatly benefit from orchestrating and tracking the affiliated tasks.
Workflow automation is not designed to completely alter the way a business operates, rather it is designed for management and process participants to assert control over their existing processes, and to provide the flexibility to modify these processes when necessary to reflect the dynamic environment where they are deployed.
Solutions need to emphasize the fundamentals of defining and managing process information. As integration standards, tool sets, and protocols continue to evolve, successful solutions will be defined by focusing on the process and management layers.
**Leveraging Current Infrastructure and Investments**
One of the mantras within the workflow automation space is the introduction of automated workflow solutions that leverage an organization’s existing infrastructure. The theory is that workflow automation permits organizations to realize greater value from their existing applications by abstracting processes to a higher level, allowing them to be connected and orchestrated by a workflow automation solution.
The argument assumes that there is a driving need to integrate the systems in question and that no alternative steps have been taken to optimize the processes and movement of information between the systems. Due to operational and competitive pressures, most organizations have optimized and continue to optimize their systems and processes either manually or through traditional system integration tasks.
It is important that workflow automation processes do not simply repeat these optimizations but rather look to increase the efficiency, flexibility and reliability of the processes.
**Improving Process Management and Control**
A promise of workflow automation is the ability to achieve greater process control and end-to-end process visibility.
A workflow automation solution is not simply a collection of integrated systems, rather a choreographed network of systems and capabilities that can quickly adapt to a changing environment and incorporate new opportunities by managing information in a process-centric/Service Oriented Architectures (SOAs) manner. SOAs are an enterprise architectural style designed to achieve loose coupling among interacting systems. Each system is designed to fulfill specific tasks for a requester. By loose coupling the interface between systems an internal change in a service will have no effect on the requesters of that service.
Workflow automation solutions demonstrate real value when an organization needs to adapt or change in order to minimize the effects of the change or maximize the opportunity that it may uncover. Process optimization requires gathering metrics on the processes under management. These metrics may include the time to complete each task,
users associated with tasks, network bandwidth, and process resolution. As such, the workflow management system needs to be integrated into the overall information architecture of the organization.
The gathering of process intelligence and the process performance metrics discussed above should be built into the workflow management solution. These metrics can be analyzed and mined for valuable information pertaining to the managed processes and to discover previously unrecognized operational and data patterns.
With the successful adoption of workflow automation tools, organizations will be able to respond more rapidly and with greater confidence to change and to optimize their existing processes, with the performance and process metrics providing management with a clearer understanding of the cause-and-effect relationships that exist between process elements.
**Technical Issues of Workflow Automation**
Workflow Automation is being driven by the formation and rapid acceptance of key industry standards maintained by the W3C, IEEE, and OASIS. It relies on the capability of the individual components, systems and organizational units to communicate in an economical way with suitable performance. Typical integration methods include SOAP, an XML-based protocol requiring little if any new organizational infrastructure or network configuration.
Currently evolving standards such as the Business Process Modeling Language (BPML) and the Business Process Execution Language (BPEL) will ensure that workflow automation solutions are more focused on process management and organizational needs and less on the difficulties of system integration.
**Workflow Design Strategy**
Comparing similar organizations using a collection of common systems, it is the organizational processes and the application of organizational intelligence that permits the discrimination between groups.
Though workflow automation solutions help organizations manage their workload and processes, it does not, per se, introduce excellence or a competitive advantage. If correctly designed and implemented, workflow automation can, however, provide an infrastructure that can be leveraged for a competitive advantage.
A bottom-up approach to workflow automation begins with analyzing an organization’s current systems and data models. Designing automation systems from the bottom-up approach tends to result in a typical systems integration process driven by the current processes and systems. Often the approach results in an inflexible solution that is expensive to modify. Preferably a top-down approach to designing workflow
management systems focuses on analyzing the organizational processes and the dynamics governing these processes.
Many common business processes exist between the different AF operational units. These units will broadly tend to use similar applications and systems to support their operational requirements, though some systems may be highly specialized for the unit’s particular mission. The monitoring and reporting requirements are also quite similar. The focus of the effort involved in creating workflow automation solutions within this environment should be in providing the correct set of workflow tools and techniques that will encapsulate the common elements while permitting the unique aspects to flourish.
The further the operational organization can move away from managing interfaces and technical micro-detail, the better. SAPM utilizes Microsoft’s BizTalk server as an orchestration engine. The BizTalk solution provides a graphical process modeler which allows an analyst to develop a business process by using intuitive graphical elements to represent different services and consumers of information within the process. A snapshot of the SAPM BizTalk process flowchart is shown in Figure 2.

Figure 2. SAPM BizTalk Process Flowchart
If we are to successfully manage processes from a model level, we have to have the necessary technical plumbing in place to fill the void between the process model and low level technologies.
The Standard Workflow Reference Model
The standard workflow reference model (Figure 3) has been developed from analyzing generic workflow application structures and by identifying the interfaces required within this structure that enable different products to interoperate. All workflow automation systems contain a number of generic components that interact in a defined set of ways; different commercial products typically exhibit different levels of capability within each of the identified components. To achieve interoperability between workflow products a standardized set of interfaces, such as web services or CORBA, and data interchange formats between such components is necessary.
Figure 3. Standard Workflow Reference Model
Interface 1: Process Definition Tools Interface
The process definition tools interface defines a standard interface between process definition modeling tools and the workflow engine(s). The process definition and modeling tools should be capable of publishing the workflow process to the workflow engine in a standard format such as BPEL. The customary users of the process modeling tools are the process engineers. Many commercially available workflow engines, such as Microsoft’s BizTalk, bundle the process modeling tool directly into their product offering.
Interface 2: Workflow Client Application Interface
The workflow client application interface defines an Application Program Interface (API) for client applications to request services from the workflow engine to control the progression of processes, activities and work-items. Often the client application will communicate automatically with the workflow engine, for example a client application may automatically notify the workflow engine that a user has performed a specific task such as updating a database. In other scenarios a user may need to directly notify the workflow engine that a task has been completed.
Interface 3: Invoked Application Interface
The invoked application interface defines a collection of APIs that allow the workflow engine to invoke a variety of applications, through common agent software. Common standard interfaces models include web-services and CORBA.
Interface 4: Administration & Monitoring Tools Interface
The administration and monitoring interface defines how different applications can integrate with the workflow engine to provide monitoring and control functionality.
Interface 5: Workflow Interoperability Interface
The workflow interoperability interface defines an interoperability model to support the interconnection of multiple workflow systems. The Workflow Management Coalition, an organization composed of industry experts and vendors, is promoting a common standard, wf-XML, as a solution for providing this interoperability.
Workflow Design Patterns
Workflow processes can be decomposed into an orchestrated collection of common design patterns. The workflow design patterns represent those common process elements such as conditional execution branches and cycles that can be found in all processes. They do not define the underlying data model required to support an instantiation of the pattern.
The process engineer uses the selected process definition tool to interweave different design patterns to create a complete process. Abstractly, the process definition tool does not need to be aware of the data model associated with the workflow process to design the process; however in practice the data model must be known to actually orchestrate the process.
(1) Basic Control Patterns
- Sequence: execute activities in sequence.
- Parallel Split: execute activities in parallel.
- Synchronization: synchronize two parallel threads of execution.
- Exclusive Choice: choose one execution path from many alternatives.
- Simple Merge: merge two alternative execution paths.
(2) Advanced Branching and Synchronization Patterns
- Multiple Choice: choose several execution paths from many alternatives.
- Synchronization Merge: merge many execution paths. Synchronize if many paths are taken. Simple merge is used if only one execution path is taken.
- Multiple Merge: merge many execution paths without synchronizing
- Discriminator: merge many execution paths without synchronizing. Execute the subsequent activity only once.
(3) Structural Patterns
- Arbitrary Cycles: execute workflow graph without any structural restriction on loops.
- Implicit Termination: terminate if there is nothing to be done.
(4) Patterns Involving Multiple Instances (MI)
- MI without synchronization: generate many instances of one activity without synchronizing them afterwards.
- MI with a priori known design time knowledge: generate many instances of one activity when the number of instances is known at the design time (with synchronization).
- MI with a priori known runtime knowledge: generate many instances of one activity when a number of instances can be determined at some point during the runtime (as in FOR loop but in parallel)
- MI with a priori no known runtime knowledge: generate many instances of one activity when a number of instances cannot be determined (as in WHILE loop but in parallel).
(5) State-Based Patterns
- Deferred Choice: execute one of two alternative threads. The choice of which thread is to be executed should be implicit.
- Interleaved Parallel Routing: execute two activities in random order, but not in parallel.
- Milestone: enable an activity until a milestone is reached.
(6) Cancellation Patterns
- Cancel Activity: cancel (disable) an enabled activity.
- Cancel Case: cancel (disable) the process.
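To make a few of these patterns concrete, the sketch below (an illustration only, not SAPM's BizTalk implementation; it uses Python's asyncio, and the activity names and durations are invented) shows Sequence, Parallel Split, Synchronization, and Exclusive Choice:

```python
# A minimal sketch of four basic control patterns using asyncio.
import asyncio

async def activity(name: str, seconds: float) -> str:
    # Stand-in for a real workflow activity (e.g., calling a web service).
    await asyncio.sleep(seconds)
    print(f"finished {name}")
    return name

async def workflow() -> None:
    # Sequence: execute activities one after another.
    await activity("receive tasking order", 0.1)
    await activity("create mission folders", 0.1)

    # Parallel Split: start two activities concurrently.
    scheduling = asyncio.create_task(activity("perform scheduling", 0.2))
    weather    = asyncio.create_task(activity("fetch weather impacts", 0.1))

    # Synchronization: wait until both parallel threads have completed.
    await asyncio.gather(scheduling, weather)

    # Exclusive Choice: pick one execution path based on a condition.
    route_ready = True
    if route_ready:
        await activity("post CRD to ODS", 0.1)
    else:
        await activity("notify duty officer", 0.1)

asyncio.run(workflow())
```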
SAPM Workflow Implementation
The SAPM solution was implemented according to the standard workflow reference model described in previous paragraphs. The SAPM workflow implementation model is illustrated in Figure 4. Microsoft’s BizTalk server tool was selected as the workflow
engine to orchestrate the process. A simplified view of the SAPM workflow process model is depicted in Figure 5.
Figure 4. SAPM Workflow Implementation Model
1. Tasking order notification
2. Get new tasking order.
3. Begin tasking Order process
4. Create Mission Folders
5. Notify Duty Officer
6. Notify Scheduler operator
7. Perform scheduling
8. Create Mission Folder in ODS
9. Get Schedule
10. Build Mission Data Sheet and store in ODS
11. Update CRD
12. Post CRD to the ODS
Figure 5. Primary SAPM Process Model - Simplified View
A principal element of a successful workflow project is the capability to adapt dynamic changes. Changes causing process modification can be additive. They could be caused by the inclusion of new systems or precipitated by a network or system failure. The SAPM workflow application monitors the status of the network and individual systems to insure that it is capable of completing its tasks. If a service is unavailable through difficulties with the actual service or the network, then alternate paths of execution will be executed.
The SAPM presentation framework (Figure 6) gives commanders and operational personnel a global view into their mission planning environment, clearly representing the status of the different missions being planned and the status of the systems involved.
The SAPM workflow application required the introduction of a service, the Operational Data Store (ODS), to persist information common to many of the systems associated with the mission planning process. The ODS, designed to operate on a Microsoft Share Point server, stores the route information in an XML format. Other systems of record are capable of retrieving the CRD via a web service exposed by the CRD.
The unbounded system coupling offered by web services, and the flexible design tools and framework provided in the BizTalk environment, resulted in the implementation of the alternate process execution models being primarily a process modeling task and less of an engineering system integration task.
**Web Services Overview**
An important feature of web services is that the invoking application need not know anything about the language the remote application is constructed in nor the platform it is deployed on. Another desirable feature is that web services run over standard TCP/IP networks requiring no infrastructure changes. To support the existing network infrastructure and the net-centric capability of the existing systems, BizTalk uses web services to invoke remote services. The web service programming model allows one to construct highly scalable, distributed applications using XML based messaging to exchange data between different systems in a possibly heterogeneous environment.
Web services do have drawbacks. The verbose nature of XML and SOAP messaging imposes a network overhead that other methods such as CORBA or Remote Method Invocation (RMI) do not. As such, for network constrained environments a careful analysis should be performed on the effects that web services may have on the network. Security is an additional consideration.
The web service infrastructure consists of several components that enable applications to discover and consume services as illustrated in Figure 7. These components are described in the following paragraphs.

*Figure 7. Web Service Invocation Infrastructure*
**Web Service Directories**
Web service directories provide a central location to publish information about web services. The UDDI specification defines the web service publishing guidelines. UDDI defines three types of information associated with a web service: business data, descriptive service information, and detailed service specifications.
**Web Service Discovery**
The web service discovery process involves locating the documents that detail the necessary specifications required to call a web service. The standard format for the specifications is the Web Service Description Language (WSDL).
**Web Service Description**
An individual web service may expose multiple operations. The web service description component provides the descriptions required to allow a user to determine what operation to call, the required parameters, and how to resolve the service. The web service descriptions are part of the WSDL file. Typically the WSDL associated with a web service is initially resolved during an application’s design phase to ensure that the application’s data model has the prerequisite elements to successfully call the web service. The actual service point, generally a URL, that is used to call the web service is typically resolved at runtime by querying a network naming service, the UDDI server. Once the service has been resolved, it is typically cached by the invoking machine.
Web services may also be discovered, bound and invoked at runtime. But similar to other protocols there is a performance penalty associated with such “late-binding” techniques. The SAPM initiative utilized design-time binding and run-time discovery to gain the performance associated with early binding together with the flexibility of relocating services offered by late discovery and caching.
**SOAP Package**
Web services use protocols that can be understood by any system that is capable of supporting common Internet formats such as HTTP and HTTPS. The HTTP(S) GET and POST operations are the common methods for invoking web service operations. The operation calls are embedded in a SOAP package. The SOAP protocol allows the structured transfer of typed information between clients and servers over the inter/intranet. If a web service is being invoked by an HTTP GET or POST operation, the SOAP package is the body of the HTTP operation.
A SOAP package consists of four parts:
- **SOAP Envelope**: The mandatory SOAP package wrapper.
- **SOAP Header**: A section that defines encoding, capabilities and forwarding rules.
- **SOAP Body**: The SOAP body contains the necessary information and parameter to call a web service.
- **SOAP Fault**: The SOAP fault details error handling mechanisms.
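As an illustration (not an actual SAPM service definition), a SOAP operation can be invoked by posting an envelope over HTTP; the sketch below uses Python's requests library (assumed installed), and the endpoint, namespace, and GetSchedule operation are hypothetical placeholders:

```python
# A minimal sketch: invoking a SOAP-based web service over HTTP POST.
import requests

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetSchedule xmlns="http://example.mil/sapm/scheduling">
      <missionId>AB1234</missionId>
    </GetSchedule>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://example.mil/sapm/scheduling.asmx",   # hypothetical service endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.mil/sapm/scheduling/GetSchedule"},
    timeout=30,
)
print(response.status_code)
print(response.text)   # the SOAP response body (an XML document)
```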
The UDDI Server
The UDDI server is an integral part of the Windows Server 2003. The registry allows an organization to publish information about itself and the services it provides. The services are organized into a defined topology, or hierarchy of service and binding information. The UDDI services within the SAPM UDDI server have been organized according to the following UDDI.org specified taxonomy as shown in the Table below.
<table>
<thead>
<tr>
<th>UDDI.org Specification Term</th>
<th>SAPM representation</th>
</tr>
</thead>
<tbody>
<tr>
<td>businessEntity</td>
<td>Name of the local wing (i.e. 99 WG COMPOSITE)</td>
</tr>
<tr>
<td>BusinessService</td>
<td>Name of the SAPM Systems (i.e. NAWS)</td>
</tr>
<tr>
<td>bindingTemplate</td>
<td>Service endpoint (i.e. <a href="http://naws/naws.asmx">http://naws/naws.asmx</a>)</td>
</tr>
<tr>
<td>tModel</td>
<td>A reference to the WSDL document for the service</td>
</tr>
</tbody>
</table>
Individual services involved in SAPM expose interfaces (web services) in unique format that may require translation before being consumed. BizTalk provides the ability to graphically transform data formats and map data elements between different models. This capability supports the concept of escalating the task of developing workflow automation solutions from an information technology task to an operational task (see Figure 8).
Figure 8. BizTalk Graphical Data Transformation and Mapping Utility
Web Service Security
Early workflow projects with web service interfaces often overlooked security due to a lack of web service security standards. This risk is being mitigated by the adoption of the web services security model (WS-Security) by the Organization for the Advancement of Structured Information Standards (OASIS), a global consortium driving the adoption of standards, and the inclusion of WS-Security into most commercial workflow engines.
The WS-Security specification describes enhancements to the SOAP based web service applications. WS-Security provides a general purpose mechanism for associating generic security tokens with a SOAP message. The specification covers three primary areas: token propagation, message integrity, and message confidentiality. The specification does not provide a comprehensive security solution, but is intended to be used with application specific protocols and encryption techniques.
Similarly, the Security Assertion Markup Language (SAML) developed by the OASIS defines a framework for exchanging security information between entities. SAML defines a common XML framework for exchanging security assertions. SAML is different from other security systems due to its approach of expressing assertions between an asserting party and a relying party about a subject that other applications within a network can trust.
Neither SAML nor WS-Security provides a solution for managing information across security domains. Ongoing research and product development into XML guard technology that can address this issue is currently being conducted by leading vendors and government labs.
Results and Lessons Learned
SAPM Phase I was completed in 90 days of intensive effort. Along with the SAPM workflow operational process models, over 30 web services were developed. SAPM was successfully demonstrated to ESC Commander, Lieutenant General William Looney, USAF in May 2003 and to many other flag officers. SAPM was showcased at the 2003 AF C4ISR Summit in Danvers, Massachusetts, the 2003 Microsoft DoD Air Force Symposium in Redmond, Washington, and the 2004 AF Mission Planning Users Conference (MPUC) in Las Vegas, Nevada. Figure 9 depicts the amount of operational streamlining that SAPM has demonstrated.
Figure 9. SAPM Process Streamlining
Some of the lessons learned are described as follows.
- SAPM has proven that enabling machine-to-machine communications using web services and workflow automation helps break down the barriers to rapid information exchange among AF C2 systems. Those technologies could drive us closer toward the vision of General John Jumper, USAF Chief of Staff, of enterprise interoperability and the implementation of DoD network-centric warfare.
- Phase I was a successful collaborative effort of the government and industry. It was accomplished through strong support from program offices and significant contributions from vendors and contractors. The scope for SAPM Phase II and beyond would be much bigger and will require continuous funding and commitment from participating programs.
- SAPM has demonstrated that loosely coupling via web services facilitates distributed development. Therefore, enterprise integration could be accomplished in heterogeneous environments (e.g., UNIX and .NET) as long as those web services are WSDL compliant and developed according to the tenets of the AF ESC Command and Control Enterprise Reference Architecture (C2ERA).
- Due to resource and time constraints, the Phase I development team was unable to fully define the SAPM architecture and requirements before development started. In fact, most of the requirements were evolving during development. It was manageable during the Phase I proof-of-concept, but it was not considered as a sufficiently rigorous engineering process. The prototype architecture and requirements should be defined prior to prototyping development in order to better address individual program’s needs and expectations.
- The implementation of web services and workflow automation enabled SAPM to transmit mission tasking data and battle plan information in real time. Therefore, with SAPM, Wing commanders’ briefings, mission checklists, and mission reports could be automatically generated within minutes of receiving the tasking order. This not only could minimize the presence of slow and error-prone data entry, but also could provide Wing commanders early visibility to the status of missions, logistics, and weapons allocations.
**SAPM Phase II Enhancements**
Some enhancements and new capabilities will be implemented in SAPM Phase II. For example, Phase II will allow simplified administration of UDDI. All web services will be maintained in the SAPM UDDI. Thus, web service consumers will retrieve web service information directly from this UDDI. Additionally UDDI settings will accommodate secondary providers in case of a primary failover. The SAPM ODS will be enhanced to a scalable solution in order to store metadata in a database. It will leverage the document storage features of Share Point server. Also ODS replication features will be designed into the Phase II ODS but not implemented in the BizTalk workflow model.
**Bibliography**
ESC/AC, “Concept of Operations for SAPM”, 13 August 2003, Hanscom AFB, MA
ESC/AC, “SAPM Warfighter Rapid Acquisition Process (WRAP) FY04 Proposal”, 13 August 2003, Hanscom AFB, MA
William Nelson, Colonel USAF & Daniel Hebert, ESC/AC, “SAPM Proof of Concept Briefing to Lieutenant General Looney”, 16 May 2003, Hanscom AFB, MA
Kathy Harding & Jean Trenary, Developing XML Web Services and Server Components, Microsoft Press 2003
Paul Kulchenko, James Snell, & Doug Tidwell, Programming Web Services with SOAP, O’Reilly 2002
“Workflow Design Patterns”, 20 February 2004
http://tmitwww.tm.tue.nl/research/patterns/patterns.htm
OASIS, 20 February 2004, http://www.oasis-open.org/home/index.php
This is a repository copy of *Unifying Theories of Time with Generalised Reactive Processes*.
White Rose Research Online URL for this paper:
http://eprints.whiterose.ac.uk/127797/
Version: Accepted Version
**Article:**
Foster, Simon David orcid.org/0000-0002-9889-9514, Cavalcanti, Ana Lucia Caneca orcid.org/0000-0002-0831-1976, Woodcock, JAMES Charles Paul orcid.org/0000-0001-7955-2702 et al. (1 more author) (2018) Unifying Theories of Time with Generalised Reactive Processes. Information Processing Letters. pp. 47-52. ISSN 0020-0190
https://doi.org/10.1016/j.ipl.2018.02.017
Unifying Theories of Time with Generalised Reactive Processes
Simon Foster, Ana Cavalcanti, Jim Woodcock, Frank Zeyda
Department of Computer Science, University of York, York, YO10 5DD, United Kingdom
Abstract
Hoare and He’s theory of reactive processes provides a unifying foundation for the formal semantics of concurrent and reactive languages. Though highly applicable, their theory is limited to models that can express event histories as discrete sequences. In this paper, we show how their theory can be generalised by using an abstract trace algebra. We show how the algebra, notably, allows us to also consider continuous-time traces and thereby facilitate models of hybrid systems. We then use this algebra to reconstruct the theory of reactive processes in our generic setting, and prove characteristic laws for sequential and parallel processes, all of which have been mechanically verified in the Isabelle/HOL proof assistant.
Keywords: formal semantics, hybrid systems, process algebra, unifying theories, theorem proving
1. Introduction
The theory of reactive processes provides a generic foundation for denotational semantics of concurrent languages. It was created as part of the Unifying Theories of Programming (UTP) [1, 2] framework, which models computation using predicate calculus. The theory of reactive processes unifies formalisms such as CSP [3], ACP [4], and CCS [5]. This is made possible by its support of a large set of algebraic theorems that universally hold for families of reactive languages. The theory has been extended and applied to several languages, including stateful [6] and real-time languages, with both discrete [7] and continuous time [8, 9].
Technically, the theory’s main feature is its trace model, which provides a way for a process to record an interaction history, using an observational variable \( tr : \text{seq}\,\mathit{Event} \). In the original presentation, a trace is a discrete event sequence, which is standard for languages like CSP. The alphabet can be enriched by adding further observational variables; for example, \( \mathit{ref} : \mathbb{P}\,\mathit{Event} \) to model refusals [1].
Though sequence-based traces are ubiquitous for modelling concurrent systems, other models exist. In particular, the sequence-based model is insufficient to represent continuous evolution of variables as present in hybrid systems. A typical notion of history for continuous-time systems are real-valued trajectories \( \mathbb{R}_{\geq 0} \to \Sigma \) over continuous state \( \Sigma \).
Although the sequence and trajectory models appear substantially different, there are many similarities. For example, in both cases one can subdivide the history into disjoint parts that have been contributed by different parts of the program, and describe when a trace is a prefix of another. By characterising traces abstractly, and thus unifying these different models, we provide a generalised theory of reactive processes whose properties, operators, and laws can be transplanted into an even wider spectrum of languages. We thus enable unification of untimed, discrete-time, and continuous-time languages. The focus of our theory is on traces of finite length, but the semantic framework is extensible.
We first introduce UTP and its applications (Section 2). We then show how traces can be characterised algebraically by a form of cancellative monoid (Section 3), and that this algebra encompasses both sequences and piecewise continuous functions (Section 4). We apply this algebra to generalise the theory of reactive processes, and show that its key algebraic laws are retained in our generalisation, including those for sequential and parallel composition (Section 5).
Our work is mechanised in our theorem prover, Isabelle/UTP [10], which is a semantic embedding of UTP in Isabelle/HOL. We sometimes give proofs, but these merely illustrate the intuition, with the mechanisation being definitive. To the best of our knowledge, this is the most comprehensive mechanised account of reactive processes.
2. Background
UTP is founded on the idea of encoding program behaviour as relational predicates whose variables correspond to observable quantities. Unprimed variables (x) refer to observations at the start, and primed variables (x′) to observations at a later point of the computation. The operators of a programming language are thus encoded in predicate calculus, which facilitates verification through theorem proving. For example, we can specify sequential programming operators as relations:
\[
x := v \triangleq x' = v \land y_1' = y_1 \land \cdots \land y_n' = y_n
\]
\[
P ; Q \triangleq \exists x_0 \bullet P[x_0/x'] \land Q[x_0/x]
\]
\[
P \triangleleft b \triangleright Q \triangleq (b \land P) \lor (\lnot b \land Q)
\]
Assignment \(x := v\) states that \(x'\) takes the value \(v\) and all other variables are unchanged. We define the degenerate form \(I\!I \triangleq (x := x)\), which identifies all variables. Sequential composition \(P ; Q\) states that there exists an intermediate state \(x_0\) on which \(P\) and \(Q\) agree. The conditional \(P \triangleleft b \triangleright Q\) states that if \(b\) is true, \(P\) executes, otherwise \(Q\).
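The following sketch is our own illustration, not part of the paper: it gives a concrete, executable reading of these relational operators, representing a state as a finite map from variable names to integers and a program as a predicate over a before-state and an after-state. The existential in sequential composition is approximated by searching a small finite state space.

```scala
// A minimal sketch of the relational reading of assignment, sequential
// composition and the conditional, over a toy integer state space.
object RelationalSketch {
  type State = Map[String, Int]
  type Pred  = (State, State) => Boolean   // P(s, s') over before/after states

  // x := v : x' = v and all other variables unchanged
  def assign(x: String, v: State => Int): Pred =
    (s, s1) => s1 == s.updated(x, v(s))

  // P ; Q : there exists an intermediate state s0 on which P and Q agree
  // (the existential is approximated by searching a finite state space)
  def seq(p: Pred, q: Pred, space: Iterable[State]): Pred =
    (s, s1) => space.exists(s0 => p(s, s0) && q(s0, s1))

  // P ◁ b ▷ Q : if b holds of the before-state, behave as P, else as Q
  def cond(p: Pred, b: State => Boolean, q: Pred): Pred =
    (s, s1) => if (b(s)) p(s, s1) else q(s, s1)

  def main(args: Array[String]): Unit = {
    val states = for (x <- 0 to 3; y <- 0 to 3) yield Map("x" -> x, "y" -> y)
    val prog   = seq(assign("x", _ => 1), assign("y", s => s("x") + 1), states)
    println(prog(Map("x" -> 0, "y" -> 0), Map("x" -> 1, "y" -> 2)))  // true
  }
}
```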
UTP variables can either encode program data or behavioural information, in which case they are called observational variables. For example, we may have \(t, t' : \mathbb{R}_{\geq 0}\) to record the time before and after execution. These exist to enrich the semantic model and are constrained by healthiness conditions that restrict permissible behaviours. For example, we can impose \(t \leq t'\) to forbid reverse time travel.
Healthiness conditions are expressed as functions on predicates, such as \(HT(P) \triangleq P \land t \leq t'\), the application of which coerces predicates to healthy behaviours. When such functions are idempotent and monotonic, with respect to the refinement order \(\sqsubseteq\), we can show, with the aid of the Knaster-Tarski theorem, that their image forms a complete lattice, which allows us to reason about recursion. Healthiness conditions are often built from compositions: \(H \triangleq H_1 \circ H_2 \circ \cdots \circ H_n\). In this case, idempotence and monotonicity of \(H\) can be shown by proving that each \(H_i\) is monotonic and idempotent, and that each \(H_i\) and \(H_j\) commute. A set of healthy fixed-points, \([H] \triangleq \{P \mid H(P) = P\}\), is called a UTP theory. Theories isolate the aspects of a programming language, such as concurrency, object orientation, and real-time programming. Theories can also be combined by composing their healthiness conditions to enable construction of sophisticated heterogeneous and integrated languages.
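As a small illustration (ours, not from the paper), a healthiness condition such as \(HT\) can be seen as a predicate transformer; the sketch below checks its idempotence pointwise on a handful of sample observations, using a toy alphabet with a single clock variable.

```scala
// A tiny sketch of a healthiness condition as a predicate transformer:
// HT(P) = P ∧ (t ≤ t'), over observations carrying only t and t'.
object HealthinessSketch {
  final case class Obs(t: BigDecimal, tPrime: BigDecimal)   // t, t'
  type Pred = Obs => Boolean

  // The healthiness function HT conjoins the constraint t ≤ t'.
  val HT: Pred => Pred = p => o => p(o) && o.t <= o.tPrime

  def main(args: Array[String]): Unit = {
    val p: Pred = o => o.tPrime == o.t + BigDecimal(1)      // "takes one time unit"
    val samples = for (a <- 0 to 2; b <- 0 to 3)
      yield Obs(BigDecimal(a), BigDecimal(b))
    // Idempotence HT(HT(P)) = HT(P), checked pointwise on the samples.
    println(samples.forall(o => HT(HT(p))(o) == HT(p)(o)))  // true
  }
}
```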
Our focus is the theory of reactive processes, with healthiness condition \(R\), which we formalise in Section 3. Reactive programs, in addition to initial and final states, also have intermediate states, during which the process waits for interaction with its environment. \(R\) specifies that processes yield well-formed traces, and that, when a process is in an intermediate state, any successor must wait for it to terminate before interacting. This theory uses observational variable \(wait\) to differentiate intermediate from final states, and \(tr\) to record the trace.
UTP theories based on reactive processes have been applied to give formal semantics to a variety of languages \([1] [11] [22]\), notably the Circus formal modelling language family \([3]\), which combines stateful modelling, concurrency, and discrete time \([7] [13]\). A similar theory has been used for a hybrid variant of CSP \([9]\) with a modified notion of trace. Though sharing some similarities, these various versions of reactive processes are largely disjoint theories with distinct healthiness conditions. Our contribution is to unify them all under the umbrella of generalised reactive processes.
3. Trace Algebra
In this section, we describe the trace algebra that underpins our generalised theory of reactive processes. We define traces as an abstract set \(T\) equipped with two operators: trace concatenation \(\cdot : T \rightarrow T \rightarrow T\), and the empty trace \(\varepsilon : T\), which obey the following axioms.
**Definition 3.1 (Trace Algebra).** A trace algebra \((T, \cdot, \varepsilon)\) is a cancellative monoid satisfying the following axioms:
\[
x \cdot (y \cdot z) = (x \cdot y) \cdot z \quad \text{(TA1)}
\]
\[
\varepsilon \cdot x = x \cdot \varepsilon = x \quad \text{(TA2)}
\]
\[
x \cdot y = x \cdot z \Rightarrow y = z \quad \text{(TA3)}
\]
\[
x \cdot z = y \cdot z \Rightarrow x = y \quad \text{(TA4)}
\]
\[
x \cdot y = \varepsilon \Rightarrow x = \varepsilon \quad \text{(TA5)}
\]
As expected, $\cdot$ is associative and has left and right units. Axioms TA3 and TA4 show that $\cdot$ is injective in both arguments. As an aside, TA3 and TA4 hold only in models without infinitely long traces, as such a trace $x$ would usually annihilate $y$ in $x \cdot y$. Axiom TA5 states that there are no “negative traces”, and so if $x$ and $y$ concatenate to $\varepsilon$ then $x$ is $\varepsilon$. We can also prove the dual law: $x \cdot y = \varepsilon \Rightarrow y = \varepsilon$. From this algebraic basis, we derive a prefix relation and subtraction operator.
**Definition 3.2 (Trace Prefix and Subtraction).**
$$x \leq y \Leftrightarrow (\exists z \bullet y = x \cdot z)$$
$$y - x \triangleq \begin{cases} \iota z \bullet y = x \cdot z & \text{if } x \leq y \\ \varepsilon & \text{otherwise} \end{cases}$$
Trace prefix, $x \leq y$, requires that there exists $z$ that extends $x$ to yield $y$. Trace subtraction $y - x$ obtains that trace $z$ when $x \leq y$, using the definite description operator (Russell’s $\iota$), and otherwise yields the empty trace. This is slightly different from the standard UTP operator, which is defined only when $x \leq y$. We can prove the following laws about trace prefix.
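A minimal executable sketch (ours, not part of the paper's Isabelle/UTP mechanisation) of Definitions 3.1 and 3.2 follows: the trait captures concatenation and the empty trace, derives prefix and subtraction, and is instantiated with the familiar model of finite event sequences.

```scala
// Sketch of a trace algebra with derived prefix and subtraction.
// subtractOpt plays the role of the ι-description in y − x.
trait TraceAlgebra[T] {
  def empty: T
  def concat(x: T, y: T): T                      // the abstract "·"
  def subtractOpt(y: T, x: T): Option[T]         // Some(z) iff y = x · z

  def prefix(x: T, y: T): Boolean = subtractOpt(y, x).isDefined        // x ≤ y
  def subtract(y: T, x: T): T     = subtractOpt(y, x).getOrElse(empty) // y − x
}

// The canonical sequence-based model: finite event sequences.
object SeqTraces extends TraceAlgebra[List[String]] {
  val empty = Nil
  def concat(x: List[String], y: List[String]) = x ++ y
  def subtractOpt(y: List[String], x: List[String]) =
    if (y.startsWith(x)) Some(y.drop(x.length)) else None
}

object TraceAlgebraDemo {
  def main(args: Array[String]): Unit = {
    println(SeqTraces.prefix(List("a"), List("a", "b")))       // true    (TP2)
    println(SeqTraces.subtract(List("a", "b"), List("a")))     // List(b) (TS4)
    println(SeqTraces.subtract(List("a"), List("b")))          // List()  ("otherwise" branch)
  }
}
```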
**Theorem 3.1 (Trace Prefix Laws).** For $x, y, z : T$, $(T, \leq)$ is a partial order and
\begin{align*}
\varepsilon & \leq x & \text{(TP1)} \\
x & \leq x \cdot y & \text{(TP2)} \\
x \cdot y \leq x \cdot z & \Leftrightarrow y \leq z & \text{(TP3)} \\
x \leq y & \Leftrightarrow x \cdot (y - x) = y & \text{(TP4)}
\end{align*}
TP1 tells us that $\varepsilon$ is the smallest trace, TP2 that concatenation builds larger traces, and TP3 that concatenation is monotonic in its right argument. We also have the following trace subtraction laws.
**Theorem 3.2 (Trace Subtraction Laws).**
\begin{align*}
x - \varepsilon & = x & \text{(TS1)} \\
\varepsilon - x & = \varepsilon & \text{(TS2)} \\
x - x & = \varepsilon & \text{(TS3)} \\
(x \cdot y) - x & = y & \text{(TS4)} \\
(x - y) - z & = x - (y \cdot z) & \text{(TS5)} \\
(x \cdot y) - (x \cdot z) & = y - z & \text{(TS6)} \\
y \leq x \land x - y = \varepsilon & \Leftrightarrow x = y & \text{(TS7)} \\
x \leq y & \Rightarrow x \cdot (y - x) = y & \text{(TS8)}
\end{align*}
Laws TS1–TS3 relate trace subtraction and the empty trace. TS4 shows that subtraction inverts concatenation. TS5 shows that subtracting two traces is equivalent to subtracting their concatenation. TS6 shows that subtraction can be used to remove a common prefix. TS7 shows that two traces are equal if, and only if, the first is a prefix of the second and they subtract to $\varepsilon$. TS8 shows that a trace can be split into its prefix and suffix.
In the next section, we show that standard notions of traces are models. Afterwards, in Section 5 we use the algebra to create the generalised theory of reactive processes.
4. Trace Models
In this section we describe three trace models: positive reals, finite sequences, and timed traces. Other models are possible; for example, we can further extend timed traces to “super-dense time” [14] to encompass multiple distinguished discrete state updates at a time instant. We leave study of other models as future work.
Positive real numbers $\mathbb{R}_{\geq 0}$ form one of the simplest models of the trace algebra.
**Theorem 4.1.** $(\mathbb{R}_{\geq 0}, +, 0)$ is a trace algebra.
*Proof.* $+$ is clearly associative, cancellative, and has 0 as its left and right unit. Moreover, since $+$ is commutative and $\mathbb{R}_{\geq 0}$ contains no negative numbers, it has no non-trivial additive inverses, which establishes TA5. $\Box$
Positive reals can be used to express timed programs with a clock variable $\text{time} : \mathbb{R}_{\geq 0}$ [13]. Finite sequences, unsurprisingly, also form a trace algebra, when we set $\cdot$ to sequence concatenation ($\frown$) and $\varepsilon$ to the empty sequence ($\langle\rangle$).
**Theorem 4.2.** $(\text{seq}\,\mathit{Event}, \frown, \langle\rangle)$ is a trace algebra.
Though simple, we note that the sequence-based trace model has been shown to be sufficient to characterise both untimed [6] and discrete time modelling languages [13].
A more complex model is that of piecewise continuous functions, for which we adopt and refine a model called timed traces (TT) [16]. A timed trace is a partial function of type $\mathbb{R}_{\geq 0} \rightarrow \Sigma$, for continuous state type $\Sigma$, which represents the system’s continuous evolution with respect to time.
In our model we also require that timed traces be piecewise continuous, to allow both continuous and discrete information. A timed trace is split into a finite sequence of continuous segments (the figure illustrating such a trajectory in the original paper is omitted here).
Describing such segments requires limits and continuity, and consequently we require that $\Sigma$ be a topological space, such as $\mathbb{R}^n$, though it can also contain discrete topological information, like events. Continuous variables are projections such as $x : \Sigma \to \mathbb{R}$. We give the formal model below.
**Definition 4.1 (Timed Traces).**
$$
\mathit{TT} \triangleq \left\{ f : \mathbb{R}_{\geq 0} \rightarrow \Sigma \;\middle|\;
\begin{array}{l}
\exists t : \mathbb{R}_{\geq 0} \bullet \mathrm{dom}(f) = [0, t) \;\land \\
\quad \big( t > 0 \Rightarrow \exists I : \text{seq}\,\mathbb{R}_{\geq 0} \bullet \{0, t\} \subseteq \mathrm{ran}(I) \subseteq [0, t] \;\land \\
\qquad (\forall n < \#I - 1 \bullet f \text{ cont-on } [I_n, I_{n+1})) \big)
\end{array}
\right\}
$$
$$
\text{seq}\,\mathbb{R}_{\geq 0} \triangleq \{x : \text{seq}\,\mathbb{R} \mid \forall n < \#x - 1 \bullet x_n < x_{n+1}\}
\qquad
f \text{ cont-on } [m, n) \triangleq \forall t \in [m, n) \bullet \lim_{x \to t^+} f(x) = f(t)
$$
A timed trace is a partial function $f$ with domain $[0, t)$, for end point $t \geq 0$. When the trace is non-empty ($t > 0$), there exists an ordered sequence of instants $I$ giving the bounds of each segment. $\text{seq}\,\mathbb{R}_{\geq 0}$ is the subset of finite real sequences such that for every index $n$ less than $\#x - 1$, $x_n < x_{n+1}$. $I$ must naturally contain at least $0$ and $t$, and only values between these two extremes. The timed trace $f$ is required to be continuous on each interval $[I_n, I_{n+1})$. The operator $f \text{ cont-on } A$ denotes that $f$ is continuous on the range given by $A$. We now introduce the core timed trace operators, which take inspiration from Höfner’s algebraic trajectories [17].
**Definition 4.2 (Timed-trace Operators).**
$$
\text{end}(f) \triangleq \min(\mathbb{R}_{\geq 0} \setminus \text{dom}(f))
$$
$$
\varepsilon \triangleq \emptyset,\qquad f \cdot g \triangleq f \cup (g \gg \text{end}(f))
$$
Auxiliary function $f \gg n$ shifts the indices of a partial function $f : \mathbb{R}_{\geq 0} \to A$ to the right by $n : \mathbb{R}_{\geq 0}$, and has definition $\lambda x \bullet f(x - n)$. The operator $\text{end}(f)$ gives the end time of a trace $f : \text{TT}$ by taking the infimum of the real numbers excluding the domain of $f$. The empty trace $\varepsilon$ is the empty function. Finally, $f \cdot g$ shifts the domain of $g$ to start at the end of $f$, and takes the union. We establish laws governing these trace operators.
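The sketch below is our own, deliberately simplified executable counterpart of timed traces: each trace is a finite list of constant-valued segments standing in for continuous segments, so that end and concatenation have direct definitions, while the shift operator \( \gg \) becomes implicit in the list representation.

```scala
// A crude stand-in for piecewise continuous timed traces: a finite list of
// (duration, value) segments, each constant on its (left-closed) interval.
object TimedTraceSketch {
  final case class Segment[S](duration: BigDecimal, value: S) {
    require(duration > BigDecimal(0))
  }
  type TT[S] = List[Segment[S]]

  def end[S](f: TT[S]): BigDecimal = f.map(_.duration).sum   // end(f)
  def empty[S]: TT[S] = Nil                                   // ε
  def concat[S](f: TT[S], g: TT[S]): TT[S] = f ++ g           // f · g

  // Sample the trace at time t (segments are left-closed, as in cont-on [m, n)).
  def sample[S](f: TT[S], t: BigDecimal): Option[S] = f match {
    case Nil => None
    case Segment(d, v) :: rest =>
      if (t < d) Some(v) else sample(rest, t - d)
  }

  def main(args: Array[String]): Unit = {
    val f = List(Segment(BigDecimal(1), "x=0"))
    val g = List(Segment(BigDecimal(2), "x=1"))
    println(end(concat(f, g)))                       // 3: end(f·g) = end(f)+end(g) (T4)
    println(sample(concat(f, g), BigDecimal(1.5)))   // Some(x=1)
  }
}
```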
**Theorem 4.3 (Timed-trace Laws).**
$$
(f \gg m) \gg n = f \gg (m + n) \quad (T1)
$$
$$
(f \cup g) \gg n = (f \gg n) \cup (g \gg n) \quad (T2)
$$
$$
\text{end}(\varepsilon) = 0 \quad (T3)
$$
$$
\text{end}(x \cdot y) = \text{end}(x) + \text{end}(y) \quad (T4)
$$
T1 shows that shifting a function twice equates to a single shift by the sum of the two offsets. T2 shows that shift distributes through function union. T3 shows that the length of the empty trace is 0, and T4 shows that the length of a concatenation is the sum of the lengths of its parts. $\mathit{TT}$ is closed under trace concatenation.
**Theorem 4.4 (Trace Concatenation Closure).** If there exist $m, n : \mathbb{R}_{\geq 0}$ such that $\text{dom}(t_1) = [0, m)$ and $\text{dom}(t_2) = [0, n)$, then $t_1, t_2 \in \mathit{TT}$ if, and only if, $t_1 \cdot t_2 \in \mathit{TT}$.
This theorem tells us that decomposition of a timed trace always yields timed traces, provided both $t_1$ and $t_2$ have contiguous domains. Finally, trace concatenation satisfies our trace algebra.
**Theorem 4.5.** $(\mathit{TT}, \cdot, \varepsilon)$ forms a trace algebra.
*Proof.* For illustration, we show the derivation for associativity. The other proofs are simpler.
$$
\begin{aligned}
x \cdot (y \cdot z) &= x \cup ((y \cup (z \gg \mathrm{end}(y))) \gg \mathrm{end}(x)) \\
&= (x \cup (y \gg \mathrm{end}(x))) \cup (z \gg (\mathrm{end}(x) + \mathrm{end}(y))) \\
&= (x \cdot y) \cup (z \gg (\mathrm{end}(x) + \mathrm{end}(y))) \\
&= (x \cdot y) \cup (z \gg \mathrm{end}(x \cdot y)) \\
&= (x \cdot y) \cdot z
\end{aligned}
$$
This model provides the basis for hybrid computation. We introduce the theory in the next section.
5. Generalised Reactive Processes
Here, we use our trace algebra to provide a generalised theory of reactive processes. We prove the key laws of reactive processes, thus demonstrating the conservative nature of our theory. Many of the properties here have been previously proved [2], but we restate and prove many of them due to our weakening of the trace model and some small differences. Another novelty is that all these theorems have been mechanised in our Isabelle/UTP repository. Following [1, 2] we define the theory in terms of two pairs of observational variables:
- \( wait, wait' : \mathbb{B} \) – describe whether the previous or current process, respectively, is in an intermediate state;
- \( tr, tr' : T \) – the trace that occurred prior to and after execution of the current process, in terms of a trace algebra \((T, \cdot, \varepsilon)\).
Our theory does not contain refusal variables \( ref, ref' \), as these are not always necessary to describe reactive processes [13]. We describe three healthiness conditions, namely \( R_1 \), \( R_{2c} \), and \( R_3 \). \( R_1 \) and \( R_3 \) are already presented in [1]; for their \( R_2 \) we have a different formulation, which we call \( R_{2c} \).
**Definition 5.1 (Reactive Healthiness Conditions).**
\[
\begin{aligned}
R_1(P) & \triangleq P \land tr \leq tr' \\
R_{2c}(P) & \triangleq P[\varepsilon, tr' - tr / tr, tr'] \triangleleft tr \leq tr' \triangleright P \\
R_3(P) & \triangleq I\!I \triangleleft wait \triangleright P \\
R & \triangleq R_3 \circ R_{2c} \circ R_1
\end{aligned}
\]
\( R_1 \) states that \( tr \) is monotonically increasing: processes are not permitted to undo past events. \( R_{2c} \) states that a process must be history independent: the only part of the trace it may constrain is \( tr' - tr \), that is, the portion since the previous observation \( tr \). Specifically, if the history is deleted, by substituting \( \varepsilon \) for \( tr \), and \( tr' - tr \) for \( tr' \), then the behaviour of the process is unchanged. Our formulation of \( R_{2c} \) deletes the history only when \( tr \leq tr' \), which ensures that \( R_{2c} \) does not depend on \( R_1 \), and thus commutes with it. Finally, \( R_3 \) states that if a prior process is intermediate (\( wait \)) then the current process must identify all variables.
We compose the three to yield \( R \), the overall healthiness condition of reactive processes. An example \( R \) healthy predicate is
\[
R_3(tr' = tr \cdot \langle a \rangle \land v' = v)
\]
which extends the trace with a single event \( a \) and leaves program variable \( v \) unchanged.
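As a small worked check (our own illustration, not from the paper), the trace conjunct of this example is a fixed point of \( R_{2c} \): deleting the history gives
\[
(tr' = tr \cdot \langle a \rangle)[\varepsilon, tr' - tr / tr, tr'] \;=\; (tr' - tr = \langle a \rangle)
\]
and, under \( tr \leq tr' \), laws TS4 and TS8 give \( tr' - tr = \langle a \rangle \Leftrightarrow tr' = tr \cdot \langle a \rangle \), so the predicate is unchanged. We now show that \( R \) is idempotent and monotonic.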
**Theorem 5.1 (R idempotence and monotonicity).**
\[
R = R \circ R \quad \text{and} \quad P \sqsubseteq Q \Rightarrow R(P) \sqsubseteq R(Q)
\]
A corollary of Theorem 5.1 is that reactive processes form a complete lattice.
**Theorem 5.2.** Reactive processes form a complete lattice ordered by \( \sqsubseteq \), with infimum \( \bigwedge A \) and supremum \( \bigvee A \), for \( A \subseteq [R] \).
This, in particular, provides us with specification and reasoning facilities about recursive reactive processes using the fixed-point operators.
Having stated the lattice theoretic properties of reactive processes, we move on to the relational operators. Intuitively, \( R_1 \) and \( R_{2c} \) together ensure that the reactive behaviour of a process contributes an extension \( t \) to the trace.
**Theorem 5.3 ($R_1$-$R_{2c}$ trace contribution).**
\[
R_1(R_{2c}(P)) = (\exists t \bullet P[\varepsilon, t/tr, tr'] \land tr' = tr \cdot t)
\]
This shows that for any \( R_1 \)-\( R_{2c} \) process there exists a trace extension \( t \) recording its behaviour, and that \( tr' \) is the prior history appended with this extension. Aside from illustrating \( R_1 \) and \( R_{2c} \), this allows us to restate a process containing \( tr \) and \( tr' \) as one containing only the extension logical variable \( t \), which provides a more natural entry point for reasoning about the trace contribution of a process. In particular, we can prove a related law about sequential composition of reactive processes.
**Theorem 5.4 ($R_1$-$R_{2c}$ sequential).** If \( P \) and \( Q \) are \( R_1 \)-\( R_{2c} \)-healthy, then
\[
P ; Q = \exists t_1, t_2 \bullet \left( P[\varepsilon, t_1/tr, tr'] \land Q[\varepsilon, t_2/tr, tr'] \land tr' = tr \cdot t_1 \cdot t_2 \right)
\]
**Proof.** By Theorem 5.3 and relational calculus. □
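As a small illustration (ours, not from the paper), two processes that each extend the trace by a single event compose, by Theorem 5.4 with \( t_1 = \langle a \rangle \) and \( t_2 = \langle b \rangle \), as
\[
(tr' = tr \cdot \langle a \rangle) ; (tr' = tr \cdot \langle b \rangle) \;=\; (tr' = tr \cdot \langle a \rangle \cdot \langle b \rangle) \;=\; (tr' = tr \cdot \langle a, b \rangle).
\]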
This theorem shows that two sequentially composed processes make their own unique contributions to the trace, without sharing or interference. When applied in the context of a timed trace, for example, it allows us to subdivide the trajectory into segments, which we can reason about separately. This theorem allows us to demonstrate closure of \( R_1 \)-\( R_{2c} \) predicates under sequential composition.
**Theorem 5.5 ($R_1$-$R_{2c}$ sequential closure).** If $P$ and $Q$ are both $R_1$ and $R_{2c}$ healthy then
$$R_1(R_{2c}(P ; Q)) = P ; Q$$
Closure of $R_3$ has previously been shown [2], and we have mechanised this proof. This allows us to prove the following theorem.
**Theorem 5.6 (R sequential closure).** If $P$ and $Q$ are both $R$ healthy then $P; Q$ is $R$ healthy.
We have now shown that reactive processes are closed under the lattice and relational operators, and can use these results to demonstrate the algebraic nature of the theory, by showing that reactive processes form a weak unital quantale.
**Theorem 5.7.** $R$ predicates form a weak unital quantale. Provided $A \subseteq [R]$ and $A \neq \emptyset$, the following laws hold:
- $P ; (\bigsqcup_R A) = (\bigsqcup_R Q \in A \bullet P ; Q)$ \hspace{1cm} (Q1)
- $(\bigsqcup_R A) ; Q = (\bigsqcup_R P \in A \bullet P ; Q)$ \hspace{1cm} (Q2)
- $P ; I\!I = I\!I ; P = P$ \hspace{1cm} (Q3)
**Proof.** Since $\bigsqcup_R A = R(\bigsqcup A)$ and sequential composition distributes through $\bigsqcup$ on the left and right, it suffices to show that $R$ is continuous: it distributes through non-empty infima. $\Box$
Q1 and Q2 are the quantale laws, which state that sequential composition distributes through infima. The requirement of non-emptiness is why the quantale is called “weak”. Finally, Q3 makes the weak quantale unital. Unital quantales are an important algebraic structure that give rise to Kleene algebras [18]. They augment a complete lattice with the laws above, the combination of which provides a minimal algebraic foundation for substantiating the point-free laws of sequential programming [18].
Our final result is closure under parallel composition. The UTP provides an operator called parallel-by-merge [1], $P \parallel_M Q$, whereby the composition of processes $P$ and $Q$ separates their states, calculates their independent concurrent behaviours, and then merges the results. The operator is parametric over a merge predicate $M$ that specifies how synchronisation is performed. Different programming language semantics require formation of a bespoke merge predicate depending on their concurrency scheme. We give a slightly simplified version of the UTP definition, which is nevertheless equivalent.
**Definition 5.2 (Parallel-by-merge).**
$$P \parallel_M Q \triangleq ([P]_0 \land [Q]_1 \land v' = v) ; M$$
Operator $[P]_n$ augments the after variables of $P$ with index $n$; for example:
$$[x' = 7 \cdot y]_0 = (0.x' = 7 \cdot y)$$
The three conjuncts rename the after variables of $P$ and $Q$ to ensure no clashes, and copy all before variables ($v$) to after variables, respectively. Thus, $M$ has access to the state of each variable before execution ($v$), and from the respective composed processes ($0.v$ and $1.v$). Merge predicate $M$ can then invoke $tr' = f(0.tr, 1.tr)$ with a suitable trace merge function $f$, such as interleaving.
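As one concrete possibility (our own sketch, not the paper's), a trace-merge function \( f \) for sequence-based traces can enumerate the interleavings of the two component contributions:

```scala
// All interleavings of two event sequences, as one possible trace-merge
// function for a merge predicate over sequence-based traces.
object MergeSketch {
  def interleavings[E](xs: List[E], ys: List[E]): List[List[E]] = (xs, ys) match {
    case (Nil, _) => List(ys)
    case (_, Nil) => List(xs)
    case (x :: xt, y :: yt) =>
      interleavings(xt, ys).map(x :: _) ++ interleavings(xs, yt).map(y :: _)
  }

  def main(args: Array[String]): Unit =
    println(interleavings(List("a"), List("b", "c")))
    // List(List(a, b, c), List(b, a, c), List(b, c, a))
}
```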
The healthiness conditions $R_1$ and $R_3$ can be directly applied to $M$, modulo some differences in alphabet. $R_{2c}$ requires adaptation, as it is possible to access the trace history through the two indexed traces, $0.tr$ and $1.tr$, in addition to $tr$. It is, therefore, necessary to delete the history from these two in the revised healthiness condition $R_{2m}$ below.
**Definition 5.3 ($R_{2m}$ for merge predicates).**
$$R_{2m}(M) \triangleq M[\varepsilon, tr' - tr, 0.tr - tr, 1.tr - tr \,/\, tr, tr', 0.tr, 1.tr] \triangleleft tr \leq tr' \triangleright M$$
$R_{2m}$ has the same form as $R_{2c}$, except that it deletes the history from three traces: $tr'$, $0.tr$, and $1.tr$. From $M$'s perspective, $0.tr$ and $1.tr$ contain the traces the parallel processes have executed. Thus we need to delete the history, through substitution, from these as well, so that they contain only the contributions of their respective processes. This allows us to show that the overall composition is $R_{2c}$. We define a condition for merge predicates – $R_m \triangleq R_1 \circ R_{2m} \circ R_3$ – and prove the following final theorem.
**Theorem 5.8.** $P \parallel_M Q$ is $R$ healthy provided that $P$, $Q$ are $R$ healthy, and $M$ is $R_m$ healthy.
Thus our generalised theory of reactive processes is conservative and unifies the denotational semantics of concurrent programming.
6. Conclusion
Traces are ubiquitous in modelling of program history. Here, we have shown how a generalised foundation for their semantics can be given in terms of a trace algebra, and presented some important models, notably piecewise-continuous functions. Finally, we have applied it to reconstruct Hoare and He's model of reactive processes, with some important additions of our own, including the revision of $R_2$, additional theorems about reactive relations, and lifting of the healthiness conditions to parallel composition. All of the theorems described herein have been mechanised in Isabelle/UTP [10].
In the future we will apply this theory of reactive processes to give a new model to the UTP hybrid relational calculus that we have previously created to give denotational semantics to Modelica and Simulink. Moreover, we will use our theory to describe generalised reactive designs, a UTP theory that justifies combined use of concurrent and assertional reasoning. This will enable the construction of verification tools on top of our Isabelle/HOL embedding for concurrent and hybrid programming languages.
We also aim to explore weakenings of the trace algebra and healthiness conditions to support larger classes of reactive process semantics. For example, weakening of the trace cancellation laws could enable representation of infinite traces in order to support reactive processes with unbounded non-determinism. Moreover, $R_{2c}$ currently prevents a process from depending on an absolute start time with respect to a global clock. In the future this could be relaxed, either at the model or theory level, to support time-variant real-time and hybrid processes.
Acknowledgements
This work is funded by EU H2020 project “INTO-CPS” grant agreement 644047. We would also like to thank Dr. Jeremy Jacobs, and also our anonymous reviewers, for their helpful feedback.
References
A System of Information Systems to Capitalize Resources of Collaborative Activities: the ECOPACK Project
Zoubida Afoutni, Claude Moulin, Marie-Hélène Abel, Majd Saleh, Véronique Misséri
To cite this version:
Zoubida Afoutni, Claude Moulin, Marie-Hélène Abel, Majd Saleh, Véronique Misséri. A System of Information Systems to Capitalize Resources of Collaborative Activities: the ECOPACK Project. 13th Annual International Conference on System of Systems Engineering (SoSE 2018), Jun 2018, Paris, France. pp.82-88, 10.1109/SYSOSE.2018.8428751 . hal-01996313
HAL Id: hal-01996313
https://hal.science/hal-01996313
Submitted on 5 Jul 2021
ABSTRACT
This paper describes a System of Information Systems (SoIS) to capitalize heterogeneous resources of collaborative activities in the perspective of strategic analysis. The decentralization of work processes has led organizations to respond to their dynamic environments and nomadic practices by using collaborative software such as brainstorming environments. The outputs of a brainstorming are huge amounts of knowledge that need to be shared, capitalized and structured to generate support for new discussions. While preparing, or during, a brainstorming, participants may use other systems and web tools to gather more information. This information constitutes external resources that improve the efficiency of the brainstorming and has to be capitalized in the same way as the knowledge produced during the brainstorming, to “re-invent” the way strategic analysis is carried out. It is therefore a matter of managing a set of resources generated by a set of independent systems. For this, we propose a solution based on the SoIS approach. More precisely, we propose a solution to integrate a collaborative application called ECOPACK within a larger SoIS called MEMORAeSoIS. ECOPACK is a digital ecosystem that meets the needs of ideation, innovation and strategic analysis. It offers several functionalities such as the capitalization of resources produced during a meeting. However, as mentioned above, participants may use resources coming from other systems. To allow external resources to be capitalized, we use MEMORAeSoIS, which provides the ability to manage resources of autonomous systems. We illustrate our proposition by a scenario on the challenges of France's naval defense. This paper thus provides a solution based on the System of Information Systems approach to capitalize collaborative activities.
CCS CONCEPTS
•Information systems → Collaborative and social computing systems and tools; Wrappers (data mining); •Computer systems organization → Architectures;
KEYWORDS
System of Information Systems, Knowledge capitalization, Brainstorming Platform.
1 INTRODUCTION
The emergence of technological developments and communication technologies has led to a new form of work known as collaborative work. Brainstorming environments play an important role in enhancing collaborative work in organizations by providing stakeholders with a tool allowing them to generate innovative ideas, increase creative efficiency or find solutions to different problems. During brainstorming activities, participants may use several systems and web tools with dedicated purposes to accomplish their tasks. Among these systems we can cite systems to comment, take notes and capitalize resources of collaborative activities, project management software, social networks, wiki systems, etc. For example, a stakeholder may use a TiddlyWiki to take some notes about the topic of the brainstorming while preparing it. From this point of view, participants in a brainstorming session use a set of resources coming from autonomous and heterogeneous systems to collaborate and accomplish their activities. To get these resources, participants have to query each system or web tool independently. This makes the storage, sharing and capitalization of these resources difficult. Also, the outputs of a brainstorming are huge amounts of knowledge that may be interesting for other projects, help to explain some decisions made during the brainstorming, etc. The question now is how to manage, capitalize, share and reuse all
resources used and produced in collaborative activities? To answer this question there is a need for an adequate architecture and a robust semantic information system allowing the gathering, representation and storage of heterogeneous information coming from different systems. We are thus interested in the capitalization and reuse of heterogeneous resources of independent systems. More precisely, we are interested in managing resources used and produced by a collaborative application called ECOPACK. It is a digital ecosystem that meets the needs of ideation, innovation and strategic analysis. To reach our goal, we use the System of Information Systems (SoIS) approach. In [10], the authors propose a SoIS called MEMORAeSoIS to manage heterogeneous resources of different systems from a single point. The MEMORAeSoIS architecture has the advantage of being flexible, which facilitates adding a new system without an important development effort. We have thus chosen this platform to integrate the ECOPACK platform in order to: (i) offer users the possibility to easily access various systems and web tools from a single point; (ii) allow users to capitalize, share and reuse all resources used and produced during their collaborative activities.
This paper is organized as follows: Section 2 presents brainstorming collaborative activities and the System of Information Systems approach. In Section 3, we first present the ECOPACK and MEMORAeSoIS platforms, then we describe a use case to illustrate our proposal and how ECOPACK has been integrated within MEMORAeSoIS. In Section 4, conclusions and future work are presented.
2 LITERATURE REVIEW
2.1 Brainstorming collaborative activities
The brainstorming approach was coined by Osborn [8], who defined a set of rules aiming to stimulate ideation productivity. Brainstorming can be seen as a training process to stimulate cognitive processes for imagination and flexibility in creative thinking. Two kinds of brainstorming activities can be distinguished [7]: (i) participants are alone and prepare future collaborative sessions; (ii) participants work during collaborative sessions involving one or more teams.
In the first case, even if participants are alone, their activities are considered collaborative because they are accomplished for the benefit of a group of workers [7]. For example, people insert a new document into a common repository, or annotate a document or a fragment of a document produced by others. In this type of activity, participants use a tool designed for personal PCs or a web application. While preparing a brainstorming session, participants use different information systems and web tools, for example a tool facilitating project management such as Trello (https://trello.com/) or a Kanban system. The set of systems and tools used while preparing a brainstorming session produces a set of resources that are in most cases stored on the participants' PCs. To share these resources, participants have to use, for example, an email system or a file hosting service such as DropBox. However, there is no tool for capitalizing heterogeneous resources from different systems.
In the second case, during brainstorming sessions, people can progressively agree on some definitions and on some decisions. To do this, participants use dedicated tools such as iMindMap or Mindmeister (https://www.mindmeister.com/). These tools allow users to discuss, make notes and visualize data of a brainstorming session. However, once again, discussions and argumentation may require the study of external resources that are not accessible from the brainstorming tools. Consequently, to carry out collaborative activities, it is necessary to have not only a tool to discuss with, visualize data and make comments, but also a tool to access other useful systems, to share all resources, and to keep track of external resources (from other systems) and of the resources produced during brainstorming activities.
2.2 System of systems approach
The term System-of-Systems (SoS) is widely recognized and is often used to describe a complex assembly of distributed stand-alone parts. Its application area spans from its original military context to other domains, especially systems engineering. Various efforts have been made to give a common understanding of the SoS concept and its characteristics. In [5], “An SoS is defined as a set or arrangement of systems that results when independent and useful systems are integrated into a larger system that delivers unique capabilities”. A capability is the ability to achieve a desired effect. In [9], a SoS is a collection of systems tied together to create a more complex system. The resulting system is able to achieve a global mission that its subsystems alone cannot fulfill. A SoS has five key features [6]: (i) operational independence of elements, (ii) managerial independence of elements, (iii) evolutionary development, (iv) emergent behavior, (v) geographical distribution.
In [11], the authors go further in defining an evolutionary SoS by adding two principles: (i) the complexity of the SoS framework does not grow as constituent systems are added, removed, or replaced; (ii) the constituent systems do not need to be re-engineered as other constituent systems are added, removed, or replaced.
Also, according to the Systems Engineering Guide for SoS [5], an SoS can be classified according to the way it is managed and its openness to change and new capabilities:
- Virtual SoS: in this type of SoS, there is no central management authority. Large-scale behavior may emerge. This type of SoS must rely on invisible mechanisms for maintenance.
- Collaborative SoS: the component systems interact (more or less voluntarily) to reach agreed upon central goals. The central players collectively decide how to provide or deny service, thereby providing some means of enforcing and maintaining standards.
- Acknowledged SoS: in this type of SoS, changes in the systems are based on collaboration between the SoS and the system. The SoS has recognized objectives, a designated manager, and resources for the SoS;
however, the constituent systems retain their independent ownership, objectives, and development approaches.
• Directed SoS are those in which the integrated system-of-systems is built and managed to reach specific purposes. It is centrally managed during long-term operation to continue to fulfill those purposes as well as any new ones the system owners might wish to address. The component systems maintain an ability to operate independently, but their normal operational mode is subordinated to the central managed purpose.
The SoS approach promotes a new way of thinking for solving unprecedented and complex challenges. It does not impose specific tools, methods, or practices; instead, the development and maintenance of such systems rely on the principles cited above. These principles serve the purpose of providing a unifying guide to define the SoS architecture and identify the interfaces between the systems. The interfaces are the connective points that enable the individual systems to interoperate as a whole system. The maintenance of the interfaces enables the constituent systems to evolve while remaining in, and contributing to, the integrated whole.
We claim that the SoS approach is an adequate solution to the problem of capitalizing resources of independent systems.
3 OUR PROPOSAL
This paper addresses the question of capitalizing resources of collaborative activities, illustrated by the integration of the ECOPACK platform and MEMORAeSoIS. In this section, we first present these two platforms and then the interaction between ECOPACK and MEMORAeSoIS.
3.1 ECOPACK brainstorming platform
ECOPACK is a brainstorming digital platform that has been developed to meet the needs of ideation, innovation and strategic analysis of experts groups. This digital platform follows the knowledge ecosystem vision that fosters the dynamic evolution of knowledge interactions between users in order to enhance decision-making and innovation. Technically, ECOPACK platform aims at:
• providing a multi-user platform that supports several devices (tablets, smartphones, personal computers, etc.) to allow different forms of collaboration;
• defining collaborative applications where each user benefits from different types of interaction and activity and is able to exploit his/her own resources to collaborate.
ECOPACK manages a set of projects where different types of data are grouped as items, presented and gathered by a main module (Fig. 1). One originality of ECOPACK is that it provides a dynamic representation of data as a graph, which facilitates its interpretation by experts. The graph can be shown in different ways following several types of graph drawing algorithms, including those that create hierarchical, organic, orthogonal and circular layouts. Indeed, the different ways of visualizing a graph provide different insights, and hidden relationships and interesting patterns are revealed. This can help to make relevant observations, to carry out a pertinent strategic analysis, etc. There is also the ability to zoom in and out of the graph. In addition, the main module provides the possibility to create clusters within the graph. Clusters can be modified, and nodes can be added to or removed from clusters. The cluster view can also be toggled on or off to show the graph with or without clusters, based on the course of the discussion taking place. Participants can create notes related to a certain aspect of the graph shown in the main module. The creation of notes is handled by a separate dedicated application that resides on the participant's device. Notes are then sent to the main module to be shown in the graph for everyone to see and discuss. Furthermore, notes can
target the graph as a whole or a specific part of it. A note can also target a node or an edge, or even a group of nodes and edges combined as a cluster. Another feature is the dashboard of experts' activities on the platform, which is useful to give precise statistical information about experts' contributions. The ECOPACK platform thus provides several functionalities allowing participants to accomplish a brainstorming session or conduct a strategic analysis session based on data presented in the form of a map. In addition, ECOPACK allows users to keep track of the results of their activities by means of automatic reports (a hypothetical sketch of a note's structure is given below).
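As a purely hypothetical sketch (ours; the actual ECOPACK data model is not detailed here), a note and its possible targets in the shared graph might be modelled as follows.

```scala
// Hypothetical data model for a note attached to elements of the shared graph.
object EcopackNoteSketch {
  sealed trait Target
  final case class NodeTarget(nodeId: String)            extends Target
  final case class EdgeTarget(from: String, to: String)  extends Target
  final case class ClusterTarget(nodeIds: Set[String])   extends Target
  case object WholeGraph                                  extends Target

  final case class Note(author: String, text: String, target: Target)

  def main(args: Array[String]): Unit = {
    val n = Note("expert-1", "These suppliers look interlinked",
                 ClusterTarget(Set("supplier-A", "supplier-B")))
    println(n)
  }
}
```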
3.2 MEMORAeSoIS platform
The MEMORAeSoIS platform [10] was developed to meet the need of accessing, sharing and capitalizing resources coming from independent and autonomous systems. MEMORAeSoIS is based on the MEMORAe model [1]. Therefore, before giving more details about MEMORAeSoIS, we first introduce the MEMORAe approach.
The MEMORAe approach is the combination of a model and a web platform to manage heterogeneous knowledge in an organization. It is based on the OWL language and semantic web standards (FOAF, SIOC, BIBO). Users of the MEMORAe platform are given access to several knowledge bases through a semantic map. The latter describes the knowledge in the base by means of a terminology shared within the organization. The main advantage of this approach is that it allows indexing all types of resources around a semantic map. Resources are defined as "information vectors" and are distinguished according to whether they are simple or complex. A simple resource is a whole; a document or a note are examples of simple resources. Complex resources consist of other resources, such as a wiki or an agenda. In this way, MEMORAe takes into account documentary resources and social resources. The MEMORAe platform thus allows users to manage various types of resources; however, all of them are native to the platform. To give users the possibility to access and capitalize resources created by external information systems, the MEMORAeSoIS platform was proposed, based on the capabilities of the MEMORAe approach (indexing, annotating, sharing and tracing resources).
The MEMORAeSoIS platform has a "Leader/Follower" architecture. That is, the MEMORAe platform is the leader system responsible for the orchestration of the SoIS as a knowledge base serving all other Information Systems. Each follower system works independently and has its own services/functions and databases. So, while some systems openly provide an API for requesting their services, other systems are closed and operate as black boxes. Information can be represented in different ways within different systems; thus, the SoIS might have trouble accessing information if the services of a system are not available through an API. Therefore, to solve this interoperability issue, two methods exist [3]: (i) creating a software model of each system, which collects data from the system and generates the outputs; (ii) creating a high-level common language to describe data, where each system represents its data such that other systems may interpret it. Creating a common language is a complex task and requires knowing the information model of each system. In MEMORAeSoIS the authors have opted for the first solution. Resources of an external Information System are made available to the user by means of a data wrapper and a server/observer (details are presented in [10]). It is necessary, however, that each system provides an API allowing an external system to interact with it. The ideas present in this SoIS architecture can be summarized in the following list (a minimal wrapper sketch is given after the list):
- The user is provided with access to several Information Systems.
- The user can choose which Information System(s) he/she would like to connect to. After connecting to the various systems of choice, the user can access resources and services in their respective environments.
- It is then possible to work with resources produced by different Information Systems from within the SoIS.
- The resources produced by different Information Systems are managed in the SoIS by means of the services provided by the leader system.
- Using the services of the leader system to manage knowledge within the SoIS enables the user to index, share and trace resources within the SoIS.
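The sketch below is our own; the interface, endpoint path and class names are hypothetical, since the actual MEMORAeSoIS and ECOPACK APIs are not detailed here. It illustrates the data-wrapper idea: each follower system is exposed through a small common interface, and a wrapper implements it by calling the follower's API and returning references that the leader system can index.

```scala
// Hypothetical data-wrapper interface for a follower system in the SoIS.
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

final case class ResourceRef(id: String, title: String, sourceSystem: String)

trait FollowerWrapper {
  def systemName: String
  def listResources(projectId: String): List[ResourceRef]
}

// Hypothetical ECOPACK wrapper: the endpoint path and response handling are
// placeholders standing in for the real REST API and JSON parsing.
final class EcopackWrapper(baseUrl: String) extends FollowerWrapper {
  private val client = HttpClient.newHttpClient()
  val systemName = "ECOPACK"

  def listResources(projectId: String): List[ResourceRef] = {
    val req  = HttpRequest
      .newBuilder(URI.create(s"$baseUrl/projects/$projectId/resources"))
      .GET().build()
    val body = client.send(req, HttpResponse.BodyHandlers.ofString()).body()
    // A real wrapper would parse the JSON body; here we only wrap the raw
    // payload so the leader system can index a reference to it.
    List(ResourceRef(projectId, body.take(40), systemName))
  }
}
```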
3.3 Use case
To illustrate our proposal, we consider the scenario of the challenges of the naval defense of France. More precisely, we focus on a question proposed by the regional association AR24 Picardie of the Institute for Higher National Defense Studies (IHEDN): how to strengthen France's defense strategy? To answer this question, brainstorming sessions (using ECOPACK) were conducted from a database on the relationships between, on the one hand, companies, suppliers, buyers and financiers of military naval equipment and, on the other hand, armed conflicts and diplomatic meetings. Thanks to the functionalities provided by ECOPACK (e.g., the possibility to represent data as a graph), participants were able to make observations that allowed them to formulate hypotheses to improve the defense strategy. However, the information available in ECOPACK was not sufficient to answer the question effectively. Indeed, the defense strategy is obviously largely influenced by the global political and economic context. For example, a country that is involved in a conflict may need armaments to defend itself and consequently influence France's defense strategy. Users in a brainstorming session about defense strategy may need more information about a conflict in a client country. Some of the information concerning the political and economic context can be found through a Google search. A user may thus perform a Google search and decide to share the information that he or she judges relevant with colleagues. This information may be relevant to evaluate the current defense strategy of a country regarding the evolution of the political and economic context. Consequently, the observations and comments made in a brainstorming session may be influenced by various pieces of information coming from other systems and web tools.
It is thus essential not only to keep track of the information and knowledge generated during a brainstorming session, but also to keep track of the information coming from the other systems used by participants in the brainstorming session.
At the management level, the information flow can be optimized if all resources are available through a centralized access point that references those resources from their original information systems. In the next section, we show how all resources useful for collaborative activities can be accessed and indexed from a single point.
3.4 Interaction of MEMORAeSoIS and ECOPACK
According to Salah et al. [10], to integrate a new system into MEMORAeSoIS, it is necessary to develop a new module that collects data from the target system and generates the outputs. Figure 2 shows the interaction between MEMORAeSoIS and ECOPACK. The data wrapper is responsible for collecting data from external systems, while the server/observer ensures the interaction with the leader system, the MEMORAe platform. Also, as mentioned in Section 3.2, to be able to interact with an external system from MEMORAeSoIS, the target system must provide an API for requesting its services. The initial version of ECOPACK did not provide an API allowing an external system to interact with it. The first step to allow MEMORAeSoIS to interact with ECOPACK, and thus to capitalize heterogeneous resources, was to develop such an API. The second step consists of developing the data wrapper in MEMORAeSoIS to collect data from ECOPACK.
Note that, once the data wrapper is developed, all the functionalities to manage resources are already available in MEMORAeSoIS. Consequently, adding a new system to MEMORAeSoIS does not require significant development effort. The ECOPACK platform was developed based on several relevant technologies such as Akka (https://akka.io/) and the Restlet framework (https://restlet.com/). Akka is a toolkit for building highly concurrent, distributed and resilient message-driven applications for Java and Scala. Restlet is an open source RESTful web API framework for the Java platform. It supports several data formats, Internet transports and service description standards such as HTTP, XML and JSON. So, to reach our goal, we developed a Restlet API to allow technical interoperability between ECOPACK and MEMORAeSoIS. This API gives access to the resources stored in ECOPACK. Resources in ECOPACK are organized in projects and project versions. A resource can be a brainstorming report, a comment, a note, a graph, etc. Note that in the MEMORAeSoIS system, resources of its component systems do not reside in the SoIS, but rather are referenced from the Information Systems comprising it.
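As an illustration of the kind of endpoint such an API can expose, the sketch below shows a minimal Restlet server resource returning the reports of one ECOPACK project as JSON. The URI template, the port and the fixed payload are hypothetical; they do not reproduce the actual ECOPACK API.

```java
import org.restlet.Application;
import org.restlet.Component;
import org.restlet.Restlet;
import org.restlet.data.Protocol;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;
import org.restlet.routing.Router;

/** Hypothetical endpoint exposing the reports of one ECOPACK project. */
public class ProjectReportsResource extends ServerResource {
    @Get("json")
    public String listReports() {
        // The project identifier is taken from the URI template below.
        String projectId = (String) getRequestAttributes().get("projectId");
        // A real wrapper would query ECOPACK's internal store; we return a fixed payload.
        return "{\"projectId\":\"" + projectId + "\",\"reports\":[\"sea-challenge-v1\"]}";
    }
}

/** Minimal Restlet application wiring the resource to a URI template. */
class EcopackApi extends Application {
    @Override
    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        router.attach("/projects/{projectId}/reports", ProjectReportsResource.class);
        return router;
    }
}

/** Starts an HTTP server so that the MEMORAeSoIS data wrapper can request the resources. */
class EcopackApiServer {
    public static void main(String[] args) throws Exception {
        Component component = new Component();
        component.getServers().add(Protocol.HTTP, 8182);
        component.getDefaultHost().attach("/ecopack", new EcopackApi());
        component.start();
    }
}
```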
In Figure 3, we can see that ECOPACK is accessible from MEMORAeSoIS, as well as the other systems that already existed in the SoIS: TiddlyWiki, Twitter, Google Search, Google Contacts, Microsoft OneNote and two PLM (Product Lifecycle Management) systems (Odoo ERP and ARAS Innovator).
We can also see a part of the semantic map describing the global domain of the defense strategy. The concepts of this map allow indexing resources collected from ECOPACK and other systems. In Figure 4, we see a part of the ECOPACK resources that are imported into MEMORAeSoIS, namely a set of reports concerning the “Sea Challenge” project and its versions. Figure 5 shows a set of additional resources concerning the challenges of the naval defense of France.
Indeed, in Figure 5 we can see a set of websites that address the subject of defense in France. Using the “Google Search” box (accessible in MEMORAeSoIS), users can search the web and capitalize all relevant website links. All these resources that are of interest for the topic of France's naval defense can now be indexed, and thus capitalized, by the adequate concept in the semantic map. Figure 6 shows a box to choose a concept of the semantic map (Naval Challenge) to index resources of ECOPACK (reports of the “Sea Challenge” project).
4 CONCLUSIONS
The goal of this paper was to provide a solution to capitalize resources of collaborative activities, especially in the context of brainstorming activities. We have shown that in brainstorming activities participants use several Information Systems and Web tools to gather information about the topic of the brainstorming. This raised the question of managing heterogeneous resources coming from independent and autonomous systems. To solve this issue, we opted to use the MEMORAeSoIS platform. Thanks to the flexibility of the architecture of this platform, we easily integrated ECOPACK into the SoIS by developing an API to interact with ECOPACK and a data wrapper allowing MEMORAeSoIS to collect resources from ECOPACK. The most important value of the collaborative SoIS lies in its ability to trace the results of users' activities, that is, all resources generated while using ECOPACK and all external resources that are useful to accomplish an activity in ECOPACK. In addition, thanks to the semantic map of MEMORAeSoIS, all users have a common understanding of the resources available in the SoIS.
The next step is to expand our work by developing software allowing the export of resources from MEMORAeSoIS to ECOPACK in order to reuse them. Furthermore, MEMORAeSoIS offers the possibility to access several useful systems and thus to collect various resources from different systems that may be useful for a brainstorming session. It would thus be interesting to provide ECOPACK users with the possibility to get these resources from MEMORAeSoIS. This may be done by exporting resources from MEMORAeSoIS to ECOPACK and making them available through an item in ECOPACK (e.g., a link to a website). We also plan to develop a recommendation system allowing MEMORAeSoIS users to discover useful resources indexed by the concepts of other semantic maps. This can be done using ontology matching algorithms. Indeed, ontology matching algorithms help to discover equivalent concepts between two or more ontologies. This kind of algorithm may be useful to discover equivalent concepts between two or more semantic maps and thus to recommend useful resources to users.
ACKNOWLEDGMENTS
This project was carried out both under the ECOPACK project (funded by the ANR-ASTRID program) and in the framework of the Labex MS2T, which was funded by the French Government through the program “Investments for the future” managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02).
REFERENCES
Source Code Implied Language Structure Abstraction through Backward Taint Analysis
Zihao Wang\(^1\), Pei Wang\(^2\), Qinkun Bao\(^2\), and Dinghao Wu\(^1\)
\(^1\)Pennsylvania State University, University Park, USA
\(^2\)Individual Researcher, USA
zihao@psu.edu, uraj, qinkunf@apache.org, dinghao@psu.edu
Keywords: Program Analysis, Context-free Grammar, Static Analysis, Fuzzing, Data-flow Analysis, Taint Analysis
Abstract: This paper presents a novel approach for inferring the language implied by a program’s source code, without requiring the use of explicit grammars or input/output corpora. Our technique is based on backward taint analysis, which tracks the flow of data in a program from certain sink functions back to the source functions. By analyzing the data flow of programs that generate structured output, such as compilers and formatters, we can infer the syntax and structure of the language being expressed in the code. Our approach is particularly effective for domain-specific languages, where the language implied by the code is often unique to a particular problem domain and may not be expressible by a standard context-free grammar. To test the effectiveness of our technique, we applied it to libxml2. Our experiments show that our approach can accurately infer the implied language of some complex programs. Using our inferred language models, we can generate high-quality corpora for testing and validation. Our approach offers a new way to understand and reason about the language implied by source code, and has potential applications in software testing, reverse engineering, and program comprehension.
1 Introduction
Modern software can be thought of as an abstract machine that operates on a set of symbols, similar to how a Turing machine operates on its tape. In computer science, the set of symbols used by a program is known as a language, which can be represented by a formal grammar. A grammar is a set of rules that define the structure of a language and how symbols can be combined to form valid sentences.
Source code implied language refers to the language that is implicitly defined by the source code of a program or system. This language includes, but is not limited to, program input and output formats, domain-specific languages (DSLs), and communication protocols between different components or systems. In contrast to a general-purpose language, source code implied language is often tailored to a specific domain or problem space and is often characterized by a specific syntax, grammar, and terminals.
Although source code implied languages are critical in many ways, they are not always as available as the programs themselves. In some cases, their grammars are available, but in a form that is not friendly to computers; for example, the Adobe PDF specification is defined in a 700-page document (International Organization for Standardization (ISO), 2008; Adobe Systems Incorporated, 2000) written in human language rather than as a well-defined grammar. In some other cases, there are no formal specifications for the input language at all, as with the LLVM Intermediate Representation.
In this paper, we propose a static analysis algorithm for extracting implicit grammars from program source code. Our key insight is that object-oriented programming languages typically define an implicit language with classes. These classes may contain encoders, such as a parser, or decoders, such as a printer, which are often written in easily comprehensible patterns. By analyzing their implementation, it is possible to extract a grammar that represents a subset of the language that the program operates on.
Our static analysis algorithm is based on static data-flow analysis. To extract accurate and high-quality grammars, we perform context- and path-sensitive taint analysis. We made several carefully calibrated trade-offs in the analysis to improve precision while maintaining scalability when analyzing common code patterns found in the target classes.
We implement the above grammar extraction algorithm in a prototype and apply it to programs that can be compiled into LLVM IR to infer the target grammars automatically. To evaluate the precision of the abstracted grammars of pretty printers, we collect several small printer programs and libxml2 as test cases, and obtain the expected results.
Our research makes the following contributions:
• We propose a novel algorithm for implicit grammar abstraction that only requires the source code of the program generating the language. To the best of our knowledge, our work is the first to achieve implicit grammar extraction without requiring access to a program corpus or example program inputs. Our approach is capable of abstracting both lexical and syntactical structures, as well as identifiers.
• We introduce a static analysis method that infers the possible values of string variables at each program point, which is crucial for generating valid inputs to a program.
• We present a prototype, which implements our algorithm and can produce readable grammars and valid corpora for languages such as XML.
Overall, our contributions provide an effective and efficient way to extract program grammars and generate valid inputs, which can significantly enhance the testing and security analysis of programs.
2 Background
In recent years, there has been a growing interest in developing machine-understandable grammars that can be processed and analyzed by computer programs (Harkous et al. (2020); Ammons et al. (2002); Gopinath et al. (2020); Lin and Zhang (2008)). These grammars are designed to be easily interpreted by software and can be used to automate a wide range of tasks, such as natural language processing, code generation, and data validation. By using machine-understandable grammars, developers can increase the efficiency and accuracy of their software and improve its ability to interact with other systems. In this paper, we explore the use of machine-understandable grammars for program output abstracting through backwards taint analysis, a novel technique for program output analysis that combines symbolic execution and dynamic taint analysis.
Machine-understandable grammars have numerous applications in software engineering, including efficient test generation. Over the decades, this technology has been extensively researched (Maurer (1990); Sirer and Bershad (1999); Copit and Lian (2005)). Take input grammar as an example. Fuzz testing and random testing are some automatic software testing techniques that generate random, invalid, or unbiased inputs to a program to reach abnormal target states. Utilizing the structure of inputs is crucial to improving the success rate and efficiency of these technologies (Chen et al. (2021); Zhong et al. (2020); Gopinath et al. (2020); Wu et al. (2019); Toffola et al. (2017); Wang et al. (2017); Yang et al. (2011)). Without knowledge of the input structure, testing methods often remain limited to the input-checking or parsing stage, failing to achieve improved test coverage. Recent advancements in generating input corpus for fuzzing have leveraged input grammars and grammar-aware mutation algorithms, resulting in significant improvements in fuzzing efficiency and coverage for specific targets.
2.1 Mining Input Specifications
Input grammar, as a form of code implied language, provides a description of the syntax and structure of the inputs expected by a program. It serves as a fundamental type of source code implied language, making the understanding and learning of a program’s input language an active research area. The related work mentioned in this section addresses related challenges and offers valuable insights that contribute to our approach.
Ammons et al. presents a machine learning approach called specification mining for automating the process of discovering formal specifications of protocols that code must follow when interacting with an application program interface or abstract data type (Ammons et al. (2002)). The approach infers a specification by observing program execution and summarizing frequent interaction patterns as state machines that capture both temporal and data dependences.
Lin et al. present the first work on extracting input grammars from programs with dynamic analysis approaches (Lin and Zhang (2008); Lin et al. (2010)). They classify most programs' input grammars into two categories, top-down and bottom-up grammars, and perform runtime analyses for each type. They perform the top-down grammar analysis based on dynamic program control dependence and the bottom-up grammar analysis based on the parsing stack trace. By doing this, Lin et al.'s work can handle some large-scale programs with white-box access. However, their work requires massive manual analyses and modifications to the targets.
Höschele and Zeller use dynamic tainting to trace the data flow of sample inputs and present their prototype AUTOGRAM (Höschele and Zeller (2016)).
AUTOGRAM treats input elements that follow the same data flow as one syntactic entity. By doing this, Höschele and Zeller's method can identify functions related to input processing and further infer all the possible syntactic entities handled by the related functions. AUTOGRAM can map each grammar component to its corresponding variables in the target program, providing intuitive insights for subsequent reverse engineering. Nevertheless, as the authors mention, the grammar AUTOGRAM learns is highly dependent on the given sample space. When the grammar grows, it is very likely to miss some corner cases.
Furthermore, Wu et al. (2019) present REINAM, a reinforcement-learning approach to synthesize input grammar. Their two-phase approach includes: first using dynamic symbolic execution and satisfiability modulo theory (SMT) solver to obtain the program input grammar (Tillmann and de Halleux 2008; Xie et al. 2009), and second generating a probabilistic context-free grammar (PCFG) with the help of GLADE.
2.2 Grammar-Assisted Fuzzing
When fuzzing programs that take structured inputs, coverage-based fuzzers often use grammar-sensitive approaches to increase the coverage of the inputs they test. These approaches can be classified into three categories: grammar-based mutation (Holler et al. 2012; Veggalam et al. 2016; Guo 2017; Groß 2018; Zhong et al. 2020; Chen et al. 2021; Wang et al. 2019), grammar-based generation (Ruderman 2007; Valotta 2012; Aschermann et al. 2019; Yang et al. 2011; Godefroid et al. 2017), and program input synthesis without knowledge of the input structure (Godefroid et al. 2008; Wang et al. 2017).
3 Method
This section introduces our grammar-extracting algorithm. We will first present an overview of our approach and then describe some important details of the algorithm.
3.1 Overview
We first identify the source and sink functions of our analysis target using human knowledge. Then, we perform forward taint analysis based on the identified functions to extract all the functions that are transitively called by the source functions and that call the sink functions. This provides us with the necessary data flow for abstracting the output grammar. We represent the source functions and tainted functions as nonterminal tokens, and primitive variables as terminals, in order to present the EBNF grammar. To improve the presentation of the grammar, we use a strongly regular syntax to approximate the abstracted grammar. Finally, we obtain an EBNF grammar of the output context represented by a series of production rules, which use regular expressions for lists.
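As a small constructed illustration of this mapping (the program and grammar below are our own example, written in Java only for readability, and are not one of the evaluation subjects), every print routine becomes a nonterminal and every printed literal becomes a terminal:

```java
/** Toy printer: each print method corresponds to a nonterminal of the implied grammar,
 *  each printed string literal to a terminal. A strongly regular EBNF approximation is:
 *    printPair   ::= "(" printNumber "," printNumber ")"
 *    printNumber ::= DIGITS
 */
public class PairPrinter {
    static void printNumber(int n) {
        System.out.print(n);
    }

    static void printPair(int a, int b) {
        System.out.print("(");
        printNumber(a);
        System.out.print(",");
        printNumber(b);
        System.out.print(")");
    }

    public static void main(String[] args) {
        printPair(3, 4); // prints "(3,4)", a sentence of the implied output language
    }
}
```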
3.2 Inter-Procedural Analysis
Once the source functions and the sink functions are determined, we perform a backward taint analysis. This analysis begins with the sink functions, which are a set of high-level printing or assembly functions located at the end of the output data flow. We then analyze the data flow to propagate taint backward to the determined source functions, as shown in Algorithm 1.
To better explain the program, we use an Interprocedural Control-Flow Graph (ICFG) instead of the Control-Flow Graph (CFG). Unlike the CFG, the ICFG contains two additional edges: call and return edges. The call edge represents the control flow from the caller to the callee, while the return edge indicates the reverse flow. By traversing the ICFG, we analyze the calling and called relationship for each function that is called. To ensure a context-sensitive data flow analysis, we use a function memory map to keep track of the data-flow context and status. For each function and its corresponding state, we create a node in our data flow. We limit our analysis to data flows that are feasible for both sources and sinks. For each feasible edge between the source and sink functions, we calculate, update, and propagate the data-flow summary to all callers.
We record the input state of each basic block within the given function and calculate its outgoing state based on the following three different situations:
If the basic block contains a call instruction, we cache the context switching behaviors to accelerate the calculation in the analyzeCallInContext function. This caching mechanism helps improve the performance of the analysis when encountering call instructions within the program. In the getSummary function, we reuse the existing analysis results if the summary is still valid and the new input state matches the old one. However, if the new input state differs, we reanalyze this call with the updated summary input state and caller information.
Algorithm 1: Inter-procedural analysis.

Function analysisBasicBlock(f, c):
    /* f: the analysis target function */
    /* c: the input state */
    forall BasicBlock b ∈ f do
        if b contains an outdated call then
            c.update()
            analysisFunction(b.Callee, c)
        else
            if b contains a Phi node then
                forall incoming BasicBlock ib do
                    c.update(ib.Context)
                end
            else if f is a source then
                c ← taint(b)
            end
        end
    end
end

/* Entry of the inter-procedural analysis: analyze the whole program */
Function analysisInterProcedural(a):
    /* a: the analysis target program */
    Context c ← ∅
    forall Function f ∈ a do
        analysisFunction(f, c)
    end
end
If the basic block contains a Phi node, which is a special instruction in the LLVM framework used for merging incoming values from different predecessor basic blocks, we perform a Phi resolution. This involves calculating the output state individually for each possible incoming value with the given input state. We then assign the computed output states to the corresponding following basic blocks in the meetOverPHI function. This resolution step ensures that the appropriate output state is propagated based on the different incoming values.
If the basic block is within a source function, the transfer function determines if the basic block belongs to a source function and initiates the propagation process. The transfer function plays a crucial role in propagating the data flow within the source function, ensuring that the relevant data and state information are properly analyzed and propagated. By considering these different situations and applying the respective functions, we are able to accurately track the data flow and determine the outgoing state for each basic block within the given function.
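A rough sketch of the summary reuse performed by the getSummary step described above is shown below. The types and the string-based state abstraction are placeholders for the actual analysis state and are not taken from the PRETTYGRAMMAR implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.function.BiFunction;

/** Context-sensitive function summaries keyed by (function, input state). */
class SummaryCache {
    /** Identifies one analysis of a function under a particular caller context. */
    static final class SummaryKey {
        final String function;
        final String inputState; // serialized abstraction of the caller's data-flow state

        SummaryKey(String function, String inputState) {
            this.function = function;
            this.inputState = inputState;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof SummaryKey
                    && ((SummaryKey) o).function.equals(function)
                    && ((SummaryKey) o).inputState.equals(inputState);
        }

        @Override
        public int hashCode() {
            return Objects.hash(function, inputState);
        }
    }

    private final Map<SummaryKey, String> summaries = new HashMap<>();

    /** Reuses a cached summary when the input state matches; otherwise reanalyzes the call. */
    String getSummary(String function, String inputState,
                      BiFunction<String, String, String> reanalyze) {
        return summaries.computeIfAbsent(new SummaryKey(function, inputState),
                key -> reanalyze.apply(key.function, key.inputState));
    }
}
```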
3.3 Intra-Procedural Analysis
Based on the context-sensitive inter-procedural analysis, an intra-procedural analysis described in Algorithm 2 is performed for each derived function.
Within the basic blocks calling sinks, we extract the output context with a type-specific sink extractor and build it into a production rule. If only one value is extracted from the sink, we append a terminal token to the basic block production. Otherwise, we append the conjunction of all possible values to the basic block production. For each basic block, we build a control flow graph and calculate the constraints with a checker, PathChecker.
Algorithm 2: Intra-procedural path analysis.

1  Function analysis(p, c):
2      if isBranch(p) then
3          DestMap ← ∅
4          split(p)   // Split p into (p0, ..., pi), where p0 is the default path
5          for ∀ pi, pj ∈ p, and pi.Dest = pj.Dest do
6              if pi == p0 then delete pj
7              else Disjunction(pi.c, pj.c)
8              end
9          end
10         for ∀ pi, c ∉ DestMap do
11             DestMap.add(pi, c)
12             analysis(pi, c.case)
13         end
14     end

   /* Entry of the intra-procedural analysis: analyze the whole function */
15 Function analysisFunction(f):
       /* f: the analysis target function */
16     b ← f.firstBasicBlock   // BasicBlock
17     p ← b.start             // Path
18     c ← ∅                   // Condition
19     analysis(p, c)
20 end
For branch analysis, since there may be multiple cases (including the default one) going to the same destination, the path constraints are aggregated into a
disjunction as in Algorithm 2, from line 2 to line 12. First, a destination map collects the path conditions that lead to the same destination. If any of the possible cases share the same destination, a condition disjunction is created or updated (line 7). We use a cache to store newly created path conditions; whenever a cache miss happens, we construct a fresh value. In particular, if a case shares the same destination with the default, that case is removed (line 6), because the default path condition guard can be obtained by negating all other conditions.
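The sketch below condenses this aggregation step: path conditions of cases jumping to the same destination are collected into a disjunction, and a case that coincides with the default destination is dropped, since the default guard is the negation of all other conditions. Conditions are plain strings here only for illustration; the actual analysis uses symbolic path constraints.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Aggregates branch cases by destination, as in the branch analysis of Algorithm 2. */
class BranchAggregator {
    /**
     * @param caseToDest maps each case label (including "default") to its jump destination
     * @param caseToCond maps each case label to its path condition
     * @return destination -> disjunction of the conditions leading there
     */
    static Map<String, String> aggregate(Map<String, String> caseToDest,
                                         Map<String, String> caseToCond) {
        String defaultDest = caseToDest.get("default");
        Map<String, String> destToCond = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : caseToDest.entrySet()) {
            if (!e.getKey().equals("default") && e.getValue().equals(defaultDest)) {
                continue; // covered by the default guard (negation of all other conditions)
            }
            destToCond.merge(e.getValue(), caseToCond.get(e.getKey()),
                    (a, b) -> "(" + a + ") || (" + b + ")"); // build the disjunction
        }
        return destToCond;
    }
}
```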
4 Evaluation
In this section, we evaluate our tool on real-world applications by extracting their output language grammars and generating output samples.
4.1 Experiment Setup and Subjects
For the sake of independence, consistency and reproducibility, we conduct all experiments within Docker containers running on a dedicated host machine. The configuration of the host machine and Docker containers is as follows.
- CPU: Intel Xeon E5–2690
- Memory: 378 GB
- OS: Ubuntu 18.04 LTS
- Docker Base Image: ubuntu:16.04
- Compiler: GCC 5.4.0, Clang 6.0.1
- Linker: GNU gold linker 1.11
We evaluate PRETTYGRAMMAR based on several small test cases collected from GitHub. We build these projects with clang as well as GNU gold linker and archive all the temporary files.
Table 1: Subjects revolved in evaluation.
<table>
<thead>
<tr>
<th>Target Language</th>
<th>Output Program</th>
<th>Input Program</th>
</tr>
</thead>
<tbody>
<tr>
<td>Static String</td>
<td>HelloWorld, staticXMLPrinter</td>
<td></td>
</tr>
<tr>
<td>Dynamic String</td>
<td>loopPrinter, XMLPrinterClass</td>
<td></td>
</tr>
<tr>
<td>Expression</td>
<td>printExpression</td>
<td></td>
</tr>
<tr>
<td>Algebraic Equation</td>
<td>aePrinter</td>
<td></td>
</tr>
<tr>
<td>XML</td>
<td>testWriter</td>
<td>xmllint</td>
</tr>
</tbody>
</table>
The related subjects are listed in Table 1. TestWriter is a test program that comes with the libxml2 project. Xmllint is a tool within the libxml2 project that parses XML files and outputs the result of the parsing.
4.2 Correctness of Inferred Grammar
To analyze the accuracy of the output grammars abstracted by PRETTYGRAMMAR, we collected several small programs that produce a human-understandable output language: a static string printer (HelloWorld), an algebraic equation printer (aePrinter), several XML printers (staticXMLPrinter, XMLPrinterClass), and an expression printer (printExpression).
First, we evaluate PRETTYGRAMMAR on several static printers and successfully obtain their precise output grammars. The test indicates that PRETTYGRAMMAR is able to identify all source and sink functions for the different types of terminals, as designed. The exact values of printed terminals are also abstracted correctly.
Next, we evaluate PRETTYGRAMMAR on two printers that require input and contain conditional output branches. The raw result for each program is shown in Figure 1. Figure 1a shows that PRETTYGRAMMAR is able to infer the types of terminals and to interpret production guards. Since we perform context-sensitive and path-sensitive taint analysis, PRETTYGRAMMAR abstracts three different non-terminals from the same function printer::printExpression(), each from a different set of input and output variable states. However, as we can see in Figure 1a, some deterministic terminals such as “=” and “\n” have been identified as merely possible terminals. This is a common false positive caused by the implementation of security checks in the LLVM and C++ printing methods. It can be eliminated by identifying the characteristic data-flow and control-flow pattern.
4.3 Performance on Real-World Applications
An ideal program language analysis tool should be able to assist researchers effectively, correctly, and efficiently. In this part, we evaluate PRETTYGRAMMAR from three aspects. We first generate a corpus with the algorithm described in the following part, and then feed the corpus to a program that takes valid input in the same language as the output we abstracted.
4.3.1 Effectiveness
For software testing, the coverage achieved by input samples is crucial. A high-coverage sample is much more meaningful than vast numbers of random samples, which are likely to be rejected at a very early stage of program processing. Thus, we observe the average source code coverage, in terms of lines and functions, of our 1000 generated samples, and list the results in Table 2. We refer to the code coverage of one static input as static coverage, as opposed to the coverage obtained from dynamically generated or mutated inputs during fuzz testing, which we refer to as dynamic coverage. Due to the limited scalability of gcov, the coverage analysis tool we adopt, we only count the coverage of libxml2.
As shown in Table 2, the corpus generated by PRETTYGRAMMAR achieves the best static coverage. PRETTYGRAMMAR outperforms empty and random samples on all coverage indicators. The result for random strings is almost even with the empty input, indicating that in real-world applications random input is, like empty input, very unlikely to pass the basic input checks. In contrast, the corpus generated by PRETTYGRAMMAR reaches the source code at a decent level. In terms of software testing, triggering more source code can greatly increase the chance of finding new bugs or triggering new crashes. Compared with empty and random corpora, the corpus generated by PRETTYGRAMMAR greatly improves coverage effectiveness.
On the other hand, PRETTYGRAMMAR also outperforms the valid corpus, which consists of grammar-correct XML files we collected. We consider two possible reasons: 1) PRETTYGRAMMAR generates some incorrect samples that trigger error-handling paths the valid samples would never touch; 2) after inspecting samples from both corpora, we notice that both the structural complexity and the length of the synthesized samples are apparently higher than those of the collected ones, and larger, more complex inputs are more likely to trigger more source code.
4.3.2 Correctness
To evaluate the correctness of the synthesized grammar, we performed an evaluation targeting the libxml2 XML linter xmllint. We fed the corpus we generated from libxml2 to xmllint and collected the feedback in the Table 3.
Table 3 presents the correctness evaluation results of the synthesized grammar for XML, checked by libxml2's xmllint. The table shows the fraction of the check results in three categories: semantically correct, syntactically correct, and syntactically incorrect. Among the tested corpus, only 3.22% of the samples are semantically correct, while 38.7% are syntactically correct, and 58.06% are syntactically incorrect.
5 Future Work
While our approach shows promising results, there are several directions for future work that could improve the algorithm’s effectiveness and applicability. We discuss some of these potential directions below.
5.1 Handling More Complex Languages
Our current algorithm can handle languages with both lexical and syntactical structures, as well as entity identifiers. However, it may struggle with more complex languages that include features such as nested structures or complex type systems. One direction for future work could be to extend the algorithm to handle such features.
5.2 Improving Precision of Static Analysis
Our static analysis method currently can partially infer the possible values of string variables at each program point. While this is useful for generating valid input strings, it may not capture all possible behaviors of the program. In future work, we could explore more advanced static analysis techniques to improve the precision of our inferred string values.
5.3 Applying the Algorithm to Real-World Programs
While we demonstrate the effectiveness of our approach on a set of small example programs, it remains to be seen how well it would perform on larger real-world programs. Future work could involve applying our algorithm to a wider range of programs and evaluating its effectiveness in generating high-quality input strings. Additionally, we could explore the feasibility of integrating our approach into existing software testing frameworks.
5.4 Integration with Fuzzing Techniques
Our approach provides a useful tool for generating input strings for software testing, but it does not directly address the process of actually testing the software. Future work could involve integrating our algorithm with existing fuzzing techniques to automatically generate and test input strings. This could potentially involve leveraging machine learning techniques to guide the generation of input strings towards unexplored parts of the program.
6 Conclusion
Deriving source code implied language is significant for a wide variety of applications. In this paper, we propose a static analysis that learns the implicit language from a program’s source code. Our approach performs context-sensitive and path-sensitive taint analysis within the targeted class. To maintain context-sensitivity, we assign indirect calls with a potential callee pool and propagate the context environment to every possible candidate in the pool. To maintain path sensitivity, we represent conditional branches as nodes with constraints.
We implemented a prototype called PRETTYGRAMMAR in C++, based on the LLVM framework. Our experiments demonstrate that PRETTYGRAMMAR is effective and efficient in extracting grammar structures and generating output corpora for desired program output languages, such as XML.
Furthermore, we evaluated PRETTYGRAMMAR’s output grammar using libxml2’s XML linter, xmllint, and found that a large proportion of generated samples were syntactically correct.
Overall, our proposed approach shows promise in automatically generating program output corpora and can benefit a range of applications, including software testing, reverse engineering, and vulnerability analysis.
Acknowledgements
This research was supported in part by the National Science Foundation (NSF) grant CNS-1652790.
References
ABSTRACT
DevOps is a modern software engineering paradigm that is gaining widespread adoption in industry. The goal of DevOps is to bring software changes into production with a high frequency and fast feedback cycles. This conflicts with software quality assurance activities, particularly with respect to performance. For instance, performance evaluation activities — such as load testing — require a considerable amount of time to get statistically significant results.
We conducted an industrial survey to get insights into how performance is addressed in industrial DevOps settings. In particular, we were interested in the frequency of executing performance evaluations, the tools being used, the granularity of the obtained performance data, and the use of model-based techniques. The survey responses, which come from a wide variety of participants from different industry sectors, indicate that the complexity of performance engineering approaches and tools is a barrier for widespread adoption of performance analysis in DevOps. The implication of our results is that performance analysis tools need to have a short learning curve, and should be easy to integrate into the DevOps pipeline in order to be adopted by practitioners.
1 INTRODUCTION
DevOps is a modern software engineering paradigm that aims to reduce the time between changing software and delivering these changes into production with high quality [27]. This reduction in delivery time is achieved through organizational changes that bring together development and operations teams and processes with a high degree of automation, e.g., via continuous delivery (CD) pipelines and quality gates [16].
One of the most important quality aspects of a software system is performance. The performance of a system can be described as several system properties that concern the system’s timeliness and use of resources [17]. Common performance metrics are response time, throughput, and resource utilization. Performance requirements for software systems are typically defined by setting upper and/or lower bounds for these and other metrics. In order to ensure that such performance requirements can be met, several activities are required during the development and operation of these systems [8]. A common distinction is made between model-based activities, such as prediction using performance models [11], and measurement-based activities, such as load testing [18] and monitoring [14]. Historically, performance-related activities in software development and operations were tackled independently from each other, but the newly emerging DevOps concepts require and enable a tighter integration between both activity streams [9].
In our prior work [9], we discussed how existing solutions could support this integration, as well as open research challenges in the area of performance evaluation in DevOps. Despite the widespread adoption of DevOps practices and technologies, there are still many unanswered questions about DevOps. In particular, we focus on the following questions:
(1) How often are performance evaluations of applications developed using DevOps conducted in industry?
(2) Which performance evaluation tools are being used in the CD pipeline?
(3) What is the granularity of the analyzed performance data?
(4) Are performance models used in the CD pipeline?
To answer these questions, we performed a survey on the current state-of-practice of addressing performance concerns in industrial DevOps applications. Prior empirical studies show that the adoption of DevOps correlates with positive software quality outcomes [26]. Also, in the open source community, the usage of DevOps and continuous integration (CI) leads to more frequent releases [15].
However, these studies do not present the current practice of performance engineering in DevOps applications. Our survey is the first to focus on performance engineering practices in a DevOps setting.
Our study reveals that automatic performance evaluations are usually not integrated into automatic delivery pipelines and not performed regularly. In addition, performance modeling is not applied in most companies. In this study, we observed that diagnosing performance issues is typically performed based on “human intuition” [19]: engineers investigate hypotheses about what might have gone wrong in the system using data analytics to draw a conclusion about the observed performance issue.
The remainder of this paper is structured as follows. Section 2 provides an overview of related work, focusing on surveys about DevOps practices. Section 3 presents details about our methodology, including the survey design. The main results of our survey are discussed in Section 4. Section 5 discusses the main implications of our study (which are summarized in Table 1). In Section 6, we discuss the threats to validity of our study. In Section 7, we conclude the paper.
2 RELATED WORK
Others have performed prior surveys to assess the state-of-practice of DevOps in industry. Several of these surveys were conducted by corporations that sell DevOps solutions to companies. While some surveys touched briefly upon the topic of software performance in DevOps, none of them focused on getting an in-depth overview of how performance engineering is applied in DevOps. Prior surveys on the organizational impact of applying DevOps in industry [1, 2, 12], assessed the DevOps adoption over different years [2] and the types of tools and techniques used in DevOps pipelines [3]. These surveys concluded from practitioner responses that DevOps has an increasingly large impact in industry. These prior surveys focused on the used tools, and underline how these tools are usable to optimize certain businesses and technology goals, such as improving software performance. In particular, software performance is discussed as one of the main drivers for using DevOps [3, 7].
Other drivers of the DevOps movement are: “more efficient time-to-production for new software; a better collaboration between IT and other lines of business; and more consistent and higher quality software deployments” [4]. Overall, the surveys conclude that the DevOps trend is substantial and long-term. Puppet [2] collected responses from 3200 surveyed practitioners, and reported that the percentage of teams that use DevOps (compared to other IT-related teams) increased from 16 % in 2014 to 27 % in 2017. As these percentages show, DevOps can still be considered relatively new and far from being applied widely in industry, as also reported by Logz.io [1] and Erich et al. [12]. CA Technologies [3] discusses the findings from an audience of 1425 senior IT and line-of-business executives and reports on the most critical DevOps demand drivers and tools, along with DevOps benefits and the factors that are driving DevOps. It is interesting to notice that improving the quality and performance of the applications is the top driver, with 42 % of the participants agreeing on this. Tool-wise, application performance management and monitoring (APM) [14] tools are perceived as the most important tools for DevOps by 38 % of the participants, while 37 % of the participants consider performance testing tools as critical. KMS Technology [4] surveyed 200 IT practitioners who were involved in transitioning to DevOps, and reported that 51 % had a very positive impression, and 79 % had achieved their desired goals. They also reported that the most significant challenge during the transition was the limited skill set and knowledge about DevOps among in-house IT staff (28 %). The second biggest challenge was a lack of support from the executive staff (23 %), followed by an inability to agree on and/or articulate the goals of the transition (18 %).
In addition, prior surveys of practitioners targeted the industrial adoption of performance testing [6] and CI [15]. The report by TechBeacon [6] is indirectly related to DevOps because the survey assessed performance engineering practices throughout the software development life cycle, and reported that 62 % of the participants agreed that performance engineering is important for DevOps. Hilton et al. [15] studied the barriers that developers face when using CI, and reported that the complexity of CI tools and flaky tests are important barriers for effective DevOps integration.
3 METHODOLOGY
This section describes the design of the survey, the way in which it was advertised, and the profile of the participants.
3.1 Survey Design
The survey design follows the guidelines for conducting surveys in software engineering by Linaker et al. [21]. We designed our web survey to answer how industry addresses performance in DevOps processes.
Our survey contained 58 questions, divided into three parts: 1) questions about the participants’ professional information (11 questions); 2) questions about development process models and team organization (30 questions); and 3) questions about performance assessment and evaluation (17 questions).
Based on the four aspects that are specified in Section 1, we defined the target audience for the survey mainly as DevOps engineers, software architects, software developers, software operation engineers, software performance testers, and software consultants with a focus on performance engineering at software vendors and consultant companies worldwide.
We developed a set of initial hypotheses, such as on the frequency of performance evaluations, the applied tools and the acceptance
of performance models. Based on the set of hypotheses, we derived a questionnaire plan, consisting of survey goals, such as "Measure capabilities of monitoring tools" or "Measure the completeness of the continuous delivery pipeline". Each goal is composed of a set of concrete questions by which we want to answer the corresponding goal. Additionally, the survey design aims not only at describing "what" happens, but also at answering "why" it happens in order to conduct an explanatory study as opposed to just being descriptive.
In order to enable comparison, we aimed at minimizing free text questions and introduced single and multiple choice questions as well as Likert scales as often as possible to order the choices. Questions with ordered choices are less difficult to answer for participants and easier to analyze for researchers than unordered ones [10, 13, 24].
3.2 Survey Context and Advertisement
We advertised the link to the survey through industry-related mailing lists such as the SPEC (Standard Performance Evaluation Corporation) mailing list, social media, related events such as DevOpsDays, and links in online computer magazines and blogs. In addition, the request for participation in the survey was spread via the authors' network of industry contacts.
The data collection was conducted between May 2016 and March 2017. By the time this article was written, 26 full responses (all questions answered by participants) and 108 partial responses (a part of the questions answered) were gathered. The following sections of this paper are based on the 26 full responses only.
3.3 Survey Participants
The collected responses cover a wide range of education levels, processes, roles, work experiences and company sizes.
Approximately 85% of the participants have a university degree (i.e., a Bachelor’s degree (35%), a Master’s degree (25%), or a Ph.D. (25%)), while the other 15% of the participants hold a high school degree.
There is a variety of job positions represented in the sample; however, more than a half of the participants describe themselves as software developers, and less than 10% as DevOps engineer or performance engineer.
Most (56%) of the participants have 1 to 3 years of working experience in their current position, while 22% have 3 to 5 years of experience, and 22% have 5 or more years of experience.
The participants work in companies that have between 100 and 999 employees (42%), between 10 and 99 employees (31%), and between 1,000 and 9,999 employees (19%). The remaining participants work at companies that have less than 10 employees or more than 10,000 employees (8%).
Most participants apply continuous integration (54%) while continuous deployment (12%) and continuous provisioning (4%) are applied less frequently. Continuous integration is often (38%) applied in combination with agile processes, such as Scrum. Most participants (54%) use real-time data for process improvement.
4 THE MAIN RESULTS OF OUR SURVEY
In this section, we present the main results of our survey. The complete questionnaire, raw response data, and a more detailed analysis are publicly available online [5].
4.1 Performance evaluations are not regularly conducted in most companies
Approximately one third of the participants conducts performance evaluations on a regular basis (19% continuously, 8% daily, and 8% weekly). The other participants conduct performance evaluations monthly (8%), quarterly (27%), yearly (12%), less than yearly (8%), or never (12%). In addition, 50% of the participants spend less than 5% of their time, and only 20% spend more than 20% of their time on performance. 26% of the participants report that performance evaluations are assigned to dedicated persons or teams; 41% report to be in charge themselves (see Fig. 1).
4.2 Jenkins is by far the most widespread CI solution
There exists a wide variety of tools that support the continuous integration pipeline. Not surprisingly, version control systems (VCSs) are used by all surveyed practitioners. The vast majority uses Git (77%) and/or SVN (38%) as VCS. Jenkins is the most popular “end-to-end” solution for CI. A majority of 77% of the practitioners use Jenkins for continuous builds and 65% of the practitioners use Jenkins to deploy their software. Surprisingly, 50% of the practitioners use SSH as a deployment system, beating Puppet (31%) at the third place. The relatively heavy use of SSH suggests that CI solutions such as Jenkins cannot yet fulfill all wishes of practitioners, e.g., because such solutions are not capable of working with legacy code. To monitor performance, practitioners tend to rely on lower level system tools (35%), such as top, or Nagios (35%). APM tools (which are advertised as tools that support CI) are hardly used by practitioners (see Fig. 2).
4.3 Application-level monitoring is mostly done in an ad-hoc manner
Even though 70% of the participants reported to have access to monitoring data, the responses on how their systems are monitored were surprising (see Fig. 3). While monitoring system-level (and infrastructure) metrics is common, hardly any monitoring is conducted at higher levels, in particular, at the application level (e.g., using application-internal metrics). The lack of application-level monitoring is reflected by both the reported granularity of measurements
and the used tools. The granularity of monitoring was mentioned with decreasing frequency from the system level (50%), through the application level (42%) and the operation level (23%), down to the instruction level (4%). Typical system-level monitoring tools such as Nagios and Munin, or those provided by the (operating) system, were mentioned (73%). In contrast, only 15% of the participants reported that they are using a (commercial) APM tool. Three participants reported self-developed tools; using general-purpose data analytics and visualization components (e.g., logging and Graphite) to set up custom monitoring infrastructures seems to be a current trend.
4.4 Few practitioners use performance models, despite widespread interest
The results of our survey reveal that performance models are currently not used in industry and there appears to be no trend towards their adoption either (see Fig. 4). Our survey shows that 88% of the participants do not apply models for performance management, even though 18 (almost 70%) of them state that they would like to use such models. While most participants are aware of performance modeling formalisms, their knowledge seems to be shallow, since our results show that only 5 (19%) of the participants have (some) knowledge about queuing networks, i.e., the most well-known performance modeling formalism.
5 IMPLICATIONS OF OUR FINDINGS
As discussed in Section 4.1, most surveyed companies do not regularly conduct performance evaluations. In prior work, Leitner and Bezemer [20] showed that in most open source projects performance evaluations are not conducted on a regular basis either. These findings suggest that there is a mismatch between what the plethora of performance engineering research delivers, and what practitioners are really looking for. Below, we discuss the most important implications of our study for researchers.
5.1 The complexity of performance engineering approaches is a barrier for wide-spread adoption by practitioners
Software performance assurance activities are complex tasks by nature that require much knowledge of various aspects of the entire software life-cycle. As a result, performance engineering approaches, which are often highly complex, are not straightforward for practitioners to adopt and understand. For example, performance modeling is a widely leveraged technique in research that can be particularly suitable in a DevOps context. As performance tests can be conducted much faster on performance models than on real applications, performance models could work well for applications that release many times per day. Unfortunately, Section 4.4 shows that the application of performance models is rare in industry. The lack of participants’ knowledge is the most likely cause for not having a clear opinion about the underlying science of such models. Performance modeling techniques, being mostly research prototypes, often lack documentation and require expert knowledge to be leveraged, which makes their integration for non-experts tedious. Hence, the valuable outcomes of the performance models may be difficult for practitioners to interpret, digest, or even trust.
5.2 Performance engineering approaches must be lightweight
Our findings highlight the need for more lightweight performance engineering approaches, which still retain the necessary accuracy, as most practitioners do not possess in-depth knowledge about performance engineering techniques (see Section 4.4). A step towards such approaches might be automating aspects of existing approaches and hiding their associated complexity from the practitioner. The high amount of required effort upfront to construct and calibrate a performance engineering technique (e.g., performance
modeling) may be an extra barrier for industrial adoption. While academic studies show the benefits of performance models for reasoning about design decisions and trade-offs [23], industry may fear the high upfront cost.
In addition, automated and systematic performance engineering approaches, e.g., creating and updating performance models, may facilitate the adoption of such techniques in industry. While automated extraction approaches already exist [9], there is still no “one-click” solution, which would significantly reduce the entry barrier. One important step is to enable more lightweight performance engineering approaches. APM tools are one example of tools that aim to reduce the entry barrier. Unfortunately, we have not yet observed widespread adoption of such tools by practitioners (Section 4.3).
5.3 Performance engineering approaches must smoothly integrate with existing tools in the DevOps pipeline
A possible explanation for the low adoption of performance engineering practices in DevOps could be that performance engineering approaches are typically not designed with the consideration of DevOps as a general context. On the other hand, existing tools that are used in many DevOps settings, such as Puppet and Docker, do not integrate nicely with existing performance engineering processes in industry. For example, Section 4.3 shows that many practitioners still rely on low-level tools, such as SSH and system tools, to deploy their applications and monitor performance. In addition, we observed that even though many participants conduct application level monitoring, they do so without the use of specialized tools (such as APM tools).
Our recommendation for performance engineering researchers is to ensure that their tools integrate smoothly in existing DevOps pipelines. For example, we observed in the survey responses that Jenkins is by far the most popular CI tool (Section 4.2). Hence, we recommend that researchers provide plugins that allow an easy integration of their performance evaluation tools in Jenkins.
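Short of a full plugin, one lightweight form of such an integration is a gate script that a Jenkins job (or any other CI tool) invokes after a benchmark run and that fails the build on a regression. The sketch below is a generic illustration; the JSON file names, the field name, and the 10% threshold are assumptions, not part of any existing plugin.

```python
import json
import sys

def check_regression(baseline_file, current_file, max_slowdown=0.10):
    """Return False (and thus fail the CI step) if the mean response time
    regressed by more than `max_slowdown` relative to the stored baseline."""
    with open(baseline_file) as f:
        baseline = json.load(f)["mean_response_time_ms"]
    with open(current_file) as f:
        current = json.load(f)["mean_response_time_ms"]
    slowdown = (current - baseline) / baseline
    print(f"baseline={baseline} ms, current={current} ms, slowdown={slowdown:.1%}")
    return slowdown <= max_slowdown

if __name__ == "__main__":
    # Hypothetical result files produced by an earlier benchmark step.
    ok = check_regression("perf-baseline.json", "perf-current.json")
    sys.exit(0 if ok else 1)
```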
6 THREATS TO VALIDITY
In this section, we discuss the threats to validity of our study.
Internal validity. Threats to internal validity relate to the participant bias and errors. A first internal validity threat concerns the possible selection bias for survey participants. To avoid such bias, we advertised the survey in a wide variety of channels (see Section 3.2). However, some of these channels (e.g., the SPEC mailing list) may target a specific audience. Hence, the results of our survey may be biased. In addition, our survey targeted industrial projects, which are mostly closed-source. Hence, our findings do not necessarily extend to open source projects. Future studies are necessary to further explore how performance is addressed in DevOps in other companies and in open source projects.
Construct validity. A threat to the construct validity of this study is that our survey consisted mostly of closed-ended questions. As a result, the richness of the responses may be affected. However, we felt that the advantages of closed-ended questions outweighed the disadvantages: closed-ended questions are easier to answer and analyze [10, 13, 22, 24]. Hence, we focused on closed-ended questions.
7 CONCLUSION
In this paper, we highlight the results of an independent survey that focused on performance engineering practices in DevOps. We found that two thirds of the participants do not conduct performance evaluations on a regular basis, and among the ones that conduct performance evaluations, 50% of the participants spend less than 5% of their time on them. For what concerns the applied practices in DevOps, most participants perform continuous integration, while continuous deployment and continuous provisioning are seldom implemented. Tool-wise, Jenkins is the most used end-to-end tool for implementing DevOps practices. We also found that the use of performance models by practitioners is very low.
One explanation for the low adoption of performance engineering practices in DevOps could be that the DevOps movement is still in its infancy, and developers are still getting used to the opportunities that this movement offers in terms of automation of performance engineering processes.
Our survey shows that even though the adoption of DevOps is relatively widespread in industry, performance engineering practices are lagging behind. Future research should focus on assisting software developers and performance engineers to convert their existing performance engineering practices into the DevOps pipeline.
ACKNOWLEDGEMENTS
This research was conducted by the SPEC RG DevOps Performance Working Group.3 We would like to thank all survey participants for their responses. The authors have benefited from discussions with various colleagues during community events such as the Dagstuhl seminar on “Software Performance Engineering in the DevOps World” [25].
This work is partly sponsored by the German Research Foundation (DFG) in the Priority Programme “DFG-SPP 1593: Design For Future—Managed Software Evolution” (HO 5721/1-1 and KO 3445/15-1), the German Federal Ministry of Education and Research (grant no. 01IS17010, ContinuITy), and by the Swiss National Science Foundation project (178653).
REFERENCES
3 https://research.spec.org/devopswg.
Diagnosing Delivery Problems in The White House Information Distribution System
Mark Nahabedian & Howard Shrobe
MIT Artificial Intelligence Laboratory
Abstract:
As part of a collaboration with the White House Office of Media Affairs, members of the MIT Artificial Intelligence Laboratory designed a system, called COMLINK, which distributes a daily stream of documents released by the Office of Media Affairs. Approximately 4000 direct subscribers receive information from this service but more than 100,000 people receive the information through redistribution channels. The information is distributed via Email and the World Wide Web. In such a large scale distribution scheme, there is a constant problem of subscriptions becoming invalid because the user’s Email account has terminated. This causes a backwash of hundreds of “bounced mail” messages per day which must be processed by the operators of the COMLINK system. To manage this annoying but necessary task, an expert system named BMES was developed to diagnose the failures of information delivery.
Background
In January 1993, the new Clinton administration committed itself to the use of electronic media such as Email (and later the World Wide Web) for making government information widely available to the public. A collaborative effort between the White House Office of Media Affairs, the MIT Artificial Intelligence Laboratory and others quickly created a workable framework for wide-scale distribution of a stream of daily documents originating in the Executive Office of the President. The document stream includes daily press briefings, speeches by the President and other officials, backgrounders, proclamations, etc. In addition, the stream of released information includes special documents such as the National Performance Review’s reports on reinventing government, the proposed health care reform legislation, the yearly budgets, etc.
The Intelligent Information Infrastructure Project at the MIT Artificial Intelligence Laboratory created an information distribution server which functions as the focal point of the distribution chain. Documents are released from the Executive Office of the President through this system; they are sent from this system to a variety of archiving and retrieval systems around the country, to most on-line services (e.g. Compuserve, America Online), to about 4000 direct subscribers to the MIT server, and to a variety of other servers which further redistribute the documents. A survey of people connected to this distribution chain estimated that more than 100,000 people were receiving information through this medium.
Documents released through this service are coded with descriptive terms taken from two taxonomies: the first taxonomy categorizes the type of document (e.g. Press Release vs. Speech vs. Press Conference); the second taxonomy concerns content (e.g. Foreign Affairs, Domestic Affairs, Economy, Taxes). Subscribers to the service specify a personal profile consisting of combinations of the descriptive terms which characterize their interests; it is the server’s job to guarantee that subscribers receive exactly those documents which match their profiles in a timely manner.
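As a rough sketch of this matching step (COMLINK itself is implemented in Lisp, and its profile language allows richer combinations than shown here), a profile can be pictured as a set of descriptive terms that must all appear on a document:

```python
# Hypothetical document and subscriber profiles; the term names are illustrative only.
document = {"type": "Press Release", "topics": {"Economy", "Taxes"}}

subscribers = {
    "alice@example.org": {"Economy"},           # wants anything about the economy
    "bob@example.org":   {"Foreign Affairs"},   # not interested in this document
    "carol@example.org": {"Economy", "Taxes"},  # wants both terms
}

def matches(profile_terms, doc):
    """A profile matches when every requested term appears on the document."""
    doc_terms = doc["topics"] | {doc["type"]}
    return profile_terms <= doc_terms

recipients = [addr for addr, terms in subscribers.items() if matches(terms, document)]
print(recipients)  # ['alice@example.org', 'carol@example.org']
```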
Users establish a subscription and modify their profiles by filling out electronic forms (using either Email or the World Wide Web) and submitting them to the server. The ease with which users can manage their profiles is an important measure of the quality of service delivered.
The Problem
The environment just described is open, large scale, and anarchic. The system services thousands of users at hundreds of sites in dozens of countries. Users may establish, modify and terminate subscriptions at any time. User Email addresses registered with the server may become invalid at any time; occasionally users cancel their subscriptions before this happens, but this is comparatively rare. Also, configuration problems at subscribers' sites make their Email addresses temporarily unreachable even though the addresses are valid.
In either of these cases, "bounced mail" messages are sent to the MIT server informing it of the inability to deliver a message to the invalid Email address. Most Email systems do not consolidate these bounced mail messages; if you send two messages to an invalid Email address, you receive back two bounced mail messages. The White House information stream typically includes as many as a dozen documents a day; with a subscription base of 4000 direct subscribers this leads to a rather large volume of bounced mail traffic each day (more than 100 messages). The failure to handle these messages and to update the subscription database accordingly leads to a perception by the administrators of the receiving sites that they are being "spammed."1 For the White House, it is unacceptable to ignore the bounced mail traffic. A second class of problem arises when a user with a valid Email address attempts to terminate or modify a subscription without success; in this case, the perception is that the White House is spamming the subscriber personally, an even more unacceptable situation.
On the surface, it would seem that this problem is amenable to simple automation. However, the open, anarchic character of the Internet makes the problem quite complex: there are dozens of different mail servers, each with a unique "bounced mail" message format. In addition to the variety of Email servers speaking the Internet's native SMTP protocol [RFC821] there are also a large number of other protocol domains bridged to the Internet. These include UUCP, Bitnet, X.400 and a large number of proprietary Email systems (e.g. CC:mail, Microsoft Mail, etc.); bounced mail messages are often reformatted as they cross the bridge between protocol domains, sometimes losing information (and sometimes preserving information which is useless, such as one which directs the recipient to press the F1 key for more information). Within these other mail domains, the format of a mail address might be different from that used in the Internet; bounced mail messages from these domains often include their foreign format email address, rather than the Internet format address in our database.
A second set of complications arises from the variability of user's Email addresses. Many people have several Email addresses some of which are forwarded to another. Bounced mail messages in such cases often refer to the "forwarded to address" which isn’t in our subscription database. Furthermore, people often subscribe using one address, switch to a second one as their primary address (forwarding the first one to the new address) and then more or less forget about the first address; attempts to modify the subscription using the new, primary address are then unsuccessful, because the system is unaware of the new address. Similarly, if the new address becomes invalid, then a bounced mail message will be sent to the server referring to the new address, which is unknown to the server.
In some mail systems (e.g. UNIX) users may direct their mail streams to shell scripts or other programs for processing. "Vacation programs" are a common example of this, they send back to the sender a message saying that the recipient is away and unlikely to respond soon. This is a courtesy when sent in response to a personal correspondence but when sent back to a bulk distributor like the White House server it shows up as part of the bounced mail stream. In addition, nothing prevents users from writing new mail handling programs, including incorrect ones; when such programs fail, the sender of the message (as opposed to the author of the buggy program) is usually sent a bounced mail message (in principle the postmaster at the receiving site should be sent this message, but principles and reality don't always correspond in this world).
A final complication, shown in Figure 1, arises because of the presence of redistributors. Redistributors are people or programs which receive the original message stream and then relay it to a set of subscribers known to the redistributor but not to the primary White House distribution server. Virtually any subscriber may independently decide to act as a redistributor of the document stream (for example, by establishing a mailing list). If an Email address on a redistributor's list becomes invalid, the redistributor should be notified; however, often the original source of the message (us) is notified instead. To get the behavior we
---
1 Spamming: A colloquial term, now common in discussions about the Internet, which refers to the practice of filling up somebody’s electronic mailbox with unwanted material, often advertisements, complaints or flames. Origin unknown.
Figure 1: Redistributors Complicate Delivery Notification (diagram; its labels note that bounced mail often takes an incorrect route back to the White House server, whereas the redistributor should reformat messages so that delivery problems return to it)
Don't worry, be happy: In this approach, bounced mail messages are ignored. The sender builds up a rather large file of bounced mail messages which is periodically deleted. The destination sites receive many messages which are bounced, but this happens automatically. All told, a lot of resources are wasted, but nobody really cares because it’s largely invisible. To be fair, most maintainers do from time to time examine a sampling of the bounced mail traffic and attempt to address the problems.
Big bag of tools to aid the administrator: A number of ad hoc tools are built to aid the system administrator in making sense of the bounced mail traffic [RFC1211].
These help the conscientious list administrator to solve difficult problems, but much of the work remains manual.
Given the high visibility of the White House distribution system and its role as an early experiment in using the Internet to improve government services, neither of these approaches was acceptable. Instead we decided to implement an expert system to aid in the handling of bounced mail and to help in managing other problems such as a user's inability to terminate or modify a subscription.
Structure of the System
The Bounced Mail Expert System (BMES) is a component of a larger system, called COMLINK, which is a substrate for building information distribution and group collaboration systems using Email, the World Wide Web and other Internet based transport protocols. At the core of COMLINK is an object oriented database which includes the following information:
**Subscribers:** Email address, personal name and subscriptions, date subscription started and date (if any) subscription turned off, whether this user is a redistributor.
**Network Hosts:** Subscribers at this host, upward and downward links in the domain name hierarchy, mail server type.
**Documents:** Descriptive terms, release dates, subject, etc.
**Queued Tasks:** Time to execute the task, task type and arguments.
BMES draws upon this information to help diagnose delivery failures.
BMES is a rule based diagnostic system driven by a file of bounced mail messages. Each message is a symptom of a failure in the delivery system. The user of BMES is the “postmaster” maintaining the White House COMLINK system. BMES’s task is to discover, if possible, the reason why a mail message was bounced and if diagnosis is not possible to present meaningful information to the user and help in gathering more information. If diagnosis is successful, then the system rectifies the problem, usually by suspending a user’s subscription.
For each message processed the system follows a standard pattern of processing:
- **Classification** of the mailer which sent the message
- **Abstraction** of the message to hide the syntactic differences between bounced mail messages.
- **Diagnosis** of the cause of the delivery failure, including:
- Heuristic generation of hypotheses
- Interaction with administrators at remote sites.
The first task is **Classification** during which BMES matches features of the message against required features in the taxonomy of mailer types. In practice, the classification is done by a rather ad hoc set of rules which search for specific features in the headers and the first part of the body of the message. These features include characteristic substrings within particular headers or in specific locations within the body (usually the first several lines) of the message. These rules were determined based on the authors’ observations of the bounce mail messages.
The system currently distinguishes 23 different types of mailers; these need not necessarily correspond to distinct pieces of mailer software, rather they correspond to the variety of distinct formats of bounced mail messages which we’ve observed. Some mailers have a rather broad range of configurability including the format of the bounced mail message to generate. We have no special knowledge of how the remote sites are being managed and so if two distinct hosts generate bounced mail messages which look different, we treat these as having been generated by distinct mailers even if this isn’t necessarily the case. New mailer types pop up occasionally, but this now happens rarely.
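In spirit, the classification rules amount to a table of characteristic substrings per mailer type that is checked against the headers and the first lines of the body. The Python sketch below is an analogy only; the patterns are invented stand-ins, and the real BMES rules are written in Lisp/Joshua.

```python
# Hypothetical feature patterns; the real rule set distinguishes 23 mailer types.
MAILER_SIGNATURES = {
    "sendmail-style": ["----- Transcript of session follows -----"],
    "mmdf":           ["Your message could not be delivered"],
    "mime-dsn":       ["Content-Type: multipart/report"],
}

def classify_mailer(headers: str, body_head: str) -> str:
    """Return the first mailer type whose characteristic substrings
    all appear in the headers or the first part of the body."""
    text = headers + "\n" + body_head
    for mailer, needles in MAILER_SIGNATURES.items():
        if all(needle in text for needle in needles):
            return mailer
    return "unknown-mailer"
```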
The second stage of processing is to **Abstract** the message, hiding the syntactic variability between the different formats of bounced mail messages but preserving their semantic commonality. For example, bounced mail messages typically contain a “transcript” which includes email addresses to which it was impossible to make a delivery, and an indicator of the cause of delivery failure. Similarly, most bounced mail messages contain a copy of the original message that couldn’t be delivered. The original message includes a set of “received” headers [RFC822], each of which corresponds to a mail server in the chain of delivery; the header identifies the host which handled the message, the time of handing, and in some cases the user to whom the message was intended to be delivered. (Note that this is different than the destination in the “to” header [RFC822] of the message, which is typically a generic address such as “Clinton-distribution”).
Abstraction is effected using the object oriented programming techniques of CLOS [CLOS]. Once the classification stage has identified the mailer type, BMES constructs a CLOS object whose class corresponds to the type of the mailer. This object mediates the abstraction phase. We established a class hierarchy corresponding to the mailer types and an object-oriented protocol\(^2\) that all mail messages must obey; the protocol consists of about a dozen methods. Each method in the protocol reflects an aspect of the common semantic content that any bounced mail message must contain. There is one method in the protocol which finds the transcript in the bounce message and a second one which maps over its failure descriptions, calling an action routine with the email address and a canonicalized version of the failure code. There are also protocol methods to locate the message text and then to map over the “received” headers [RFC822] contained in it. We use the class hierarchy to capture commonalities of message structuring. For example, the location within the bounced mail message and the encoding of the transcript and original message are idiosyncratic to each mailer, however several different mailers share the idea of partitioning the message body using the MIME standards [RFC1341] for structuring mail messages; however they may differ as to what fields they include. Therefore different classes implement the protocol methods differently, but where there is commonality this is captured by CLOS inheritance. All mailers which use MIME encoding, for example, are represented as subclasses of the common MimeStructured message class.
---
\(^2\) Here we use the term “protocol” in the same sense as in the “Meta-Object Protocol” [MOP] or the Joshua Protocol of Inference [Joshua], not in the sense of an Internet protocol such as SMTP [RFC822]. Fortunately, the object-model used here doesn’t use the “message passing” metaphor or we would also have confusion between mail messages and messages being sent to objects.
The power of this approach is that it abstracts away the syntactic variability exhibited by the variety of bounced mail message formats, while highlighting their semantic commonality. Higher levels of the system can expect any mail message to contain standardized information and to behave in standard ways, without having to be concerned with the underlying syntactic variability.
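In more familiar object-oriented terms, the abstraction layer behaves like an abstract base class whose methods expose the common semantic content, with one subclass per bounce format and shared behavior captured through inheritance. The sketch below is only an analogy to the CLOS protocol described above, with invented method names.

```python
from abc import ABC, abstractmethod

class BouncedMessage(ABC):
    """Common protocol every bounce format must implement (names are illustrative)."""

    def __init__(self, raw_text: str):
        self.raw = raw_text

    @abstractmethod
    def failed_addresses(self):
        """Return (address, canonical_failure_code) pairs from the transcript."""

    @abstractmethod
    def original_received_headers(self):
        """Return the 'Received' headers of the original message enclosed in the bounce."""


class MimeStructuredMessage(BouncedMessage):
    """Shared handling for all mailers that wrap bounces in MIME multipart/report bodies."""

    def original_received_headers(self):
        ...  # locate the message/rfc822 part, then read its 'Received' headers


class VanillaUnixMessage(BouncedMessage):
    """Plain-text, sendmail-style transcript format."""

    def failed_addresses(self):
        ...  # scan the transcript block for addresses and failure codes

    def original_received_headers(self):
        ...  # the original message follows the transcript verbatim
```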
The next stage of processing is Diagnosis which involves deciding whether the failure is permanent and whether the recipient is actually known to the COMLINK system. If the address in the mail message is found explicitly in the COMLINK database, if the failure is due to the user's account being closed out (as opposed to a transient error) and if the user has an active subscription, then BMES cancels the subscription.
Let the name in the reported address be ?name-1
Let the host in the reported address be ?host-1
For each child ?child-host of ?host-1
If ?name-1@?child-host is the email address of an active subscriber ?sub-1
Then suggest that ?sub-1 is a possible cause of the delivery failure
Rule 1: Probable-User-is-Child-Host
However, sometimes the bounced mail message reports an invalid address which is not present in the COMLINK database. At this point, the Heuristic Generation phase is entered. A small collection of heuristic candidate generation rules is used to suggest candidate addresses which are in the database and which might have led to mail being sent to the address reported in the message. For example the message might report a problem with "foo@ai.mit.edu"; in this case if "foo@w.ai.mit.edu" or "foo@mit.edu" are in the database, they would be good candidates for possible causes of the failure. A rule called Possible-User-at-Child-Host suggests the first. A second rule called Possible User-at-Parent-Host suggest the second. An English paraphrase of the first rule is shown in Rule 1.
Such candidate generation rules work by traversing COMLINK's map of the portions of the Internet domain name space for which it has subscribers. There are rules which suggest the superior domain (e.g. "mit.edu" is the superior of "ai.mit.edu"), any inferior domains (e.g. "w.ai.mit.edu" is an inferior of "ai.mit.edu"), and any sibling domains (e.g. "lcs.mit.edu" is a sibling to "ai.mit.edu") which the system knows about.
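A rough rendering of these candidate-generation rules, assuming the subscriber database is just a set of email addresses (the real rules run inside Joshua over COMLINK's host map):

```python
# Hypothetical subscriber database keyed by email address.
subscribers = {"foo@w.ai.mit.edu", "bar@mit.edu", "foo@lcs.mit.edu"}

def candidate_addresses(failed_address: str) -> list:
    """Suggest known addresses at parent, child, and sibling domains
    of the host reported in the bounce message."""
    name, host = failed_address.split("@", 1)
    parent = ".".join(host.split(".")[1:])                 # ai.mit.edu -> mit.edu
    candidates = []
    for sub in subscribers:
        sub_name, sub_host = sub.split("@", 1)
        if sub_name != name:
            continue
        is_child = sub_host.endswith("." + host)           # e.g. w.ai.mit.edu
        is_parent = sub_host == parent                      # e.g. mit.edu
        is_sibling = sub_host.endswith("." + parent) and sub_host != host
        if is_child or is_parent or is_sibling:
            candidates.append(sub)
    return candidates

print(candidate_addresses("foo@ai.mit.edu"))
# ['foo@w.ai.mit.edu', 'foo@lcs.mit.edu']  (order depends on set iteration)
```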
Most mailers attempt to deliver a message for several days when possibly transient problems are encountered; they deliver a failure message only after this elapsed time. Because of this long latency, bounced mail messages can continue to arrive for several days after a user's subscription has been canceled. If a bounced mail message refers to an Email address whose subscription has already been canceled, then the user of BMES is not bothered since the problem has already been handled; the message is presumed to have arisen during the period between the time when the Email address became invalid and when COMLINK was informed of this. This requires COMLINK to maintain an entry for users whose subscriptions have been canceled for a period of time after the cancellation; when BMES cancels a subscription, COMLINK creates a queued task entry in its database with a firing time of one month in the future. When this queued task runs it completely removes the user's account from COMLINK's database. However, dur-
In almost all cases, this situation arises when the failing address is reached through an "indirection": Either the address is on the mailing list of a redistributor, or it is the target of a forwarding entry for some other Email address, or there is an MX record [RFC974] involved. In these cases completely automatic processing isn't possible; there isn't enough information available to BMES to form a full diagnosis of the problem. Some of the required information is at a remote site and can be obtained only by communicating with an appropriate person at the remote site. It is a further complication that we don’t actually know what remote site does have the information we need.
3 MX records are part of the Internet Domain Name System; the MX record for a host specifies which machine should actually receive mail addressed to the original host.
BMES can help make an educated guess: If it can find the original message included in the bounced mail message and if there are received-from headers in the original message, then the server mentioned in the header might have relevant information. In particular, any user at this server who is marked as a redistributor in the COMLINK database is a particularly useful candidate. Redistribution entries contain an Email address for the administrator of the redistribution list; BMES formats the first draft of a standard Email message to the maintainer asking if the failing address is known to the administrator of the list and, if not, requesting help in figuring out what else might be going wrong (the user is then offered the option of further editing the text of this message).
Another heuristic is to look for Email addresses similar to the failing one at each of the sites mentioned by the “received-from” headers and to then send to the postmaster at each of these sites a message explaining the problem and asking for help.
There are some techniques which we employ manually today which are subject to automation. One is used when there are a small number of users at the site which bounced the mail but when it still isn’t possible to make a definitive identification of the invalid address (either because the bounced message doesn’t contain an address or it contains one which doesn’t match any entry in our database). In this case, we generate one message for each user in our database known at the site; this message explains that we are having delivery problems and asks for the user’s help if possible. There are two useful outcomes: 1) One of the users knows what’s going on and helps us fix it 2) One of these messages bounces, but since the bounced message has the specific user’s address in it (which our normal messages lack since they are sent to the whole subscription list) we are now able to determine which address is invalid. This technique is analogous to techniques used in model-based troubleshooting where a new and maximally informative test is generated.
Application Payoff
This application is not a commercial venture and so payback in monetary terms is not a relevant metric for evaluation. BMES was created as a support tool within a collaboration between a research group at MIT and a line organization at the White House Office of Media Affairs. Each partner in this collaboration had their own goals: The participants from the Executive Office of the President wanted to make information routinely and reliably available to the public and to demonstrate the viability of the Internet as a model for the future National Information Infrastructure. The research group at MIT wanted to explore issues in computer supported collaborative work and in intelligent management of information. For both groups, management of the bounced mail problem is a necessary supportive task but one which cannot be allowed to consume valuable resources; in particular, neither group has substantial manpower to devote to the task. Therefore the relevant metric for evaluating the payback of the investment is in terms of the reduction of manpower contributions from the two groups. This in turn directly translates into the effectiveness of the system at handling bounced mail messages.

We have been collecting data on the effectiveness of BMES since early in its lifetime. Figure 2 shows this data for the bulk of calendar year 1995. During this period, 63,091 bounced mail messages were received. BMES was capable of automatically processing 48,031 of these messages or 76% of the total. As can be seen from Chart 1, there is a great deal of temporal variability in the system’s performance. It simply seems to be the case that some weeks we run into problems with sites whose mail servers provide less information; these weeks have lower overall performance. However, it is also noticeable that there is a long-term trend of improvement in the system's performance. This is probably due to a combination of two factors: 1) Over time, we have confronted most of the mailer types that exist and have built up useful heuristics for dealing with them. 2) Over time, there has probably been a stabilization of technology in the community and a switch to more robust and informative mailer software.
Over the whole lifetime of the project, the time per day put into bounced mail handling has declined from nearly 3 hours per day in calendar year 1993 to about 1/2 hour per day now. We would certainly like to drive this number down further, but the transformation so far has been a qualitative one: The three hours per day required at the start was simply not viable; today the task is annoying but well within scope.
| File_name | Chars | Lines | Defs |
|---|---|---|---|
| New-db-interface | 2,180 | 69 | 10 |
| User-rules | 10,662 | 292 | 21 |
| Zwei-msg | 2,798 | 79 | 17 |
| Understanding-bounced-mail | 41,067 | 1,106 | 104 |
| Zmail-commands | 25,450 | 655 | 20 |
| Mailer-vanilla-unix | 10,308 | 256 | 11 |
| Mailer-smailer | 3,820 | 101 | 2 |
| Mailer compose | 4,090 | 95 | 2 |
| Mailer-mime | 6,212 | 148 | 9 |
| Mailer-mmdf | 7,482 | 176 | 14 |
| Mailer-mdmf | 4,789 | 125 | 4 |
| Mailer-mime-mdmf | 5,810 | 163 | 6 |
| Mailer-ucp | 3,603 | 89 | 1 |
| Mailer uucp warning | 2,814 | 70 | 1 |
| Mailer-ibm | 3,352 | 86 | 3 |
| Mailer-vines | 3,130 | 79 | 2 |
| Mailer-microsoft | 4,894 | 124 | 4 |
| Mailer-minos | 2,724 | 71 | 3 |
| Mailer-local-delivery-agent | 5,242 | 127 | 6 |
| Mailer-undeliverable | 3,139 | 77 | 3 |
| Mailer-cc | 2,514 | 63 | 3 |
| Mailer-aol | 3,497 | 93 | 6 |
| Mailer-lisp | 4,825 | 122 | 4 |
| Mailer-mercury | 3,266 | 84 | 3 |
| Mailer-cstaeu | 3,324 | 80 | 2 |
| Mailer smtp | 4,049 | 104 | 4 |
| Mailer-ksgbb | 3,405 | 86 | 4 |
| Bounced-mail-complaint-reply | 1,882 | 51 | 5 |
| Check-recipient | 2,541 | 61 | 3 |
| Simple-redirection | 5,491 | 140 | 5 |
| Relay-zmail-command | 25,987 | 702 | 65 |
| Total | 214,347 | 5,574 | 342 |
Table 1: Code Distribution in BMES
Implementation
Both COMLINK and BMES are implemented within the Symbolics Genera environment, which runs both on Symbolics hardware and on Digital Equipment Corporation Alpha AXP workstations (using the Open Genera emulator software from Symbolics). BMES is integrated with Genera's ZMail4 mail client which is built on an extensible substrate for complex mail handling applications. Much of the system relies on this substrate for low level processing such as mail file and header parsing, pattern matching and string searching. BMES itself is implemented in Joshua and makes extensive use of its Protocol of Inference to reason about the contents of the mail messages. BMES itself is invoked as a Zmail command which is applied to the mail file containing the bounced mail messages. When mail messages need to be sent to postmasters or users at remote sites, this is facilitated by use of ZMail's programmatic interface. Table 1 shows the component files in the system, including number of characters and lines of source text and number of definitions (rules, lisp functions, methods etc.).
Deployment and Maintenance History
Work on BMES was begun in the spring of 1993 as an adjunct to a predecessor system to COMLINK (called FORUM) which represented the first collaboration between the MIT AI Lab and the White House Office of Media Affairs. The bulk of BMES was completed by the summer of 1993. As COMLINK's development proceeded, a second version of BMES was developed by modifying the first version to take advantage of the extra information maintained by COMLINK. For a few months, COMLINK and FORUM were run in parallel while users were encouraged to switch their accounts over. During this period, both versions of BMES were run to manage problems from the two streams. The final cutover to COMLINK was completed in early 1995. Since that time, new features have been added to BMES as necessary.
It is interesting to note that BMES was literally developed and deployed simultaneously; it was an experience in evolutionary design of a complex software system. As soon as there was useful functionality, it was deployed and then enhanced during its ongoing operation.
4 There are several other products named Zmail which are not related to the one included in the MIT Lisp Machine software systems and its commercial offshoots such as Symbolics' Genera.
BMES is an unusual application: It is a component of the COMLINK system which supports thousands of users but there is only one user of BMES itself. That user is also the developer and maintainer. Currently, the bounced mail processing is done at MIT; however, we anticipate complete hand-off of the COMLINK system in the near future, at which time personnel in the Executive Office of the President will assume responsibility. As with much else about this application, a crisp definition of deployment is not easy. A large population has received information from the White House for several years now and the management of email delivery problems has been substantially automated as part of that task. It is true that the system is still operated by its developers, but that was the anticipated situation at the outset. Routine sustainable operation has been achieved and that has enabled other aspects of the project to proceed without undue drain on scarce personnel resources.
Future Work
Though BMES greatly reduces the effort required to process the mail backwash from a bulk electronic mail distribution, there is room for improvement. The addition of some form of reverse mapping of MX records would help to identify an address on the distribution list based on an address as determined from a bounce message. The domain name system does not provide such a mapping, so one would have to be constructed by iterating over all mail sites in the distribution database and doing a domain MX lookup for each one. Because of changes to the distribution database and the DNS, this reverse mapping would need to be updated regularly.
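A sketch of how such a reverse MX map could be built with the third-party dnspython package (its availability is an assumption of this sketch; the periodic refresh the text calls for is omitted):

```python
from collections import defaultdict
import dns.resolver   # third-party "dnspython" package

def build_reverse_mx_map(domains):
    """Map each mail exchanger host back to the subscriber domains it serves."""
    reverse = defaultdict(set)
    for domain in domains:
        try:
            answers = dns.resolver.resolve(domain, "MX")
        except Exception:
            continue  # unreachable or non-existent domain; skip it
        for record in answers:
            mx_host = str(record.exchange).rstrip(".").lower()
            reverse[mx_host].add(domain)
    return reverse

# Hypothetical subscriber domains taken from the distribution database.
mx_map = build_reverse_mx_map(["ai.mit.edu", "example.org"])
```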
As it is currently implemented, BMES is difficult to extend as new mailer types are discovered and as existing ones change. This difficulty is because the work of identifying mailer type is distributed over a number of ad-hoc parsers. As one adds a parser to recognize a new mailer type, one must be careful that this parser does not also recognize the messages of previously implemented mailer types. Perhaps reimplementing the parsers using a rule based parser generator would simplify the definition of mailer types.
The ideal solution to the problem of handling bounced mail would be the universal adoption of standards which specify how mail delivery status information is reported. If delivery failure notifications explicitly stated the reason for failure, and the failing address, as well as any addresses from which it might have been derived, then BMES could be replaced by a much simpler tool. Only one simple parser would be needed to extract the information from the bounce message. The system would require fewer, simpler rules for identifying the problem subscription. Recognizing the problem of numerous bounce mail formats, the Network Working Group of the Internet Engineering Task Force has recently proposed a set of standards [RFC1891, RFC1892, RFC1893, RFC1894] which specify how mailers should report delivery status. As sites upgrade their mailers to ones that adhere to these standards, there will be fewer and fewer bounce messages that will require a system like BMES to interpret.
References
Sigplan Notices, 23(Special Issue), September 1988.
XEP-0259: Message Mine-ing
Joe Hildebrand
mailto:jhildebr@cisco.com
xmpp:hildjj@jabber.org
2009-01-21
Version 0.1
| Status | Type | Short Name |
|---|---|---|
| Deferred | Standards Track | mine |
In servers that deliver messages intended for the bare JID to all resources, the resource that claims a conversation notifies all of the other resources of that claim.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
Contents
1 Introduction
2 Requirements
3 Use Cases
3.1 Determining Support: Servers
3.2 Determining Support: Clients
3.3 Receiving a message to the bare JID
3.4 Broadcasting ownership requests
3.5 Claiming ownership
3.6 Notification of ownership claim
3.7 Claim processing
3.8 Claims for Multi-User Chat rooms
4 Error Cases
4.1 Invalid "whose"
5 Business Rules
5.1 Generating IDs
5.2 ID Semantics
5.3 Comparing IDs
5.4 Accepting Multiple IDs
5.5 When to send?
5.6 Legacy Clients
6 Implementation Notes
7 Accessibility Considerations
8 Security Considerations
9 IANA Considerations
10 XMPP Registrar Considerations
11 XML Schema
1 Introduction
At the time of original writing of this XEP, many XMPP servers handle message stanzas sent to a user@host (or "bare") JID with no resource by delivering that message only to the resource with the highest priority for the target user. Some server implementations, however, have chosen to send these messages to all of the online resources for the target user. If the target user is online with multiple resources when the original message is sent, a conversation ensues on one of the user’s devices; if the user subsequently switches devices, parts of the conversation may end up on the alternate device, causing the user to be confused, misled, or annoyed.
This XEP proposes an approach for cleaning up the leftover conversation shards on alternate devices, paving the way for servers to deliver messages to multiple devices. As the basic approach, the receiving server asks all of the resources of a user “whose message is this?”. The first resource to say "mine!" wins.
2 Requirements
- Large changes SHOULD NOT be required to existing servers
- Clients that do not implement the new protocol MUST be able to participate in conversations
- All messages MUST NOT be delivered to all devices at all times, due to scale concerns
- Clients that do not own the message MUST be notified when a different device claims ownership of the message
- Multiple clients MUST be able to unambiguously decide which of them owns a given message.
3 Use Cases
3.1 Determining Support: Servers
If a server implements the Mine capability, it MUST specify the 'urn:xmpp:tmp:mine:0' feature in its service discovery information features as specified in Entity Capabilities (XEP-0115) or Service Discovery (XEP-0030). Clients MUST NOT send ownership changes if their server does not support this feature.
Listing 1: Client requests information about its own server
```xml
<iq type='get'
    from='romeo@montague.net/orchard'
    id='info1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Listing 2: Server responds with mine feature
```xml
<iq type='result'
    to='romeo@montague.net/home'
    from='montague.net'
    id='info1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    ...
    <feature var='urn:xmpp:tmp:mine:0'/>
    ...
  </query>
</iq>
```
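As a non-normative illustration, a client could check a disco#info result such as Listing 2 for the feature with a few lines of standard-library XML handling; the sketch assumes the complete result stanza is available as a string.

```python
import xml.etree.ElementTree as ET

DISCO_INFO_NS = "http://jabber.org/protocol/disco#info"
MINE_FEATURE = "urn:xmpp:tmp:mine:0"

def server_supports_mine(disco_result_xml: str) -> bool:
    """Return True when the disco#info result advertises the mine feature."""
    iq = ET.fromstring(disco_result_xml)
    query = iq.find(f"{{{DISCO_INFO_NS}}}query")
    if query is None:
        return False
    return any(
        feature.get("var") == MINE_FEATURE
        for feature in query.findall(f"{{{DISCO_INFO_NS}}}feature")
    )
```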
3.2 Determining Support: Clients
Clients that support this protocol MUST support XEP-0115, and MUST add the 'urn:xmpp:tmp:mine:0' feature to their entity capabilities, in order to allow for potential server optimizations.
Listing 3: Romeo publishes his capabilities
```xml
<presence from='romeo@example.net/home'>
  <c xmlns='http://jabber.org/protocol/caps'
     hash='sha-1'
     node='http://example.com/clients/Mine'
     ver='j+5eLRCz6NP6IEPob80JB6sWR3Y='/>
</presence>
```
Listing 4: Romeo responds to capabilities inquiry from his server
```xml
<iq from='romeo@example.net/home'
    id='disco1'
    to='example.net'
    type='result'>
  <query xmlns='http://jabber.org/protocol/disco#info'
         node='http://example.com/clients/Mine#/WmLAKHhB87d0qn5NUgxrr5NbfE='>
    <identity category='client' type='pc' name='Mine'/>
    <feature var='urn:xmpp:tmp:mine:0'/>
  </query>
</iq>
```
3.3 Receiving a message to the bare JID
When a server that implements the Mine capability receives a message addressed to a user’s bare JID, it MUST:
- Ensure that no "whose" element is already on the message. See the Errors section for processing.
- Add a whose element to the message, containing an id attribute with a new value
- Ensure that the same value of the "id" attribute is never sent to the same session
Messages that have been processed to include a valid "whose" element are now also considered an "ownership request".
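As a non-normative illustration of these processing steps, the sketch below uses Python's standard-library XML tools; a real server would operate on its internal stanza representation, and the single global counter is a simplification of the per-session uniqueness rule.

```python
import itertools
import xml.etree.ElementTree as ET

MINE_NS = "urn:xmpp:tmp:mine:0"
_next_id = itertools.count(1)  # simplification: one global counter

def make_ownership_request(message_xml: str) -> str:
    """Reject messages that already carry a <whose/>, otherwise stamp one."""
    msg = ET.fromstring(message_xml)
    if msg.find(f"{{{MINE_NS}}}whose") is not None:
        raise ValueError("incoming message already contains a 'whose' element")
    ET.SubElement(msg, f"{{{MINE_NS}}}whose", {"id": str(next(_next_id))})
    return ET.tostring(msg, encoding="unicode")

request = make_ownership_request(
    "<message to='romeo@example.net' type='chat'><body>Wherefore art thou?</body></message>"
)
print(request)
```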
Listing 5: Juliet sends Romeo an undirected message
```xml
<message
from='juliet@example.com/balcony'
to='romeo@example.net'
type='chat'>
<body>Wherefore art thou, Romeo?</body>
<thread>0e3141cd80894871a68e6fe6b1ec56fa</thread>
</message>
```
Listing 6: The ownership request, before broadcasting
```xml
<message
from='juliet@example.com/balcony'
to='romeo@example.net'
type='chat'>
<body>Wherefore art thou, Romeo?</body>
<thread>0e3141cd80894871a68e6fe6b1ec56fa</thread>
<whose xmlns='urn:xmpp:tmp:mine:0' id='4'/>
</message>
```
3.4 Broadcasting ownership requests
The receiving server MUST send a copy of the ownership request to each of that user’s non-negative priority resources. Each copy of the message MUST contain a whose element, each of which has the same id attribute.
Listing 7: Romeo's server forwards copies of the message to all of his resources
```xml
<message
from='juliet@example.com/balcony'
to='romeo@example.net/home'
```
3.5 Claiming ownership
When one client for a receiving user detects that the user’s attention has been directed to a given message, that client MUST send an ownership claim (mine!) to the bare JID of the receiving user. If there was a thread element in the original message, it MUST be included in the acceptance notification. There MUST NOT be a body element in the message, and the message SHOULD use the same message type as the ownership request. The mine element MUST include an id element for each of the messages that the client wants to accept. The mine element MUST include at least one id.
Listing 8: Romeo’s "work" client claims ownership
```xml
<message
to='romeo@example.net'
from='romeo@example.net/work'
type='chat'>
<thread>0e3141cd80894871a68e6fe6b1ec56fa</thread>
<mine xmlns='urn:xmpp:tmp:mine:0' id='4'/>
</message>
```
3.6 Notification of ownership claim
As with all messages sent to a bare JID at a server implementing the Mine feature, the acceptance message MUST be forwarded to all of the non-negative priority resources.
Listing 9: Each of Romeo’s clients receives the claim
```xml
<message
to='romeo@example.net/home'
from='romeo@example.net/work'
type='chat'>
<message
to='romeo@example.net/work'
from='romeo@example.net/work'
type='chat'>
<message
to='romeo@example.net/mobile'
from='romeo@example.net/work'
type='chat'>
```
3.7 Claim processing
When a client receives an ownership claim that was sent from that client for an ID that has not been previously claimed, the client MUST note that the message associated with the ID has been confirmed, and ignore any further ownership claims for that ID.
When a client receives an ownership claim that was sent from a different client of the same user for an ID that has not been previously claimed, the client MUST note that the message associated with the ID has been retracted, and ignore any further ownership claims for that ID. Retracted messages SHOULD be removed from the client’s user interface, or otherwise marked in some way as retracted.
Clients MUST ignore ownership claims for IDs for which they have no corresponding message. Assuming that messages are delivered and processed in order, these rules should ensure that exactly one client resource has a confirmed copy of the message.
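A minimal sketch of these claim-processing rules, as one client might implement them; the `pending`, `confirmed` and `retracted` containers are illustrative bookkeeping only, not part of the protocol.
```python
def handle_ownership_claim(claim_from: str, claimed_ids, own_resource: str,
                           pending: dict, confirmed: set, retracted: set) -> None:
    """Apply the claim-processing rules of section 3.7 inside one client.

    `pending` maps id -> message for ownership requests this client has
    received; `confirmed` and `retracted` record ids that already reached a
    final state.
    """
    for msg_id in claimed_ids:
        # Ignore claims for ids with no corresponding message.
        if msg_id not in pending:
            continue
        # Ignore any further claims once an id has been decided.
        if msg_id in confirmed or msg_id in retracted:
            continue
        if claim_from == own_resource:
            # Our own claim came back: the message is confirmed on this client.
            confirmed.add(msg_id)
        else:
            # Another resource of the same user claimed it first: retract it
            # here, e.g. remove it from (or mark it in) the user interface.
            retracted.add(msg_id)
```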
3.8 Claims for Multi-User Chat rooms
The same approach that has been described for one-to-one messages above can also be used by Multi-User Chat (XEP-0045) rooms. Rooms that want to participate MUST advertise the ‘urn:xmpp:tmp:mine:0’ feature in the room’s disco#info. The room MUST then perform the role of the server in the above descriptions, ensuring that unique IDs are assigned to all outbound groupchat messages that were addressed to the bare JID of the room. Ownership claims MUST be sent to the bare JID of the room, not the receiving user. This capability might be used to distribute questions to multiple experts in a room, such that a single expert answers a question.
Listing 10: Message is sent to the room
```
<message
from='hag66@shakespeare.lit/pda'
to='darkcave@chat.shakespeare.lit'
type='groupchat'>
<body>Harpier cries: 'tis time, 'tis time.</body>
</message>
```
Listing 11: Room forwards message to all participants as ownership request
```
<message
from='darkcave@chat.shakespeare.lit/thirdwitch'
to='crone1@shakespeare.lit/desktop'
type='groupchat'>
<body>Harpier cries: 'tis time, 'tis time.</body>
<whose xmlns='urn:xmpp:tmp:mine:0' id='5'/>
</message>
```
```
<message
from='darkcave@chat.shakespeare.lit/thirdwitch'
to='wiccarocks@shakespeare.lit/laptop'
type='groupchat'>
<body>Harpier cries: 'tis time, 'tis time.</body>
<whose xmlns='urn:xmpp:tmp:mine:0' id='5'/>
</message>
```
```
<message
from='darkcave@chat.shakespeare.lit/thirdwitch'
to='hag66@shakespeare.lit/pda'
type='groupchat'>
<body>Harpier cries: 'tis time, 'tis time.</body>
<whose xmlns='urn:xmpp:tmp:mine:0' id='5'/>
</message>
```
Listing 12: A participant claims ownership
```xml
<message
to='darkcave@chat.shakespeare.lit'
from='crone1@shakespeare.lit/desktop'
type='groupchat'>
<mine xmlns='urn:xmpp:tmp:mine:0' id='5'/>
</message>
```
4 Error Cases
4.1 Invalid "whose"
If a server receives a message addressed to the bare JID of a user, from a different user than the one in the to address, containing a "whose" or "mine" element, it MUST NOT forward the message on to any clients. This case is always either an attack, a misconfiguration, or the result of bad code. If the user in the from address is already known to the user in the to address (for example, the user in the to address has a presence subscription to the user in the from address), the server MAY send back a helpful "bad-request" error.
Listing 13: Romeo responds to a bad request from his friend Juliet
```xml
<message
to='juliet@example.com/balcony'
from='romeo@example.net'
type='error'>
<thread>0e3141cd80894871a68e6fe6b1ec56fa</thread>
<body>My client runneth over</body>
<whose xmlns='urn:xmpp:tmp:mine:0' id='4'/>
<error type='modify'>
<bad-request xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<text>Yours</text>
</error>
</message>
```
However, if the user in the from address is not known to the user in the to address, or the server prefers not to send helpful errors, the server MUST treat the message as if it was addressed to an unknown user. Otherwise, sending a message with an invalid "whose" or "mine" could allow an attacker to probe for valid users at a site.
5 Business Rules
5.1 Generating IDs
The value of the id attribute sent by servers MUST be valid output from the NODEPREP profile of stringprep.
5.2 ID Semantics
The value of the id attribute is completely opaque; receiving clients MUST NOT use any apparent order or semantics in the value of the id to perform optimizations or business logic.
5.3 Comparing IDs
Clients MUST only compare the values of IDs for equality, never for order. IDs MUST be compared for equality octet-for-octet or codepoint-for-codepoint; a basic string comparison with no extra canonicalization.
5.4 Accepting Multiple IDs
A client MAY send multiple id elements in an acceptance. Clients that receive a notification with multiple IDs MUST process each ID individually, as if multiple claims had been sent.
5.5 When to send?
To avoid race conditions and edge cases (including invisibility), if both the client and server support the Mine capability, the client SHOULD send ownership queries regardless of whether or not the client sees other resources for the same user online, or the capabilities of those other resources.
5.6 Legacy Clients
Clients that do not implement the Mine capability MAY be sent notifications by the server. The server MAY be optimized to avoid these notifications, however.
6 Implementation Notes
Some examples of events that might lead to a client sending an ownership claim:
• Clicking on a toast notification for the message
• Bringing the client window to the front within a short time after receiving the message, where the message is then displayed to the user
• Bringing the tab containing the message to the front
• Beginning to type a response to the message
• Closing the window containing the message at least several seconds after the message was received
• Clicking an accept button next to a message
• Shutting down the screen saver while the message is in the top-most window
• A camera notices the user’s eyes directed at the message
7 Accessibility Considerations
Some care should be given to the events that can cause ownership claims, particularly in the MUC client implementations, such that users with different abilities all have a chance to claim ownership.
8 Security Considerations
Clients MUST ignore acceptance notifications received from other users.
9 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA).
10 XMPP Registrar Considerations
This XEP proposes the new namespace ‘urn:xmpp:tmp:mine:0’.
4 The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
11 XML Schema
```xml
<?xml version='1.0' encoding='UTF-8' ?>
<xs:schema
xmlns:xs='http://www.w3.org/2001/XMLSchema'
targetNamespace='urn:xmpp:tmp:mine:0'
xmlns='urn:xmpp:tmp:mine:0'
elementFormDefault='qualified'>
<xs:element name='whose'>
<xs:complexType>
<xs:attribute name='id' type='xs:string' use='required'/>
</xs:complexType>
</xs:element>
<xs:element name='mine'>
<xs:complexType>
<xs:sequence>
<xs:element ref='id' minOccurs='1' maxOccurs='unbounded'/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name='id'>
<xs:complexType>
<xs:simpleContent>
<xs:extension base='xs:NMTOKEN'/>
</xs:simpleContent>
</xs:complexType>
</xs:element>
</xs:schema>
```
ACO documents clustering – details of processing and results of experiments
Łukasz Machnik*
Department of Computer Science, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warszawa, Poland
Abstract
Ant algorithms, particularly the Ant Colony Optimization (ACO) meta-heuristic, are a universal and flexible solution. In this publication the author presents an application of that technique in the document clustering area – a new document clustering method. The aim of this document is to present the details of the ACO document clustering method, potential ways to optimize its processing and detailed results of experiments.
1. ACO-based clustering method
The noticed analogy between ants finding the shortest way and finding the most similar documents (the shortest way between documents), together with the ability to use agents that construct their individual solutions as elements of the general solution, became the stimulus to begin research on using ant-based algorithms in the document clustering process [1].
1.1. Details of processing
The method of document clustering introduced here is based on the artificial ant system [2,3]. Such a solution is used as a method of finding the shortest path between the documents, which is the goal of the first phase (trial phase) of the method in question. The second phase (dividing phase) has the task of actually separating groups of similar documents.
The aim of the trial phase is to find the shortest path connecting every document in the set using the ACO algorithm [4,5]. That is equivalent to building a graph whose nodes make up the set of analyzed documents. The probability of choosing the next document $j$ by ant $k$ occupying document $i$ is calculated by the following function (1).
*E-mail address: L.Machnik@ii.pw.edu.pl
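As a sketch, a standard ACO transition rule of this kind, using the symbols defined below, has the form:
\[
p^k_{ij}(t) = \begin{cases}
\dfrac{\left[\tau_{ij}(t)\right]^{\alpha} \left[s_{ij}\right]^{\beta}}{\sum_{l \in Z_k} \left[\tau_{il}(t)\right]^{\alpha} \left[s_{il}\right]^{\beta}}, & \text{for } j \in Z_k, \\
0, & \text{otherwise.}
\end{cases}
\] (1)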
In the above formula, \( Z_k \) represents the list of documents not yet visited by ant \( k \), \( \tau_{ij}(t) \) represents the amount of pheromone trail between documents \( i, j \), \( \alpha \) is the intensity of pheromone trail parameter, \( \beta \) is the visibility of documents parameter, while \( s_{ij} \) is the cosine distance between documents \( i \) and \( j \). After the ants complete their traces the pheromone trail is evaporated and a new amount of pheromone is left between every pair of documents. The amount of pheromone left by the ants depends on the quality of the constructed solution (the length of the path). In practice, adding the new portion of pheromone to the trail and its evaporation is implemented by the formula presented below. This formula (2) is applied to every pair of documents \( (i,j) \).
\[
\tau_{ij}(t) \leftarrow (1 - \rho) \cdot \tau_{ij}(t) + \Delta \tau_{ij}(t) .
\] (2)
In the above formula, \( \rho \in (0, 1) \) stands for the pheromone trail decay coefficient, while \( \Delta \tau_{ij}(t) \) is an increment of pheromone between documents \( (i,j) \). Below the dependence (3) that controls the amount of pheromone left by ant \( k \) between the pair of documents \( i,j \) is presented.
\[
\Delta \tau^k_{ij}(t) = \begin{cases}
n/L_k(t), & \text{for } (i,j) \in T^k(t), \\
0, & \text{for } (i,j) \notin T^k(t)
\end{cases}
\] (3)
In the above formula, \( T^k(t) \) means the set of document pairs that belong to the path constructed by ant \( k \), \( L_k(t) \) is the length of the path constructed by ant \( k \), while \( n \) is the number of all documents. Finding the shortest path connecting every document in the set is equivalent to building a graph whose nodes make up the set of analyzed documents. Similar documents are neighboring nodes in the graph, with the degree of each node being smaller than or equal to 2, which means that in the final solution each document is connected to only two other (similar) documents – each document appears in the designed solution only once. Obtaining such a solution means the end of the first phase, known as the preparing (trial) phase.
The code below represents the trial phase.
```c
Procedure sequence_preparation()
{
    reset_pheromone();
    initialize_ants(number_of_ants);
    for(number_of_ants)
    {
        reset_ant();
        build_solution();
        update_best_document_sequence();
    }
    distribute_pheromone();
}
```
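For illustration, the pheromone update of formulas (2) and (3) can be sketched as follows; the `tau` and `paths` structures are assumptions made for this example, not taken from the KLASTERYZATOR implementation.
```python
def distribute_pheromone(tau, paths, n, rho):
    """Evaporate and deposit pheromone after all ants finish (formulas 2-3).

    tau   -- dict mapping document index pairs (i, j) to pheromone amounts
    paths -- list of (edge_set, path_length) tuples, one per ant
    n     -- number of documents in the collection
    rho   -- pheromone trail decay coefficient, 0 < rho < 1
    """
    # Evaporation: tau_ij <- (1 - rho) * tau_ij (first term of formula 2).
    for pair in tau:
        tau[pair] *= (1.0 - rho)

    # Deposit: every ant adds n / L_k on the edges of its own path (formula 3).
    for edge_set, path_length in paths:
        deposit = n / path_length
        for pair in edge_set:
            tau[pair] = tau.get(pair, 0.0) + deposit
```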
In the following stage of the process it is necessary to separate groups of similar documents from the sequence obtained in the first phase. The separation of groups is obtained by appropriate processing of the sequence of documents (the shortest path) received in the preparing phase [6]. The individual steps of that process are described below. The vector that represents the first document in the sequence is recognized as the centroid $\mu$ of the first group that is separated. In the next step we calculate the sum of all elements (positions) of the centroid vector. After that we calculate the cosine distance between the centroid vector $\mu$ and the vector $D$ that represents the next element of the document sequence. Next, we check condition (4). If it is true, then the considered element permanently becomes a member of the first group. We recalculate the value of the centroid and try to extend this group by adding the next element from the sequence.
$$\delta \ast \sum_{k=1}^{n} t_{\mu k} < \cos(\mu, D).$$ \hfill (4)
The $\delta$ parameter is called the attachment coefficient and its range is $(0,1]$. However, if the condition is false, then the separation of the first group is finished and the separation of the next (second) group begins. The vector of the considered document that could not be added to the first group becomes the initial centroid of the new group. The whole process is repeated from the beginning. Processing is finished when the whole sequence of documents has been processed.
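For illustration, condition (4) corresponds to a check of the following kind (used as `check_attachment_condition` in the pseudocode below); representing documents and centroids as plain lists of term weights is an assumption made for this example.
```python
import math

def cosine(u, v):
    """Cosine measure between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def check_attachment_condition(centroid, doc, delta):
    """Condition (4): delta * (sum of centroid components) < cos(centroid, doc)."""
    return delta * sum(centroid) < cosine(centroid, doc)
```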
The code below represents the dividing phase.
```c
Procedure groups_separation()
{
    while (available_documents)
    {
        if (current_document == first_document)
        {
            new_group_creation();
            add_document_to_group(current_document);
            centroid_calculation(current_group);
        }
        else
        {
            if (check_attachment_condition)
            {
                add_document_to_group(current_document);
                centroid_calculation(current_group);
            }
            else
            {
                new_group_creation();
                add_document_to_group(current_document);
                centroid_calculation(current_group);
            }
        }
    }
}
```
1.2. Variants of the method
The number of separated groups depends precisely on the attachment coefficient. When we use a big value (close to 1) of the $\delta$ parameter, as a result of processing we receive a large number of groups with a high degree of cohesion. Decreasing the $\delta$ value results in a smaller number of groups with less cohesion. In connection with the above conclusion, it is possible to propose two variants of the considered method [6,7].
The first variant, called by the author single pass, is based on a very precise execution of the trial phase – a large number of ants. The duration of the first phase increases; however, this permits accepting a smaller value of the attachment coefficient during the dividing phase and finishing processing after a single pass of the algorithm – a single trial phase and a single dividing phase.
The clustering method that uses the single pass variant is an example of a non-hierarchical clustering method. The main advantage of that method is that the operator does not have to set the expected number of clusters at the beginning of processing. The results received in this variant are less precise than those from the second variant; however, the time of processing is much shorter than that of the second proposed variant. This type of the considered method can also act as a trial phase for other clustering algorithms, for example the separation of centroids for the K-means method.
The second variant, called by the author periodic, differs a little from the one proposed earlier. It assumes periodic processing of both phases: trial and dividing. In every iteration of the dividing phase small numbers of neighbors are connected into small groups. The value of the attachment coefficient is very high in the initial phases and is gradually decreased to allow group creation in the next iterations. During processing each group is represented by its centroid. After group creation and centroid calculation the next iteration can be started – finding the shortest path between centroids and documents. The whole process is finished when all documents are connected into a single cluster or when the stop criterion is reached.
This variant is an example of an agglomerative hierarchical clustering method, which begins with a set of individual elements that are then connected to the most similar elements, forming bigger and bigger clusters. The result of hierarchical processing is a nested sequence of partitions. The main partition is placed at the top of the hierarchy; it includes all elements from the collection under consideration. The base of the hierarchy is formed by the individual elements. Every middle level can be represented as a combination of the clusters at the lower level of the hierarchy. The user can choose any level that satisfies him as the solution.
1.3. Optimization
The second variant proposed by the author is a dynamic one. It means that during each iteration the optimal solution (the shortest path) changes. The use of an optimization method that adapts the solution to a changing optimum is recommended. The key aspect is to use the solution received in the previous phases – the previous iterations – to find solutions to the changed problem. Until now, one of the dynamic problems solved by using ant algorithms has been that of finding a route in a telecommunication network [8,9]. In the presented method (periodic variant), a change (adding newly calculated centroids) takes place at an exact point in time (the next iteration) and the algorithm is required to adapt to the change. In the basic version of the presented method, after the problem is changed (adding new centroids and erasing the previously grouped documents) the algorithm is reset. If we assume that the change of the problem is relatively small, it is probable that the new optimum will be connected with the old one. It can be useful to transfer the knowledge discovered while creating the old solution to build the new one.
To realize the strategy described above, the author proposes to use a modification of the pheromone trail between documents as a response to changing the problem: adding a new centroid and erasing a document. During pheromone trail modification the problem is to keep the right balance between resetting a sufficient amount of pheromone to make the process of finding the new optimal solution flexible, and keeping enough knowledge to accelerate the searching process. Strategies of pheromone modification were presented inter alia in publications [10,11]. The modifications described in those publications can be called global, but their disadvantage is the fact that they do not take into account the place where the change occurred. Accordingly, to calculate the initial amount of pheromone trail for iterations after the first one, the author proposes using the strategy called the $\eta$-strategy, described in [12]. The $\eta$-strategy uses heuristic information, the distance between documents, to define the degree of compensation that should be performed on the value of the pheromone trail. This method is based on applying the function presented below to calculate the pheromone trail for every pair of documents/centroids $(i,j)$:
$$\tau_{ij} \leftarrow (1-\gamma_i) \cdot \tau_{ij} + \gamma_i \cdot (n-1)^{-1}. \quad (5)$$
Parameter $\gamma_i \in [0,1]$ is called the reset value and for every document/centroid its value is proportional to the distance between the document/centroid $i$ and the newly added element $j$. The value of the reset parameter is:
$$\gamma_i = \max\left(0, d^*_j \right), \quad (6)$$
where
$$d^*_j = 1 - \frac{s_{\text{avg}}}{\lambda \, s_j}, \quad (7)$$
$$s_{\text{avg}} = \left[ n \cdot (n-1) \right]^{-1} \sum_{i=1}^{n} \sum_{k<i} s_{ki}, \quad (8)$$
$$\lambda \in (1, \infty). \quad (9)$$
The parameter $n$ defines the number of elements that take part in processing.
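For illustration, the reset strategy of formulas (5)–(9) can be sketched as follows; the data structures and names are assumptions made for this example, not taken from the original implementation.
```python
def eta_strategy_reset(tau, elements, new_element, sim, lam):
    """Rescale the pheromone trail after the problem changes (formulas 5-9).

    tau         -- dict mapping index pairs (i, j) to pheromone values
    elements    -- list of documents/centroids currently in the problem
    new_element -- the newly added centroid
    sim         -- sim(a, b): cosine measure between two elements
    lam         -- the lambda parameter, lam > 1
    """
    n = len(elements)  # assumes n >= 2
    # Average pairwise similarity over all pairs k < i (formula 8).
    s_avg = sum(sim(elements[i], elements[k])
                for i in range(n) for k in range(i)) / (n * (n - 1))

    # Per-element reset value, larger for elements close to the change (6)-(7).
    gamma = []
    for el in elements:
        s_j = sim(el, new_element)
        d_star = 1.0 - s_avg / (lam * s_j) if s_j > 0 else 0.0
        gamma.append(max(0.0, d_star))

    # Blend the old pheromone with the uniform level 1/(n-1) (formula 5).
    for (i, j) in tau:
        tau[(i, j)] = (1.0 - gamma[i]) * tau[(i, j)] + gamma[i] / (n - 1)
```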
2. Results of the experiments
2.1. Experimental system
The experiments presented in this publication were performed using the KLASTERYZATOR ACO document clustering system. That system was implemented in ANSI C++. During the research two collections of documents were used. The first collection was the McCallum newsgroups collection, which contained documents from twenty forums of the USENET network. Documents were chosen randomly. The second set was created from documents of the Reuters-21578 repository. The documents from that collection were representatives of the biggest thematic groups.
2.2. Clustering algorithms
In [1] the most popular clustering methods were presented. In the experimental system three of them were implemented: K-means (non-hierarchical), the single link method (hierarchical) and the average link method (hierarchical). These methods were chosen because they are popular and commonly implemented in practice, which is what makes them good candidates for comparison.
2.3. Results evaluation
The results of the experiments were evaluated using an internal quality measure – intra-cluster variance. This method was chosen for two reasons. Firstly, the application of ant-based clustering in a real clustering task requires the evaluation of the obtained results without knowledge of the correct solution. Secondly, such functions provide additional information about the structure of the obtained solutions and can therefore help to understand and analyze the results. Additionally, it is important to remember that the number of groups received from processing was also a cluster evaluation measure. The method presented by the author has unique properties for controlling the trend of the number of created clusters.
2.4. Number of groups
The ACO clustering method is characterized by the ability to identify the number of clusters in the collection that is processed. The majority of popular methods (K-means, the single link method, the average link method) require an input parameter that constitutes the number of outcome groups. This kind of behavior requires ‘a priori’ knowledge of the collection that will be processed, or interaction with another algorithm that has a preparatory function. Such interaction is very often the source of many problems. Also, clustering algorithms that are able to identify the number of clusters automatically have many limits. An incorrect choice of the number and values of the centroids can have a dramatic impact on the final solution. This kind of situation can be observed in Figs. 4 and 5.
On the other hand, the impossibility to directly define the number of resultant clusters can be recognized as a disadvantage. There are many applications in which the user requires the ability to define that value by himself. The clustering method presented in this publication, besides identifying the number of resultant clusters, delivers a tool to manipulate the trend of the identified number of clusters. This tool is the attachment coefficient $\delta$. Fig. 1 shows the flexibility in manipulating the number of clusters using the $\delta$ parameter.
Fig. 1. Number of groups as a function of the attachment coefficient
2.5. Sizes of the groups
Figs. 2 and 3 present the way the sizes of the groups form for the methods considered during the experiments. The analysis of the results shows that the method proposed by the author is characterized by a proportional distribution of elements among clusters. Also, a trend of creating one superior group can be noticed. The results of ACO processing are quite similar to those of K-means processing. It is quite important to observe that the ACO clustering method has a tendency to limit the effect of creating one superior group, instead creating more balanced clusters with a high degree of cohesion (Figs. 4 and 5). The single link method and the average link method give much worse results than the first two methods. They have a tendency to create one predominant group.
Fig. 2. The sizes of groups – news collection (1) ACO method, (2) K-means method, (3) single link method, (4) average link method
2.6. Quality
The results of the experiments presented in the figures show that the quality of ACO clustering is very high for both text collections. The results obtained for different numbers of groups demonstrate the dominance of the ACO method over the other tested methods. The quality stability of the results for ACO clustering should be noticed.
The results generated by the single link method and the average link method are quite similar, but the difference between them and the other results is significant.
For the K-means method we received good results for some small numbers of groups, but the quality of processing gets worse at a larger number of groups. For the K-means method we can also observe a dramatic deterioration of result quality with a very large number of groups. This effect is caused by the random selection of centroids and can be limited by using special algorithms for centroid generation.
Fig. 4. The value of variance – news collection
Fig. 5. The value of variance – reuters collection
2.7. Time
The experiments show that for small collections of documents the ACO method is much slower than the other tested methods. However, it should be noticed that the bigger the collection, the more the presented method tends to be ahead of its competitors. Only the single link method is able to return results faster than the ACO method, but at the same time its quality and group distribution are much worse.
Fig. 6 presents the time of processing for document collections of different sizes. The time of processing depends on the number of resultant groups. The results with the best quality and good speed are obtained only by the method proposed by the author. It is also important to note that the fastest results are generated using quite a small group of ants. This is associated with a loss of quality, but even so the results are still better than those obtained by the other methods.
Fig. 6. Relation between time and number of processed documents
Conclusions
The experiments confirm the argument that ant algorithms can be successfully applied to text document processing. The attempt at creating a valuable clustering method based on the ACO meta-heuristic was successful. This proves the universal nature and flexibility of the ACO meta-heuristic. The tests performed in the test environment proved the utility and advantages of the method created by the author of this publication. The results obtained during the experiments are characterized by good quality, speed for big collections of documents, and flexibility in determining the number of resultant groups. It seems possible to increase the performance of the calculations by implementing parallelization of the processing. This topic will be dealt with in future research by the author.
References
Automatic Configuration of Opaque Network Functions in CMS
Publisher: IEEE
DOI: 10.1109/UCC.2014.122
Abstract—Cloud Management Systems (CMS) such as OpenStack are commonly used to manage IT resources such as computing and storage in large datacenters. Recently, CMS have also started to offer customers the possibility to customize their network infrastructure, allowing each tenant to build his virtual network out of elementary blocks such as traffic monitors, switches, routers, firewalls, and more. However, tenants have to choose those network services from the list of services made available by the CMS and have no possibility to add the applications they want.
This paper examines some of the modifications required in a CMS to support a tenant-centric network service model, in which each tenant can install and configure his preferred network functions, without being limited to the list provided by the CMS. A prototype implementation validates the proposed approach and demonstrates the extent of the modifications in terms of languages and software components.
I. INTRODUCTION
The concepts of Software Defined Networking (SDN) [1] and Network Function Virtualization (NFV) [2] allow Network Service Providers (NSPs) and companies to give more freedom to their customers. Unfortunately, today any change to the location and settings of customers' Virtual Machines (VMs) in data center networks has to be managed by the operator. In addition, tenants can use just the functions provided by their NSP. For these reasons, network operators are looking at new possible scenarios where tenants are offered the possibility to create Virtual Networks [3] managed and configured by the tenants themselves, without requiring operator action. In this way a tenant could define how his traffic should be processed using a set of network functions chosen by himself. This could also allow a tenant to decide how to connect his resources (VMs), without having direct control of the physical network. In particular, whether these functions are distributed in the operator network or all located in a data center, the service does not change from the tenant's point of view. Today NSPs which offer cloud-based solutions leverage a Cloud Management System (CMS) to manage computing and storage in their data center, and another component, called Network Operating System (NOS), for network management. NOS and CMS interact to guarantee a multi-tenant environment: the NOS receives from the CMS a virtual network definition for each tenant and configurations for each function of that network. This interaction is limiting, because the tenant is allowed to build his virtual network only by choosing from the network functions provided by his NSP. Hence, if a tenant would like to insert a different function, the operator has to modify his system, taking care of the integration of the new function in terms of configuration and communication with the other components (like other network functions or the NOS).
In this paper, we propose a possible solution that enables the configuration of network functions that are opaque from the operator's point of view. In particular, a network function is a module that processes traffic in a specific manner and could be implemented in software or deployed in a physical network element (e.g., firewall, DPI, NAT, router, etc.). In our vision, NSPs could allow tenants to insert new functions, written by any programmer, into their virtual networks, but the operator does not need to know how those functions work or which type of functions they are. Thus operators would handle these opaque network functions like black boxes, while assuring their full integration in the operator's network. This means that tenants have to be able to configure any function in one of the ways supported by the function itself: taking the example of a tenant that uses a firewall, he has to be able to load a set of protection policies onto the firewall and, similarly, a traffic pattern into a DPI to check for possible attacks.
The remainder of this paper is composed as follows: in Section II, we describe the different works that have completed our background; Section III presents an overview of our architecture; in Section IV, a prototype of our solution is described in detail; in Section V, we demonstrate the validity of our implementation through two use cases; finally Section VI concludes presenting possible future works.
II. RELATED WORK
The research world has presented different works somehow related to ours. Among these works, we can find possible architectures for managing Network Service Chains (NSCs). One of these architectures was described in the work by Beliveau [4], while the NSC Architecture (NSCA) is presented in [5]. However, such architectures do not have any mechanism to extend the set of allowed functions, and hence to introduce new functions, nor to configure them. In addition, the concept of a chain is more static than that of a virtual network: traffic can follow just one path chosen based on tenant policies, rather than being able to follow any arbitrary path in the network.
Another proposal related to virtual service chains is being developed within the European project UNIFY. The approach taken by this project is close to ours because, in UNIFY, NSPs can distribute network functions in the whole network, locating management aspects in an automated orchestration engine [6] [7]. The UNIFY project has also expressed the need to have a service abstraction model for defining and programming service chains; however, to the best of our knowledge, the configuration of the single network functions that compose a service chain is overlooked, leaving the configuration issue an open topic.
A service description is needed by the CMS to understand the basic requirements of an opaque network function. H. Song has noted in [8] the need for a standardization of the information model, in order to represent the user's functional and resource requirements, and to map and apply these requirements to the underlying infrastructure. The literature offers different solutions, which address description at the service level and at the resource level, from both the hardware (physical and virtual) and software points of view. One of these proposals is VXDL [9], a language for describing a virtual network topology, including storage, computing and links, and a virtual timeline that specifies when a certain resource is needed. Unfortunately this temporal constraint is difficult to synchronize with the orchestration engine. In this context, another example is the network-centric cloud architecture proposed in [10], where a centralized control layer should manage the resources available for all network services.
Finally, there have been several approaches in the literature for configuring network functions, like the NETCONF [11] and SNMP [12] protocols. However, from an operator point of view, the use of such protocols is quite limiting because tenants can use just those network functions which support them, while we envision an architecture that is as flexible as possible.
III. THE PROPOSED ARCHITECTURE
In our architecture, the main actors are: operator (or Network Service Provider, NSP), tenant and programmer.
The main objective of our architecture is to give flexibility to tenants, by allowing the set of functions available to a tenant to be extended according to the tenants needs. Reaching this goal by progressively increasing the overall number of network functions offered by the NSP is not trivial, because any requirement coming from a tenant might imply a huge integration cost; also, different tenants might request support for different network functions. This is why our proposal focuses on giving the possibility for a tenant to introduce any new network functions implemented by third parties (we refer to them as programmers) in his virtual network, and be able to configure them through a unified API provided by his network operator.
We would also like to relieve the programmer from the burden of integrating his own network functions, implemented as Virtual Network Functions (VNFs)1, into every specific NSP architecture. The VNFs should be readily usable in any present and future architecture, without the need for specific integration efforts.
Finally the network operator should be able to load into his own network any third-party VNF without additional complications. Furthermore, we would like to avoid the insertion of any VNF-specific configuration plug-ins inside the network operator’s CMS: this avoids the problem of supporting arbitrary front-ends inside the unified view offered by the CMS.
A. Challenges
There are challenges to be solved both when inserting such VNFs inside a virtual network and when configuring them. With respect to the insertion problem, there should be a way to load a VNF into a virtual network and link it to the other ones; furthermore, the spectrum of VNF configuration methods is very wide and, even if we can categorize them in common types, every function has its own quirks.
The insertion problem can be solved already by many CMS. If a programmer can provide a disk image of his VNF, a CMS can treat it like a regular Virtual Machine; also, since many of their network plug-ins already support patching VMs inside a virtual network, a basic level of insertion can be achieved today. Many of the outstanding issues are related to the configuration phase instead; hence we focused our attention on them.
We also believe that, with a rich configuration service, less complexity is needed in the insertion phase. As an example, let us consider the case of a third-party router deployed into a virtual network: in a traditional scenario, a tenant is required to deploy the router into the virtual network, then access its configuration interface through a virtual console (or similar mechanism) to configure the network interfaces of the router in terms of IP address, routing protocols, etc. In our vision, there should be no need to access this VNF-specific interface, and the tenant should be able to configure the router through the same API that he used to deploy the router in the network. In addition, with a suitable configuration service, an automatic reconfiguration service could be enabled, for example in the case of tenant configuration errors. Considering the same router and a third-party web cache connected to the same subnet, if the tenant changes the subnet prefix and reconfigures just the IP address of the router interface, the NOS could be able to recognize such a misconfiguration and should have the means to fix this error by properly reconfiguring the web cache.
Inserting opaque functions might bear high risks for NSPs, due to the lack of relationship between the programmer and the NSP that is installing an unknown function.
1 We use the terms "network function" and "VNF" interchangeably.
Each supported configuration method is handled by a dedicated configuration translator that is aware of all the particular techniques and parameters needed for that method. As shown in Figure 1, each translator configures the VNF directly. Having separate translators also makes the system more extensible and manageable, as it allows an easier insertion, replacement and removal of configuration methods: when the operator wants to support a new configuration method, the operator just has to make a new translator available.
Configuration translators receive multiple inputs (Figure 1) : (i) the tenants configuration received from the operator-defined API and saved into an object model to know the actual values that should be set inside the VNF; (ii) the VNF configuration rules, to know the format required to deploy those configuration values into the VNF; (iii) a set of VNF access parameters required to connect to the VNF (e.g., IP address of the VNF, root password, etc...) and to load the configuration into it.
The structure of the object model and the VNF configuration rules are VNF-specific; they are both provided by the programmer through a description file, written in the unified description format. This allows the programmer to write the description file only once, and use the same file even across different NSPs. The VNF access parameters are, instead, translator-specific and VNF-independent: the number and type of these parameters is standardized for each translator, but their actual run-time values are set by either the network operator or the programmer, depending on the specific case.
D. Configuration translators inputs
An instance of the object model, specific for a VNF, collects the configuration parameters of that VNF, provided by the tenant. The object model instance is self-descriptive: in other words, one can discover its structure from the instance itself. This is important because when the configuration translator receives the object model instance, it can derive the structure of the model that was used by the programmer in the description file; this is crucial to generate the VNF configuration in the right format. Using an object model also makes it easier to change the global API provided by the operator in a transparent way and avoids translator-specific data-structure formats for collecting the VNF configuration chosen by the tenant.
The VNF configuration rules are a set of directives used to drive the translator in generating the VNF configuration in the right format (Figure 1). They express the way to translate the structure and content of the object model instance into the specific structure required by the VNF configuration method. If a specific VNF supports multiple configuration methods, the programmer can include VNF configuration rules for all of them in the same description file.
The VNF access parameters are used to instruct the configuration translator about how to connect to the VNF and load the configuration provided by the tenant. As explained before, the programmer does not set all of these parameters, because some of them might be tied to management aspects internal to the NOS, like the VNF location.
All of these inputs will be used to generate the final VNF configuration, following the workflow shown in Figure 1. Taking the example of a firewall, a user would like to define the network policy rules. In this case, the object model instance contains the set of policy rules themselves; the VNF configuration rules specify the format of policy rules in the particular VNF architecture; VNF access parameters describe how to program the policy rules inside the firewall (e.g., the IP address, port and protocol required to connect to the firewall to deploy the configuration).
IV. ARCHITECTURE IMPLEMENTATION
This section describes a prototype implementation of the architecture presented. We have also validated its workflow using two use cases described in the next section. We start to present some details, which have been left out of the description to keep the architecture more generic, about the choice of the languages used for the description file and the VNF access parameters. Then we describe our prototype and its validation.
Listing 1: YANG language example.
```yang
module router {
    import ietf-inet-types { prefix inet; }
    import ietf-yang-types { prefix yang; }
    list interfaces {
        //api:file:header "//Beginning of the Config File";
        //api:file:list_format "%NAME \n";
        //api:file:separators "\n\n";
        //api:file:footer "\n //End of the Config File";
        key name;
        leaf name { type string; }
        list ethernet {
            //api:file:list_format "%NAME %VALUE \n";
            //api:file:separators "\n";
            //api:file:footer "\n";
            key name;
            leaf name { type string; }
            leaf address {
                //api:file:leaf_format "%NAME %VALUE\n";
                type inet:ipv4-address;
            }
            leaf hwid {
                //api:file:leaf_format "hw-id %VALUE\n";
                type yang:mac-address;
            }
        }
    }
}
```
A. Languages Choices
The YANG language [14] has been chosen for the description file. YANG is a data modeling language developed by IETF to model configuration and state data manipulated by the NETCONF protocol. In particular YANG was chosen for several reasons: it is orthogonal to network protocols and it is implementation-independent and human-readable; it is also a language developed with network configuration in mind and extensible, as it allows creation of user-defined statements.
In our case, the configuration data for a VNF is modeled in YANG by creating an object model specific for that VNF. An example of a possible YANG description file for a router is shown in Listing 1, where we define a structure to save the state of Ethernet interfaces. The idea is to have a data structure to enumerate all interfaces of a given router and, for each of them, store all of the network and physical addresses associated with that interface. Accordingly, a top-level interfaces list is defined to include the names for all the interfaces to be configured; a nested ethernet list contains all addresses specific for an Ethernet interface.
YANG provides by default a number of directives to validate some properties of its statements. Examples of directives provided by YANG are: type checking; a default value for a leaf statement; definition of mandatory or optional statements (like leaf, list, leaf-list and others). Other simple validations are possible through the definition of new YANG types. A more complex validation system would require an extension of the YANG language.
Since, in the proposed solution, the description file includes both the structure of the object model and the VNF configuration rules, it means that those rules have to be specified in the YANG language as well.
B. VNF configuration rules syntax
VNF configuration rules take the form of special comments in the description file (Listing 1). These rules are defined in a particular statement with the following structure:
```yang
<Translator_N>:<Rule_N> <Rule_V>
```
where `<Translator_N>` specifies which configuration translator the rule belongs to and `<Rule_N>` and `<Rule_V>` represent the rule name and value. This allows us to group all the rules for a specific translator under a specific prefix: we can consider them similar to a programming language namespace, that allows us to reuse a rule name across translators, if we need to. `<Translator_N>` can assume values like “file”, “cli”, “rest”, etc that denote the translators created in our system.
As an example, let us consider a translator to configure VNFs using files: each rule for this translator is preceded by the prefix “//api:file:”. We can see some of them in Listing 1: separators, list_format, header and footer. All rule values are interpreted as strings. When generating the configuration file, header and footer are printed respectively before and after the current element (e.g., list or leaf), while separators is used to separate child nodes of the current element (of course it is not applicable to a leaf statement, which does not have child nodes by definition). Furthermore
2 Usually a network interface is assigned only one network and physical address, but this is not true in the general case.
3 In fact, existence constraints could be needed: this is the case when a parameter may exist only if another one was set or if it has a particular value.
For the sake of simplicity, in our implementation we have set all the VNF access parameters as configurable by the tenant. In a real world scenario, however, some of these parameters (e.g., the IP address where the VNF is located) should be managed just by the operator.
Finally we can note that our solution supports functions that require multiple configuration files. The Config_File_API library can be instructed to write different portions of the same YANG file into different configuration files, so that VNFs that require it can dump different parts of their data into different locations. This can be done because of the object model abstraction: for the purpose of the Config_File_API library, a YANG list at topmost level of the YANG file is no different from another list nested under it.
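For illustration, the following minimal sketch shows how a file-based translator could walk an object-model instance and apply rules of the kind shown in Listing 1; the `render_config` function and its data structures are assumptions made for this example, not the actual Config_File_API code.
```python
def render_config(model, rules):
    """Generate a text configuration from a toy object-model instance.

    model -- dict mapping a list name to a list of entries, each entry being
             a dict of leaf name -> value (a stand-in for the object model)
    rules -- dict of format strings keyed "<list>.<rule>" or
             "<list>.<leaf>.leaf_format", using %NAME / %VALUE placeholders
    """
    out = []
    for list_name, entries in model.items():
        out.append(rules.get(f"{list_name}.header", ""))
        rendered = []
        for entry in entries:
            text = rules.get(f"{list_name}.list_format", "%NAME\n")
            text = text.replace("%NAME", list_name).replace("%VALUE", entry.get("name", ""))
            for leaf, value in entry.items():
                if leaf == "name":
                    continue
                fmt = rules.get(f"{list_name}.{leaf}.leaf_format", "%NAME %VALUE\n")
                text += fmt.replace("%NAME", leaf).replace("%VALUE", str(value))
            rendered.append(text)
        out.append(rules.get(f"{list_name}.separators", "\n").join(rendered))
        out.append(rules.get(f"{list_name}.footer", ""))
    return "".join(out)

# Example in the spirit of the Bind9 description file (Listing 2):
rules = {
    "zone.list_format": '%NAME "%VALUE" {\n',
    "zone.type.leaf_format": "    %NAME %VALUE;\n",
    "zone.file.leaf_format": '    %NAME "%VALUE";\n',
    "zone.footer": "};\n",
}
model = {"zone": [{"name": "example.com", "type": "slave", "file": "db.example.com"}]}
print(render_config(model, rules))
```
Running the example prints a zone block similar in shape to the generated configuration shown in Listing 4.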
Listing 2: Bind9 description file (excerpt).
```yang
module bind9 {
    list zone {
        //api:file:list_format "%NAME \"%VALUE\" {\n";
        //api:file:separators ";\n";
        //api:file:footer ";\n";
        key name;
        leaf name { type string; }
        leaf type {
            //api:file:leaf_format "%NAME \"%VALUE\"\n";
            type string; }
        leaf file {
            //api:file:leaf_format "%NAME \"%VALUE\"\n";
            type string; }
        leaf master {
            //api:file:leaf_format "%NAME { \"%VALUE\";\n";
            type string; }
    }
}
```
V. TESTING
Our prototype was validated using two network functions: Bind9 and Vyatta Core. Bind9 is an implementation of a DNS server and we have defined a YANG description file for this VNF collecting all the information needed to guarantee its correct behavior regardless of the role it is configured to act as: an excerpt of this description file is shown in Listing 2.
For our test, we have manually started an instance of Bind9 in our prototype and we have configured it to act as a Secondary Master (which gets the zone data from another name server that is the Primary Master for that zone) by editing its object model through the REST interface. To better understand the test, we show an excerpt of the final configuration file, automatically generated by our system, where we have defined a zone in Bind9 syntax (Listing 4). In particular, our test first uses a bash script to send HTTP messages to the NOS through the REST interface. After that, the Bind9 instance is interrogated directly to validate that the expected configuration was created and loaded correctly. The workflow of our test is shown in Figure 2, as well as the structure of our prototype: first of all, we have sent two messages to set the VNF access parameters and the configuration parameters for Bind9; the Config_File_API has read its three inputs, already explained, to generate and load the configuration file into the VNF; finally, we have interrogated the Bind9 VNF directly to verify that the whole process worked correctly.
We have done a similar test for the second use case, Vyatta Core, which is a software router. Listing 1 shows an excerpt of YANG description file for this router. For our test, we configured an Ethernet interface, defining its IP address and the other main parameters, as shown in Listing 3. As in the previous case, we have validated our configuration with another bash script. This test, as in the previous case, has created an instance of an ethernet list in the YANG object model and has set its parameters. Then we have validated the Level 3 configuration of the Vyatta Core instance by testing its reachability through an ICMP request.
Listing 3: Vyatta configuration file.
```
interfaces {
ethernet eth0 {
address dhcp
duplex auto
hw-id 00:0c:29:64:66:1c
mtu 1500
smp_affinity auto
speed auto
}
}
```
Listing 4: BIND9 configuration file.
```
zone "example.com" {
    type slave;
    file "db.example.com";
    masters { 192.168.1.10; };
};
```
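The final verification step, interrogating Bind9 directly, can be reproduced with any standard DNS client. The sketch below uses dig via Python's subprocess module; the server address is an illustrative assumption and not taken from the paper.

```python
# Sketch of the final verification step: query the Bind9 instance directly and
# check that it serves the zone defined in Listing 4.
import subprocess

BIND9_ADDR = "192.168.1.20"   # assumed service address of the Bind9 VNF

result = subprocess.run(
    ["dig", f"@{BIND9_ADDR}", "example.com", "SOA", "+short"],
    capture_output=True, text=True, check=True,
)
assert result.stdout.strip(), "Bind9 did not return SOA data for example.com"
print("Zone example.com is served:", result.stdout.strip())
```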
VI. CONCLUSION AND FUTURE WORK
This paper focuses on opaque network function configuration inside NSPs' networks. After illustrating the type of services that NSPs provide to their customers, we motivated the need for a tenant-centric model and showed how to extend the typical CMS architecture to integrate third-party VNFs. To do this, we leverage a VNF description file that allows the NOS to know the main aspects of an external VNF.
Finally, we presented a prototype of our solution. The prototype was validated by implementing VNF configuration through configuration files, using a solution that is independent of the specific format used by the VNF for its configuration files (e.g., XML, plain text, or a proprietary format). The prototype consists of a specific translator that creates configuration files, and our tests successfully validated it against two different network functions: Bind9 (a DNS server) and Vyatta Core (a software router).
Possible future extensions include the addition of more thorough validation mechanisms, since we currently leverage only the validation instruments provided by YANG. In particular, this work could concern both the validation of the configuration output (e.g., more complex constraint checking) and the validation of the correct integration in the system (e.g., guaranteeing that all requirements defined by the end user are respected, or guaranteeing the expected behavior of the virtual network). Our solution could also be tested with other types of VNFs to validate different configuration file formats.
ACKNOWLEDGMENT
The authors would like to thank PLUMgrid, Inc, a startup based in California, USA, which has supported this work.
I-ESA’08 Paper Summary
Table of Contents

Introduction
Business Impact of Interoperability
Frameworks and Architectures for Interoperability
Service Oriented Architectures for Interoperability
Approaches and Solutions for Model Driven Architectures
Enterprise Modelling for Interoperability
Approaches for Cross Organizational Processes
Design and Execution of interoperable Services
Semantic Services
Interoperability in Engineering
Interoperability in Product Design and manufacturing Engineering
Methods and Application for Semantic Mediation
Methods and Tools
Case Studies in Healthcare and Lifecycle Management
Introduction
The following is a short summary of the papers presented at the I-ESA’08 Conference, held in Berlin, 2008-03-26/28. The content of all papers available in electronic form is described in an abstract type format listing only the first (and second) author. The papers are grouped according to their main subject following partly the grouping of the conference sessions and proceedings. Papers in the groups are arranged in alphabetic order of the authors. Conference session and paper number are indicated by (xn-m) at the end of the paper summary.
Business Impact of Interoperability
R. Goncalves et al, present an analysis of the current Portuguese practices relating to European eGovernment. Starting with examples of International efforts on ICT integration and the expected evolution of proprietary, National and International standards, the paper describes the three most important Portuguese initiatives: enterprise’s portal, citizen’s portal and citizen’s card. A framework for common services and a model-driven interoperability framework will be the base for further initiatives(a1-2).
S. Izza et al, discuss the concept of agility of information systems, provide an approach to measure agility and study the role of interoperability in achieving agility. It evaluates the agility combining the agility measure of five complementary aspects: Process, Organizational, Informational, Resource and Environmental(a1-1).
H. Weigand, explores how the strategic value modeling approach c3-value can be of help in the decision making process in an IT outsourcing evaluation. The outsourcing process is described(b6-3).
Frameworks and Architectures for Interoperability
N. Chungoora, R. Young, discuss the possible configuration of frameworks to capture semantically enriched manufacturing knowledge for manufacturing interoperability. Feature oriented ontology-driven semantic frameworks, based on explicit definitions of manufacturing terminology and knowledge relationships, offer an attractive approach to solving manufacturing interoperability issues(c2-2).
A. De Nicola et al, propose an ontological framework supporting ALS (autonomic logistics services) and the dynamic composition of its ad-hoc maintenance programs. In particular the authors propose BPAL (Business Process Abstract Language) as the formal ontological foundation, derived from the BPMN proposed by the OMG(b7-1).
M. Heather et al, discuss the logical foundations for the infrastructure of the information market and propose an architecture for achieving interoperability using categorical higher order logic while meeting Gödel’s requirements for soundness, completeness and effectiveness(a3-2).
K. Mertins et al, give an overview about SME situation regarding enterprise interoperability and related research activities. A network systems framework and integrated methodological and software service solutions will be introduced for tackling SME challenges for cooperation establishment and operations starting and ending from a business perspective. An extended MO²GO software based process assistant will be explained in more detail(b5-1).
N. Protogerous et al, present the European project FUSE approach, which provides a methodology and a framework for support of services unified process to be used both by the IT industry and by individuals with little or no IT-experience, such as specific domain experts, end users, testers and community members. The FUSE Framework is based on and makes use of the Unified Process OPEN, extended participatory design (PD) and similar methodologies(b1-2).
W. Qingqing et al, propose a data exchange framework for data exchange in distributed and heterogeneous systems constructed on the architecture of Web services, in which a data provider deploys Web services for data exchange, publishes description of service function and exchange data on a register centre. Data requesters search for web services on the register centre according to their requirements on function and data. A prototype system is implemented to verify the proposed framework and matching mechanism(a5-1).
M. Rabe, P. Gocev, propose a semantic Web framework for rule-based generation of knowledge and simulation for cooperation and interoperability within product design and manufacturing engineering projects. Data and knowledge within the manufacturing domain are modelled within ontologies applying rule-based mapping. The framework facilitates the generation of new knowledge through rule based inference that enriches the ontology(c2-1).
S. Radeschütz et al, introduce a framework that offers various alternatives for matching process data and operational data to obtain a consolidated data description. The concept of deep business analysis is introduced to allow profound analysis and optimization of relevant data(c6-1).
T. Scheibler, F. Leymann, introduce a framework that provides configuration capabilities for EAI (executable enterprise application integration) patterns. The framework also allows to generate executable integration code from EAI patterns using a model-driven architecture approach. A tool providing this framework is presented(a6-1).
J. Ullberg et al, present a service interoperability framework implemented as an extended influence diagram describing a theory of enterprise service interoperability. The theory is augmented with a meta-model containing the information needed to perform an analysis of interoperability. A fictional example is provided to illustrate the employment of the meta-model and the theory in the context of IT decision making(a3-1).
I. Zinniku et al, describe a solution which supports rapid prototyping by combining a model-driven framework for cross-organisational business processes with an agent-based approach for flexible process execution and demonstrate how the W3C recommendation for semantic Web
service descriptions can be combined with the model-driven approach for rapid service integration(a4-1).
**Service Oriented Architectures for Interoperability**
*L. Bastida et al*, analyse the organisational and technological challenges an organisation adopting service-oriented architectures (SOA) faces and propose a set of best practices that will enable an organisation to efficiently adopt SOA. Discussing myths about SOA like easy integration of legacy systems and others, the four pillars of SOA (maturity, technology, governance and change management) and implications of adopting SOA are presented(b1-3).
*M. Hiel et al*, introduce an extension to the Service Oriented Architecture, called Adaptive Service Oriented Architecture (ASOA), leveraging it with concepts and mechanisms from Autonomic Computing and Agent Technology. The constituents and implications of an ASOA are illustrated with a prototypical architecture which deals with interoperability issues(b1-1).
*C. Schroth*, discusses a service-oriented reference architecture for business media that overcome the drawbacks of today’s B2B software products and services. Based on the IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) in combination with Schmid’s Media Reference Model, this reference architecture provides four main views: community (structural organization), process (process-oriented organization), services and infrastructure(c1-1).
*C. Schroth*, presents a reference architecture for service-oriented business media which allow the different involved stakeholders to organize and implement cross-company collaboration as efficiently as possible. Applying this reference architecture to the case of public administration, demonstrates that “Lean” service consumption and provision between organizations can be realized and the seven major categories of “waste” (defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing) are reduced(a2-3).
**Approaches and Solutions for Model Driven Architectures**
*R. Grangel et al*, present a proposal for goal-oriented enterprise knowledge modelling based on UML profiles, which is focused on representing enterprise knowledge. It is developed at the CIM level and presents different models to capture software requirements of a knowledge management system. In particular, the meta-model concerning goal dimension and the derived and implemented UML profiles are shown. The resulting goal diagram is explained by means of an example. This work aims on linking enterprise modelling and systems development(a4-3).
*Z. Panxiang et al*, describe a B/S MIS’s UI (user interface) framework based on model-driven runtime (MDR) and introduce the modelling process of the UI requirement analysis model in the requirement analysis stage, including the task model and domain model showing how BSMDR (business service model driven runtime) transform such models into platform independent models, including Object Model, Layout Model, Content Model, Presentation Model, Interaction Model and Mapping Model. Finally, the authors focus on the design and implementation of the BSMDR Framework and demonstrate their approach with an example(a4-2).
**Enterprise Modelling for Interoperability**
*R. Anderl et al*, introduce an object oriented approach for a process modeling language. Using UML as a starting point an object oriented process modeling method is differentiated. The basic concepts which are needed for process modeling are put into an object oriented context and are explained. The paper also deals with the most important methods behind object oriented process modeling and gives an outlook about this approach(b5-3).
Approaches for Cross Organizational Processes
E. Folmer, J. Bastiaans, compare several methods that can be used for design of semantic message-based B2B interaction standards thereby supporting the search of adequate methods for design of B2B standards(c6-2).
S. Koussouris et al, present generic models of the most common e-business transactions carried out mainly by small and medium enterprises. These models are constructed using state-of-the art notations and methodologies, which facilitate the application-to-application interconnection and the automated business document exchange between enterprises, governmental and banking institutions, covering not only national or sector specific business domain transactions but also cross-border and cross-sector processes(c5-2).
J. Touzi et al, describe a prototype to support morphism between a BPMN collaborative process model and a collaborative SOA architecture model. The authors propose the design of a collaborative SOA architecture according to MDA (model-Driven Approach) principles, using the business model (the needs) to design a logical model of a solution (logical architecture). The business model is a collaborative business model (BPMN, at the CIM level), while the logical model is a collaborative architecture model (UML, at the PIM level). This paper presents the theoretical aspects of this subject, the mechanisms of morphism and the dedicated translation rules. A prototype of a demonstration tool embedding the transformation rules and running those principles is described(c5-1).
S. Truptil et al, present the first results of a French project on IT system interoperability in emergency situations: a meta-model of crisis situations and its ontological links with collaborative process design, as well as the treatment of a first case study, an NRBC (Nuclear Radiological Bacteriological Chemical) exercise(b7-2).
Design and Execution of interoperable Services
T. Dirgahayu et al, present a design concept called abstract interaction for designing interaction behaviour of service compositions and demonstrate in an example the suitability of the concept for designing interaction behaviour at high abstraction levels by comparing it to BPMN interaction concept(b2-3).
S. De Labey, E. Steegmans, show that interoperability in Java applications can be achieved without compromising transparency by deferring interoperability provisioning to a pre-compiler allowing programmers to focus on the implementation of the business logic without being distracted by heterogeneity issues occurring in the service architecture in which their application will eventually be deployed(b2-1).
Y. Shiyang et al, present a preference-based service level matchmaking concept for composite services. The model is particularly efficient for multi-QoS by using utility function and suitable for price-sensitivity situation by introducing an acceptable price and propose a preference-based service level matchmaking model and algorithm. Experimental results indicate effective matching of a service level conforming to consumer preference(b3-1).
M. Tong et al, analyse the state of the art of service models and look at several key aspects of such models in detail, e.g., roles, interactive behaviours, value and risk. A new service behaviour model for co-production features of services named “Service-Provider-Customer (SPC)” is presented, including its graphical representations and attribute-based semantics descriptions as well as its validation through a case study in ocean logistics(b2-2).
Semantic Services
S. Izza, L. Vincent, present a service similarity approach for service matching in the context of ODSOI (Ontology-Driven Service-Oriented Integration) project that concerns the intra-enterprise integration issues in the field of manufacturing industry. The approach is based on an extension of
OWL-S service similarity. It proposes a rigorous quantitative ranking method based on some novel semantic similarity degrees. An implementation of this ranking method is provided in the form of a prototype coded on a Java platform(c7-1).
T. Kul et al, propose a novel event pattern based on a semantics operator for complex event processing pattern-oriented application to process RFID data. A formalized event hierarchy is used to model complex events together with an event ontology and abstract hierarchical views allowing to view the system activities at different levels. Several complex event patterns are proposed based on semantic event operators(b7-3).
K. Popplewell et al, outline the approach to be followed in the European research project SYNERGY, which envisages the delivery of Collaboration Knowledge services through interoperability service utilities (ISUs): trusted third parties offering web-based, pay-on-use services. The approach aims to (a) provide semantic ontology-based modelling of knowledge structures on collaborative working; (b) develop a service-oriented self-adaptive solution for knowledge-based collaboration services; and (c) facilitate the testing and evaluation of the efficiency and effectiveness of the solution in concrete case studies(c7-3).
Interoperability in Engineering
R. Anderl et al, aim to advance knowledge integration in product development, to support successful communication and cooperation in collaboration efforts and to tackle the new challenges in global engineering(c6-3).
A. Errasti, R. Poler, explore a methodology to support the redesign of internal and external operational integrated processes, applying the GRAI meta model and the design principles for interoperability, in order to improve the overall performance of an engineer to order supply chain. This research also includes a case study in the producer goods sector from an original equipment manufacturer (OEM) point of view(b5-2).
P. Mihók et al, summarize how trust and security can be considered in collaborative environments. Partial results of the field studies of two European IST projects, FLUID-WIN and SEAMLESS, are presented. Identity management problems and trusted operational scenarios are treated(b4-4).
R. Moksony, A. Giuliano, promote enterprise interoperability B2(B2B) platforms - developed in the European FLUID-WIN project - within the manufacturing sector for SMEs that are interested in cross-border business interactions. Focus is on the integration of different domains along the supply chain integrating both logistic service providers and financial service providers into the supply chain network(b4-1).
H. Weinaug, M. Rabe. The FLUID-WIN project is developing and using new business process models and methods for web support of a multi-disciplinary B2(B2B) network as base for the related tool developments for the smooth integration of logistic and financial services into a B2B manufacturing network(b4-2).
M. Zanet, S. Sinatti, follow an approach for platform design that introduces a new level of business, that has been called B2(B2B). The paper describes briefly the FLUID-WIN project that targets this new approach, and sketches the platform components to be developed in order to integrate the activities between significantly different business entities(b4-3).
Interoperability in Product Design and manufacturing Engineering
N. Lanshun et al, propose a novel iterative multi-attribute auction mechanism for reverse auction settings with one buyer and many sellers based on competitive equilibrium. The auctions support incremental preference elicitation and revelation for both the buyer and the sellers. Experimental results show that the co-evolutionary computation based iterative multi-attribute auction is a practical and nearly efficient mechanism. The mechanism and framework can be realized as a multi-agent based software system to support supplier selection decisions and/or deal decisions for both the buyer and the suppliers in B2B markets and supply chains(c3-3).
L. Shu et al, propose a new service engineering methodology named SMDA, to assist service providers to build better service system, which has as a part, the Service Quality Function Deployment (SQFD) to consider quality aspects of a service system. SQFD focuses on designing, evaluating and optimising service quality in the lifecycle of services. The three phases of SQFD, i.e., build-time QFD-oriented service quality design, run-time service performance evaluation and service performance optimization, are illustrated in this paper(c3-2).
**Methods and Application for Semantic Mediation**
D. Beneventano et al, developed a unified description of data models for ontology-driven semantic mapping, which is called the Logical Data Model (LDM) Ontology to support semantic mapping. LDM is a superset of the relational, hierarchical, network, object-oriented data models, which is represented as a graph consisting of nodes with labelled edges(a5-2).
M. Jankovic et al, presents an approach in IV&I (Inventory Visibility and Interoperability) business application development, which is based on business processes and user requirements represented in a form of an enterprise model. This approach is beneficial in supporting cross-enterprise business application integration when used in conjunction with semantic mediation tools(b6-2).
A. Smirnov et al, propose an approach to creation of self-organising service networks to support semantic interoperability between virtual enterprise members. Since the centralized control is not always possible, the approach proposes decentralized communication and ad-hoc decision making based on the current situation state and its possible future development. It proposes usage of self-organising networks of knowledge sources and problem solvers. The paper is devoted to questions of semantic interoperability in a kind of agent-based service networks and virtual enterprises. Ontologies are used for description of knowledge domains(a5-3).
**Methods and Tools**
G. Gautier et al, compare the organisation and the human perspectives on collaboration in order to identify the barriers to its implementation. In particular, it focuses on issues related to the life cycles, organisation structure, information flow and human motivation. It also introduces the case of virtual organisations and their difficulty to generate efficient collaboration(c1-3).
J-P. Pesola et al, describe a configurable tool integration solution (the Merlin ToolChain) that integrates project management, requirements management, configuration management and testing tools. Experiences from real life industrial case demonstrate the usefulness in collaborative software development and validation(a6-2).
V. Rajsiri et al, present a knowledge-based methodology for collaborative process definition dedicated to automate the specification of virtual organization collaborative processes. The approach takes as input, knowledge about collaboration from a collaborative platform (6napse developed by EBM WebSourcing), and produces as output a BPMN (Business Process Modeling Notation) compliant process. The knowledge instantiates an ontology contributing to the collaborative process definition. The ontology consists of (i) collaboration attributes, (ii) description of participants and (iii) collaborative processes inspired from the enterprise Process Handbook (MIT) and is described in some details(c3-1).
A. Välimäki, J. Kääriäinen, describes in a case study the process of mining distributed Scrum organizational patterns (Scrum is an agile project management method). The experiences and improvement ideas of distributed Scrum have been collected from a global company operating in the automation industry(c1-2).
R-M. Åhlfeldt, E. Söderström, examined healthcare and its information security problems and needs in the three Scandinavian countries: Norway, Sweden and Finland. Data were collected via case studies, and the results were compared to show both similarities and differences between Norway and Finland vs. Sweden. Similarities include the too wide availability of patient information, an obvious need for risk analysis, and a tendency to focus more on patient safety than on patient privacy(a7-2).
G. Benguria, I. Santos, present a strategy for becoming and staying interoperable in SME environments. This strategy consists of three pillars: an improvement cycle to guide the establishment and the maintenance of the interoperable status; an interoperability maturity model as a repository of good practices for being interoperable; an assessment method to be able to measure the level of interoperability and being able to establish feasible goals. A preliminary case study shows first results(a7-1).
J. Kääriäinen, A. Välimäki, studied the concept of Application Lifecycle Management (ALM) and gathered first experiences with a company moving towards distributed application lifecycle management. The results show that several benefits were gained when introducing an ALM solution in the case company(a7-3).
A URN namespace for network resources
Dijkstra, F.; van der Ham, J.J.
A URN Namespace for Network Resources
Status of This Document
Grid Final Draft (GFD), Community Practice.
Copyright Notice
Abstract
This document specifies the procedure to create Uniform Resource Names (URNs) in the urn:ogf:network namespace. URNs in this namespace can be used to define logical network resources, such as devices, (logical) ports, (logical) links, and topologies.
Contents

Abstract
Contents
1 Introduction
2 Registration
2.1 Namespace Identifier
2.2 Document Version
3 Syntax
3.1 Syntactic Structure
3.2 Encoding
3.3 Rules for Lexical Equivalence
3.4 Assignment
3.5 Validation
3.6 URN Rewriting
1 Introduction
Uniform Resource Names (URNs) are persistent, globally unique identifiers [RFC 2141].
Topology exchange between network operators requires globally unique identifiers for network resources. The urn:ogf:network namespace provides globally unique identifiers for naming network resources without central registration.
This document defines and registers the urn:ogf:network namespace in accordance with [GFD.191]. It defines a procedure by which an organisation can create a globally unique organisation identifier, which can be used as a prefix for locally unique resource identifiers to form globally unique resource identifiers.
By re-using the domain name system with a date, no additional central registry or procedural overhead is required to create a globally unique organisation identifier.
Notational Conventions
The keywords “must”, “must not”, “required”, “shall”, “shall not”, “should”, “should not”, “recommended”, “may”, and “optional” are to be interpreted as described in [RFC 2119].
2 Registration
2.1 Namespace Identifier
“urn:ogf:network:” The OGF is the Namespace Organisation for the urn:ogf:network namespace.
2.2 Document Version
Registration version number: 1
Registration date: 2013-04-23
3 Syntax
3.1 Syntactic Structure
A network resource URN (NURN) is defined by the following rules. These rules follow the Augmented BNF [RFC 5234] format.
NURN        = "urn:ogf:network:" ORGID ":" OPAQUE-PART *1QUERY *1FRAGMENT
ORGID       = FQDN ":" DATE ; ID of assigning organisation
FQDN        = 1*(ALPHA / DIGIT / "-" / ".") ; Domain name
DATE        = YEAR *1(MONTH *1DAY) ; Date of creation of ORGID
YEAR        = 4DIGIT
MONTH       = 2DIGIT
DAY         = 2DIGIT
OPAQUE-PART = *(ALPHA / DIGIT / OTHER)
OTHER       = ALLOWED / EXTENSION
ALLOWED     = "+" / "," / "-" / "." / ":" / "=" / "_" / "~"
EXTENSION   = "!" / "$" / "(" / ")" / "*" / "@" / ";" / "&"
QUERY       = "?" *QFCHAR
FRAGMENT    = "#" *QFCHAR
QFCHAR      = ALPHA / DIGIT / OTHER
ALPHA, DIGIT and HEXDIG are defined by [RFC 5234], OTHER is almost equal to <other> as defined by [RFC 2141], but lacks the single quote (’) character and includes the ampersand (&) and tilde characters (~). QFCHAR is a subset of pchar as defined by [RFC 3986], only lacking the single quote character (’) and percentage encoding ("%" HEXDIG HEXDIG).
ALLOWED characters MAY be used for the assignment of network resource URNs. EXTENSION characters SHOULD NOT be used to assign network resource URNs. To allow for future extensions, parsers SHOULD accept network resource URNs with EXTENSION characters.
The QUERY and FRAGMENT parts MUST NOT be present in any assigned URN. This specification reserves their use for future standardization.
A network resource URN MUST NOT contain percentage-encoded characters ("%" HEXDIG HEXDIG). It should also be noted that the following characters (which are either allowed by the URI or URN specification) MUST NOT be used in the OPAQUE-PART of a network resource URN: "'", "/", "?", "#", and "%".
DOMAIN is a fully qualified domain name (FQDN) of the URN assigning organisation in LDR format [RFC 5890]. Valid examples are example.net and example.xn--jxalpdplp. DATE is a date (either year, year+month or year+month+day). The combination of DOMAIN and DATE forms the organisation identifier, ORGID.
The full length of a NURN MUST NOT exceed 255 characters.
OPAQUE-PART is opaque, and MUST NOT be parsed or interpreted by any organisation except for the organisation that assigned the URN.
3.2 Encoding
A network resource URN uses a subset of 7-bit ASCII characters. No percentage-encoded characters are allowed.
3.3 Rules for Lexical Equivalence
Network resource URNs are lexical equivalent if and only if they are byte-equivalent after case normalisation.
Consider the following URNs:
1- urn:ogf:network:example.net:2012:path-glif-0418
2- UrN:oGf:NeTwOrK:eXaMpLe.NeT:2012:pAtH-gLiF-0418
3- URN:OGF:NETWORK:EXAMPLE.NET:2012:PATH-GLIF-0418
URNs 1, 2, and 3 are lexically equivalent to each other.
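A recipient could implement this equivalence test as a plain case-normalised string comparison. The helper below is a minimal, non-normative sketch added for illustration; it is not part of any OGF software.

```python
# Minimal sketch of the lexical-equivalence rule: NURNs are equivalent if and
# only if they are byte-equal after case normalisation.
def nurn_equivalent(a: str, b: str) -> bool:
    return a.lower() == b.lower()

assert nurn_equivalent(
    "urn:ogf:network:example.net:2012:path-glif-0418",
    "UrN:oGf:NeTwOrK:eXaMpLe.NeT:2012:pAtH-gLiF-0418",
)
```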
3.4 Assignment
The ORGID part must belong to the assignment organisation, as described in section 5.1.
Assigned network resource URNs MUST NOT contain a fragment or query part.
The characters defined in EXTENSION SHOULD NOT be used in assignment of network resources URNs, and are reserved for future use. Only characters in ALPHA / DIGIT / ALLOWED SHOULD be used in the OPAQUE-PART.
The length of the URN MUST NOT exceed 255 bytes.
3.5 Validation
A network resource URN that does not follow the specified syntax SHOULD be rejected.
No specific validation service or resolution service is defined in this document.
A recipient should either use the credibility of the sender or some other mechanism to judge the correctness of a given URN.
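For illustration only, a recipient could implement such a syntax check with a regular expression derived from the ABNF in section 3.1. The sketch below is a simplified, non-normative rendering of those rules (for instance, it does not verify that DATE is a real calendar date).

```python
# Non-normative sketch of a NURN syntax check based on the ABNF of section 3.1.
import re

NURN_RE = re.compile(
    r"^urn:ogf:network:"
    r"[A-Za-z0-9.\-]+:"                  # FQDN of the assigning organisation
    r"\d{4}(?:\d{2}(?:\d{2})?)?:"        # DATE: year, year+month or year+month+day
    r"[A-Za-z0-9+,\-.:=_~!$()*@;&]*$"    # OPAQUE-PART (ALLOWED and EXTENSION characters)
)

def is_valid_nurn(urn: str) -> bool:
    # Assigned NURNs carry no query or fragment part and are capped at 255
    # characters; both checks are reflected in the regular expression and test.
    return len(urn) <= 255 and NURN_RE.match(urn) is not None

print(is_valid_nurn("urn:ogf:network:example.net:2012:9ad7ef-mcasip-139284"))  # True
print(is_valid_nurn("urn:ogf:network:example.net:2012:switch/port3"))          # False (slash)
```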
3.6 URN Rewriting
A recipient MUST NOT rewrite the URN if the rewriting results in a URN which is not lexically equivalent to the received URN. In particular, percentage-decoding of the URN as
described in section 6.2.2.2. of [RFC 3986] MUST NOT take place.
If two lexical equivalent URNs with different capitalisation have been received, the recipient MAY pick one of the two capitalisations, and use that in all communications, effectively rewriting the URNs.
With the above exception, URNs SHOULD retain the same capitalisation in a message exchange.
4 Namespace Considerations
4.1 Scope
The urn:ogf:network namespace is created to allow network operators to uniquely define resources in their network and facilitate unambiguous exchange of topology data with other network operators.
The only requirement for naming network resources is administrative ownership of the domain name used for DOMAIN on the DATE of the identifier assignment (see section 5.1). No other central registration is required.
The intended use of the urn:ogf:network namespace is to describe logical network resources roughly on OSI layers 1 and 2. “Logical network resources” is intended to mean elements in a functional topology description, rather than physical resources. It is expected that a peering network is only interested in the functional description of the network, not in its (physical) implementation. Nevertheless, this document does not forbid the description of other resources, such as physical network resources for inventory management.
4.2 Resource Type Described
The exact type of resource described by a URN can not and MUST NOT be determined from the syntax of the URN. This information MUST be provided by the context or through other means by the data exchange protocol.
Network resource URNs SHOULD identify manifestations of network resources: they should refer to a functional component in a network that remains in place for a prolonged period of time. A new version of the resource SHOULD NOT receive a new identifier. The change of attributes over time SHOULD be dealt with by a protocol, not by a change of the URN.
4.3 Identifier uniqueness considerations
URN identifiers MUST be assigned uniquely – they are assigned to at most one resource, and MUST NOT be re-assigned.
URN assigning organisations MUST follow these requirements before assigning URNs to network resources.
A single network resource MAY be identified by multiple URNs.
5 Community Considerations
5.1 Process of Organisation Identifier Assignment
An organisation that wishes to become an assigning organisation must pick a globally unique organisation identifier.
An organisation identifier consists of two components, a fully qualified domain name and a date, which must both be chosen by the assigning organisation.
The assigning organisation MUST be the administrative contact of the chosen domain [RFC 5890] for at least the duration of the date.
It is RECOMMENDED that the date is a year. Organisations that expect their DNS registration to be more volatile SHOULD pick a more fine-grained date specification (year+month or year+month+day).
There is no need for the assigning organisation to register themselves at the Open Grid Forum (the Namespace Organisation for the urn:ogf:network namespace).
An organisation MAY use multiple organisation identifiers. For example, an organisation may pick a new organisation identifier in order to create a new syntax for their OPAQUE-PART syntax.
5.2 Process of Network Resource Identifier Assignment
An assigning organisation assigns OPAQUE-PARTs to its network resources. The following requirements apply to the OPAQUE-PART:
- The OPAQUE-PART MUST uniquely define at most one network resource;
- The OPAQUE-PART MUST NOT be re-assigned;
- The OPAQUE-PART SHOULD NOT specify any properties of the network resource;
- The OPAQUE-PART MAY contain some structure according to some policy internal to the assigning organisation;
- The OPAQUE-PART MUST have a valid syntax (use only allowed characters and not exceed the maximum length).
The reason that the OPAQUE-PART should not contain any properties is that a URN must be persistent: it must not change, even after the properties of the described resource change. Naming these properties in the URN gives a false sense of meaning to the URN. Peers may inadvertently assume the identifier describes certain properties, and act upon that, even if the properties have long changed.
Good examples of URNs:
```
urn:ogf:network:example.net:2012:9ad7ef-mcasip-139284
```
Not so good examples of URNs:
```
urn:ogf:network:example.net:2012:sw3.rtr.example.net:port3-1:vlan118
link:eth:US_CHI-NL_AMS-3937
```
A useful syntax for **OPAQUE-PART** is `<type>:<year of creation>:<sequence number>`, e.g.
```
port:2013:129
```
While `port:2013:129` contains attributes (type and year of creation), these may be acceptable as they will never change. `link:24x7-protected:925-175` contains attributes about the type of link, which may change in the future, and is therefore not a good URN. `link:eth:US_CHI-NL_AMS-3937` is also not a good URN. It contains the end points of a path, which are unlikely to change. However, if the path is actually an Ethernet LAN, it is possible to add another end-point, changing these properties. The network domains along the path may use this identifier for monitoring and do not accept a change in identifier. For that reason, it is best never to add attributes to a URN identifier.
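For illustration only, the sketch below shows how an assigning organisation might mint identifiers with this structure. The organisation identifier and the in-memory counter are assumptions of the sketch; a real assigner must persist its counters so that identifiers are never re-assigned.

```python
# Sketch of minting NURNs with the suggested "<type>:<year>:<sequence>" opaque
# part. ORGID and the in-memory counters are illustrative assumptions only.
import itertools
from collections import defaultdict

ORGID = "example.net:2013"                 # assumed organisation identifier
_counters = defaultdict(itertools.count)   # one monotonic counter per (type, year)

def assign_nurn(resource_type: str, year: int) -> str:
    seq = next(_counters[(resource_type, year)]) + 1
    return f"urn:ogf:network:{ORGID}:{resource_type}:{year}:{seq}"

print(assign_nurn("port", 2013))   # urn:ogf:network:example.net:2013:port:2013:1
print(assign_nurn("port", 2013))   # urn:ogf:network:example.net:2013:port:2013:2
```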
5.3 Identifier persistence considerations
[RFC 3406] requires that URNs must not be re-assigned. Ever. In practice, it is impossible to control what identifiers will be assigned in a few decades from now.
The requirement of the date in the organisation is sufficient guarantee. If a domain name is transferred, or an organisation decides to start over with the assignment of local identifiers, it is easy enough to create a new organisational identifier.
Any organisation that wishes to assign names in the urn:ogf:network namespace must do so after due diligence, and make sure that no re-assignment occurs within the namespace(s) of the organisation and that the assigned name does not contain attributes which can change during the lifetime of the resource.
6 Examples
Syntactically valid network resource URNs which MAY be assigned include:
- urn:ogf:network:example.net:2012:9ad7ef-mcasip-139284
- urn:ogf:network:example.net:20120916:4A6173706572
- urn:ogf:network:example.net:2012:
- urn:ogf:network:example.net:2012:l=214.56:x=a5y
The following URNs contain characters in the extension range. While they SHOULD NOT be assigned to network resources, a recipient SHOULD accept these examples:
- urn:ogf:network:example.net:20120916:4A6173706572(AMS-GEN)
- urn:ogf:network:example.net:20120916:l=*:x=a5y
The following example is a syntactically valid URN, which contains a query part and hence MUST NOT be assigned to a network resource, but MAY be used to query for a network resource, provided that subsequent standards define the syntax of the query part:
- urn:ogf:network:example.net:2012:9ad7ef-mcasip-139284?type=port
The following URN is invalid, and SHOULD be rejected by a recipient, because a slash is not allowed in a URN (this is a limitation of all URNs, not just this specification):
- urn:ogf:network:example.net:2012:sw3.rtr.example.net/port3-1
7 Security Considerations
While this specification goes to great length to avoid accidental naming collisions, malicious software can easily craft a NURN to collide with an existing NURN. Recipients of a NURN MUST take such risks into consideration.
Recipients of a NURN MUST NOT assume that a NURN was crafted by the domain specified in the DOMAIN part of the NURN, without a proper validation check.
The allowed syntax is so limited that it is not expected that similar-looking malicious NURNs will be an issue. Users and applications should be able to detect the differences between, for example, urn:ogf:network:example.com:4638127 and urn:ogf:network:examp1e.com:4638127.
Software that takes input from a user MUST ensure that the NURN is syntactically correct before transmitting it. For example, it SHOULD remove any trailing spaces from the user input.
Information in the OPAQUE-PART MUST NOT be interpreted to have any meaning whatsoever. While the originating domain may have included meaningful attributes in the NURN, these attributes may be out-of-date.
8 Prior Usage
URN identifiers in the urn:ogf:network namespace have been in use in three communities, GLIF, PerfSONAR, and AutoGOLE, with mutually conflicting syntaxes.
8.1 GLIF Community
The Global Lambda Integrated Facility (GLIF) is a community of research and education networks. Operators in this community agreed to use unique identifier for lightpaths, dedicated inter-domain circuits for researchers.
These identifiers take the form:
```
GLOBAL-ID  = "urn:ogf:network:" DOMAIN ":" LOCAL-PART
DOMAIN     = 1*(ALPHA / DIGIT / "-" / ".") ; Domain name
LOCAL-PART = 1*(ALPHA / DIGIT / "-" / ".")
```
For example:
```
urn:ogf:network:canarie.ca:kisti-uninett-glif-001
urn:ogf:network:es.net:4005
urn:ogf:network:dcn.internet2.edu:6811
```
The syntax is described in [GLIF-ID].
Identifiers described by the GLIF community generally do not contain a date, although it is possible to construct a URN which is both a valid NURN and GLOBAL-ID.
8.2 PerfSONAR Community
PerfSONAR is a distributed system for network performance monitoring on paths crossing several networks. Much of the perfSONAR protocols are standardised by the OGF in the Network Measurement (NM) and Network Measurement and Control (NMC) working groups. URNs in the urn:ogf:network namespace are used for topology description.
These identifiers take the form:
```
PS-URN       = "urn:ogf:network" 1DOMAIN-PART *1NODE-PART *1PORT-PART
               *1LINK-PART *1PATH-PART *1SERVICE-PART *1WILDCARD
DOMAIN-PART  = ":domain=" DOMAIN
NODE-PART    = ":node=" 1*PART-CHAR
PORT-PART    = ":port=" 1*PART-CHAR
LINK-PART    = ":link=" 1*PART-CHAR
PATH-PART    = ":path=" 1*PART-CHAR
SERVICE-PART = ":service=" 1*PART-CHAR
DOMAIN       = 1*(ALPHA / DIGIT / "-" / ".") ; Domain name
PART-CHAR    = (ALPHA / DIGIT / "-" / "." / "_" / "/")
WILDCARD     = ":*" ; Used for queries.
```
For example:
```
urn:ogf:network:domain=example.net
urn:ogf:network:domain=example.net:node=packrat
urn:ogf:network:domain=example.net:link=WASH_to_ATLA
urn:ogf:network:domain=example.net:node=packrat:port=eth0
urn:ogf:network:domain=example.net:port=Interface_To_Geant
urn:ogf:network:domain=example.net:node=packrat:service=Optical_Converter
urn:ogf:network:domain=example.net:node=packrat:port=eth0:link=WASH_to_ATLA
urn:ogf:network:domain=example.net:node=AMS:port=3/1:link=AMS-GEN
urn:ogf:network:domain=example.net:path=IN2P3_Circuit
urn:ogf:network:domain=example.net:node=packrat:*
```
The syntax is described in [perfSONAR-URN].
Identifiers described by the perfSONAR topology service are not valid network resource URNs. Note that the PS-URN syntax allows a slash in a URN, even though this is not allowed by [RFC 2141].
The meaning of the perfSONAR URNs is fundamentally different from network resource URNs: whereas perfSONAR URNs should specifically be parsed to find properties of the resource, this is not allowed for network resource URNs.
This document does not define a specific migration strategy for perfSONAR URNs.
8.3 AutoGOLE Community
Historically, the AutoGOLE community used an invalid variant of the network resource URN. AutoGOLE is a proof-of-concept architecture where over ten organisations provide a persistent testbed to show their ability to perform automatic network provisioning across network domains. The resources used in that demo follow the syntax below.
```
AUTOGOLE-URN = "urn:ogf:network:" TYPE ":" NETWORK *LOCAL-PART
TYPE = ("stp" / "nsa" / "nsnetwork")
NETWORK = 1*(ALPHA / DIGIT / "-" / ".") ; Human readable string
LOCAL-PART = ":" 1*(ALPHA / DIGIT / "-" / ".")
```
No formal publication has been made to describe this syntax.
The historic AutoGOLE identifiers are not valid network resource URNs. A drawback of these AutoGOLE identifiers is that they use a custom name to identify networks, rather than an identifier based on the domain name of the assigning organisation. Deploying this syntax on a large scale would therefore require the setup of a namespace registry.
The AutoGOLE community is currently in the process of adopting the valid network resource URNs.
8.4 Backwards Compatibility
Applications that wish to be backward compatible with the GLIF-based, PerfSONAR-based and AutoGOLE-based URNs are recommended to accept:
```
BC-NURN = "urn:ogf:network:" 1*(ALPHA / DIGIT / OTHER / "/")
```
Applications that decide to be liberal in the URNs that they accept must anticipate that other clients may do a more thorough syntax check and reject these URNs. In particular, the slash is formally not allowed in URNs.
Applications that merely accept URNs according to the BC-NURN syntax can still be compatible with this specification. However, as soon as a possibility exists that the application sends out URNs that do not comply with NURN syntax, the application is no longer compatible with the specification described in this document.
9 Contributors
Freek Dijkstra
SURFsara
Science Park 121
1098 XG Amsterdam
The Netherlands
Email: Freek.Dijkstra@surfsara.nl
Jeroen van der Ham
Faculty of Science, Informatics Institute, University of Amsterdam
Science Park 904, 1098 XH Amsterdam
The Netherlands
Email: vdham@uva.nl
Acknowledgments
The authors would like to thank the following people (in arbitrary order).
Jason Zurawski for also initiating this work in the network markup language (NML) working group.
Jens Jensen, Richard Hughes-Jones, Greg Newby, Joel Replogle, and Alan Sill for their help in establishing the urn:ogf namespace.
Aaron Brown and others at Internet2 for defining the urn:ogf:network namespace specification in the perfSONAR community and the network measurement (NM) working group.
Lars Fischer, Tom Lehman, Ronald van der Pol, and Thomas Tam for defining the urn:ogf:network namespace specification in the GLIF community.
Björn Höhrmann and Kadir Karaca Koçer of the urn and urn-nid mailing lists at the IETF for useful advice on long term requirements for URNs.
Intellectual Property Statement
The OGF takes no position regarding the validity or scope of any intellectual property or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; neither does it represent that it has made any effort to identify any
such rights. Copies of claims of rights made available for publication and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the OGF Secretariat.
The OGF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights which may cover technology that may be required to practice this recommendation. Please address the information to the OGF Executive Director.
Disclaimer
This document and the information contained herein is provided on an “As Is” basis and the OGF disclaims all warranties, express or implied, including but not limited to any warranty that the use of the information herein will not infringe any rights or any implied warranties of merchantability or fitness for a particular purpose.
Full Copyright Notice
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included as references to the derived portions on all such copies and derivative works. The published OGF document from which such works are derived, however, may not be modified in any way, such as by removing the copyright notice or references to the OGF or other organizations, except as needed for the purpose of developing new or updated OGF documents in conformance with the procedures defined in the OGF Document Process, or as required to translate it into languages other than English. OGF, with the approval of its board, may remove this restriction for inclusion of OGF document content for the purpose of producing standards in cooperation with other international standards bodies.
The limited permissions granted above are perpetual and will not be revoked by the OGF or its successors or assignees.
References
UBIQUEST, For Rapid Prototyping of Networking Applications
Ahmad Ahmad-Kassem\textsuperscript{1}, Christophe Bobineau\textsuperscript{2}, Christine Collet\textsuperscript{2}, Etienne Dublé\textsuperscript{1}, Stéphane Grumbach\textsuperscript{2}, Fuda Ma\textsuperscript{2}, Lourdes Martinez\textsuperscript{2}, Stéphane Ubéda\textsuperscript{3}
\textsuperscript{1}CNRS, \textsuperscript{2}Grenoble Institute of Technology, \textsuperscript{3}INRIA, \textsuperscript{4}INSA-Lyon
fuda.ma@insa-lyon.fr
\{etienne.duble,Lourdes-Angelica.Martinez-Medina\}@imag.fr,
\{ahmad.ahmad_kassem, Stephane.Grumbach,stephane.ubeda\}@inria.fr
ABSTRACT
An UBIQUEST system provides a high level programming abstraction for rapid prototyping of heterogeneous and distributed applications in a dynamic environment. Such a system is perceived as a distributed database and the applications interact through declarative queries, including declarative networking programs (e.g. routing) and/or specific data-oriented distributed algorithms (e.g. distributed join). Case-Based Reasoning is used for the optimization of distributed queries, as there is no prior knowledge on data (sources) in networking applications, and certainly no related metadata such as data statistics.
Categories and Subject Descriptors
H.2 DATABASE MANAGEMENT [Languages, Systems and Software]: Query languages, Query optimisation and processing, Rule-based program execution, Distributed databases, Distributed systems, Reasoning, Information networks
Keywords
Declarative networking, programming abstraction, case-based distributed query optimization.
1. INTRODUCTION
The trend towards ubiquitous computing is accelerating, driven in particular by wireless networking technologies that interconnect an increasing number of heterogeneous (mobile and wearable, energy-constrained, personalized) devices generating large amounts of data. These devices are autonomous, either static or mobile, and present constraints such as limited energy or communication capabilities. They usually take part in dedicated ad hoc networks, where application deployment, configuration and management are tedious and require significant human involvement and expert knowledge.
In [1] we introduce our vision of a new high-level programming abstraction based on the emerging and promising declarative networking approach and declarative data manipulation expressions. Declarative networking is an emerging data-centric approach where the distributed environment is perceived as a distributed database and the applications interact through declarative queries [19, 17, 16]. This approach has been pursued at the network layer with the use of recursive query languages initially proposed to express communication network algorithms such as routing protocols [17] and declarative overlays [16]. It has been further pursued in [15], where execution techniques for Datalog are proposed. Distributed query languages thus provide new means to express complex network problems such as node discovery [22], route finding, path maintenance with quality of service [4], topology discovery, including physical topology [3], etc. The declarative networking approach is well-adapted to social systems (e.g. games, social networks, sharing), where data is pushed or pulled with incomplete knowledge in a dynamic environment.
Also declarative query languages have already been used in the context of ad-hoc networks. Several systems for sensor networks, such as TinyDB [18] or Cougar [8] have been proposed. They use the relational model to represent device (sensor) features and application data; they offer SQL-like languages to express data manipulation. These systems also address solutions to perform efficient data dissemination and query processing. In both cases, a distributed query execution plan is computed in a centralized manner considering the network topology and the capacities of the constrained nodes, which optimizes the placement of sub-queries in the network [8, 18]. Declarative methods have been used also for unreliable data cleaning based on spatial and temporal characteristics of sensor data [14].
As far as we know, there is no system that integrates, in a uniform way, network aspects, middleware and data management. UBIQUEST merges declarative programming languages and query languages for specifying data manipulations and distributed algorithms. Furthermore, these languages are used to intentionally express the destinations of messages, for naming and accessing data in the context of networks and dynamic environments.
The work presented in this paper describes the architecture and components of an UBIQUEST system – http://ubiquest.imag.fr – that implements this approach.
It is, in effect, a large distributed database system that provides a unified view of “objects” handled in networks and applications. It blurs the borders between the network, operating system and middleware layers. However, from the data management point of view it should provide a means (i) to localize data (mobile applications) or define the scope of a query, (ii) to consume, filter and aggregate data (continuous queries), (iii) to consider query operators that may correspond to programs, and (iv) to optimize queries even when no metadata or statistics are available. For that, we use Case-Based Reasoning (CBR) – learning the cost of query plans (cases) while executing them – and pseudo-random query plan generation when classical optimization techniques are inappropriate.
The paper is organized as follows. Section 2 gives an overview of our approach based on an example unifying data and networks management functions. It defines an UBIQUEST system as a set of interconnected UBIQUEST nodes. Section 3 presents the architecture of an UBIQUEST node and details its components. Section 4 focuses on the execution engines that perform program execution and global query evaluation. Section 5 presents our proof of concepts as a platform for simulating UBIQUEST systems. Finally, Section 6 concludes the paper and discusses future work.
2. UBIQUEST DATA-CENTRIC APPROACH
With declarative networking, the network is abstracted as a large distributed database providing unified view of "objects" handled by both networks and applications. Such a database stores information about the declarative programs, routers configuration, states and characteristics of the network. Rule-based programs usually correspond to network operations or protocols triggered by data updates. Rules are evaluated over local data and may communicate updates to other nodes in the network using communication primitives.
The UBIQUEST approach merges the strengths of two areas: (i) databases, and (ii) declarative networking. With this approach a programmer can specify the behaviour of the system / application (the what) rather than having to describe the details of the system (the how). This goes one step further than the overlapping approaches above, for example by allowing the destinations of messages to result from a query.
An UBIQUEST system runs on a set of computing devices interconnected through a wireless network (cf. Fig. 1). Every device embeds a virtual machine in charge of data management, processing queries (data selection and updates) and messages propagation. A message is the unit of communication among UBIQUEST nodes. It has two main parts: (i) networking information (e.g. logical destination, next hop, TTL) and (ii) a payload where the content of the message (i.e. queries or items) is embedded.
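For illustration only, the message format described above can be pictured with the following Python sketch; the field names (dest, next_hop, ttl, payload) are assumptions drawn from this description, not the actual UBIQUEST types.

```python
from dataclasses import dataclass
from typing import Any, List, Union

# Illustrative sketch only: field names are assumptions based on the description
# above (networking information plus a payload), not the actual UBIQUEST types.
@dataclass
class Payload:
    payload_id: str              # identifier used by the Payload Dispatcher
    content: Any                 # a query, a rule predicate, or a list of items

@dataclass
class Message:
    dest: Union[List[str], str]  # logical destination: node ids, or a query (intentional)
    next_hop: str                # physical next hop chosen by a propagation program
    ttl: int                     # time-to-live
    payload: Payload
```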
All exchanges between nodes related to communication protocols, to resource discovery or to any other applicative aspects are carried out by queries and data. This blurs the traditional distinction between communication middleware and application layers. Queries are defined using either rule-based languages (e.g. for network data query expressions or distributed algorithms) or declarative query languages (e.g. for querying application data with a global point of view). For a detailed presentation of these languages refer to [1].
Query optimization is based on CBR-based approach and pseudo-random query plan generation. This means that we learn the cost of query plans (cases) while executing them. These cases are reused for generating plans for further similar queries. If there is no convenient case, we use classical heuristics and random choice (e.g. when there is no statistics for join ordering and selection of algorithms) to generate query plans.
To illustrate our approach, let us consider an application concerning a virtual world game divided in areas and having some avatars that are located within a single area at a time (see Fig. 2).
The objective of the game may be social interaction or team fighting; this does not matter for understanding the example. Every node of an UBIQUEST system has information on its own avatars and their neighbors (avatars located in the same area).
Data location is thus application driven.
Let us now assume that the Yellow avatar, owned by node G, is moved from area 7 (where avatar Green owned by node J is localized) to area 8 where the Red avatar (node E) is localized. The Positions table after this operation follows:
| Positions | Avatar | Area | Owner |
|-----------|--------|------|-------|
| 7         |        | J    | G     |
| 8         |        | G    | E     |
| 9         |        | E    | I     |
| 2         |        | I    | D     |
| 2         |        | D    | E     |
The movement is coded by several updates executed at node G (owner of Yellow) for cleaning area 7, changing the Area attributes of the avatar and finally for storing the new area exploration. The first update for cleaning the area 7 is:
    Delete from Positions
    Where Area = (Local Select Area from Positions
                  where Avatar = 'Yellow')
      and Area not in (Local Select Area from Positions
                       where Avatar <> 'Yellow' and Owner = SELF)
    Stored on SELF;
The keyword LOCAL indicates that the subquery has to be evaluated by the node over local data only. The sub-queries are local and the delete operation too as it concerns only data stored on SELF. Such a query is executed at the node level and processed in a distributed way with the following principles:
1. No centralized control. Query processing is performed in an environment that is highly dynamic, and has to adapt to and recover from the network evolution. The control needs to be fully distributed over the network.
2. Scarce metadata. The network being highly dynamic, there is no stable knowledge on the data organization. Resource discovery is combined with networking protocols.
3. Everything in the database. The network management is done through queries.
3. UBIQUEST NODE
An UBIQUEST node is a device equipped with an UBIQUEST Virtual Machine (UBIQUEST VM) complemented with a Device wrapper that allows device/VM interaction (see Fig 3). The UBIQUEST VM is composed of: (i) a Local DMS, (ii) an UBIQUEST API, and (iii) an UBIQUEST Engine comprising sub-engines in charge of evaluating global queries, executing rule-based programs, maintaining sensed data and the list of physical neighboring nodes.
3.1 Local DMS and UBIQUEST API
The Local DMS stores and manages data as Itemsets: application data (e.g. sensed data), network data (e.g. routing tables, neighbor table), rule-programs (e.g. distributed algorithms that can be dynamically loaded/removed from the system), and internal data (e.g. device specific data) used for running other UBIQUEST VM components.
The UBIQUEST API manages all interactions between the UBIQUEST Engines and the rest of the world: local applications, device sensors and other UBIQUEST VM through message exchange.
As shown in Fig. 3, the API is composed of: (i) the Application API, in charge of the interaction with applications running on the local node, (ii) the Reception and Emission modules to deal with message exchange among UBIQUEST nodes, (iii) the Sensing API that locally stores data coming from sensors embedded in the physical device, and (iv) the Payload Dispatcher, which manages Payload exchange among UBIQUEST VM sub-components.
The Application API module validates DLAQL queries/updates submitted by applications, and translates them into an internal representation before sending them to the UBIQUEST Engine for evaluation. The Reception Module receives messages from other UBIQUEST nodes and decides if the payload of the incoming message has to be treated locally. It checks if the local node is part of the logical destination of the message. This process may involve interaction with the UBIQUEST Engine to resolve intentional expressions of logical destinations (i.e. destination expressed using a query). Finally, the Reception Module sends the Payload of the message to the Payload Dispatcher to treat it, if the local node is one of the destinations, or forward the message to other destinations through the Emission Module, if not.
Using Payload, logical destinations and a ProgramId identifying a dissemination protocol, the Emission Module builds a new message, invokes the UBIQUEST Engine to compute the immediate physical destination(s) from the logical ones, and sends the message over the network.
The Payload Dispatcher maintains a record of the identifiers of payloads that are currently executed at the node. This allows determining if a received payload was already executed, and thus avoids loops. When it receives payloads from the Application API, it generates a new identifier for registering. When it receives payloads from the Reception Module, the Payload is forwarded to the UBIQUEST Engine for treatment.
When a payload is not in the record, the Payload Dispatcher generates a new identifier, registers the payload and transfers it to the corresponding engine. If the identifier is in the records, the payload dispatcher transfers it to the corresponding engine instance according to the query/result type and the payload identifier. If the message contains several destinations, the payload dispatcher sends it to the emission module, which constructs a message and propagates it over the network using a program (dissemination protocol) selected by the UBIQUEST Engine (Communication Module).
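A minimal Python sketch of this duplicate-avoidance bookkeeping follows (hypothetical names; the real dispatcher also distinguishes query types and engine instances).

```python
class PayloadDispatcher:
    """Illustrative sketch of the duplicate-avoidance logic described above."""
    def __init__(self):
        self.seen = {}          # payload_id -> engine instance handling it

    def from_application(self, payload, new_id, engine):
        # Payloads submitted locally get a fresh identifier before registration.
        payload.payload_id = new_id
        self.seen[new_id] = engine
        engine.handle(payload)

    def from_network(self, payload, engine):
        if payload.payload_id in self.seen:
            # Already known at this node: hand it to the same engine instance
            # (e.g. a partial result for a running query), avoiding loops.
            self.seen[payload.payload_id].handle(payload)
        else:
            self.seen[payload.payload_id] = engine
            engine.handle(payload)
```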
3.2 Sensing and Topology Engines
These two modules are autonomous and react to changes in the environment detected by the device and signaled through the Device Wrapper. The Sensing Engine gets the measures coming from physical sensors embedded in the device (e.g. temperature, location) and stores these values in corresponding itemsets. These itemsets are predefined and adopt a common structure (e.g. itemset Temperature(NodeId [key], value)). The Topology Engine is responsible for updating the Link itemset, defined as Link(NodeId [key], Neighbor [key]), according to physical network connections that are established or removed. The Link itemset is mandatory and is sufficient to permit communication among nodes.
3.3 Communication Module
The Communication Module has two different roles: (i) determine if the local node is part of the logical destination of incoming messages, and (ii) determine what is(are) the next hop(s) to transmit a message to a logical destination.
The logical destination of a message is either expressed extensionally using a list of node identifiers, expressed intentionally using a query returning node identifiers, or expressed by a combination of both. If it is expressed extensionally, determining if the local node takes part in the logical destination of a message is straightforward. In the other case, the Communication Module asks the Distributed Query Engine to solve the intentional destination (i.e. obtain extensional destinations) before deciding.
To determine the next hop(s) for propagating a message, the Communication Module selects a propagation program and invokes the Rule Program Engine to execute it. The default propagation program simply broadcasts to all neighbors (i.e. the next hops correspond to all items of the Link itemset). Other propagation programs may be written by developers (e.g. by exploiting and maintaining a routing table) and may be automatically selected by the Communication Module.
3.4 DQE Engine
The Distributed Query Engine is responsible for executing global DLAQL queries. The DLAQL language extends the well-known SQL2 data manipulation language to conform to the data distribution policy of UBIQUEST. This means that a DLAQL expression may explicitly indicate on which UBIQUEST node data has to be stored. The role of the DQE Engine is to build and execute efficient local query execution plans according to a given cost function (expressed as a combination of real cost parameters). Execution plans are composed of classical physical operators (implementing algebraic operations) and specific operators to invoke programs or propagate subqueries. Efficient execution plans are selected using a combination of Case-Based Reasoning and pseudo-random query plan generation.
The Distributed Query Engine is composed of: (i) a Query Scheduler, (ii) a Query Optimizer and (iii) an Execution Engine. The Query Scheduler rewrites a global query into a set of sub-queries and schedules their evaluation (e.g. a global UPDATE query is decomposed into a sequence of SELECT, DELETE and INSERT sub-queries to read the old value, delete it and insert the new value). Moreover, this module rewrites a query considering local and distant Itemset fragments generating a query (or set of queries) equivalent to the original one.
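As a rough illustration of the rewriting step (simplified dictionaries instead of DLAQL; not the actual Query Scheduler), a global UPDATE can be decomposed as follows.

```python
def decompose_update(table, where, new_values):
    """Sketch of the Query Scheduler idea: a global UPDATE becomes an ordered
    sequence of sub-queries (read the old value, delete it, insert the new one)."""
    return [
        {"op": "SELECT", "table": table, "where": where},
        {"op": "DELETE", "table": table, "where": where},
        {"op": "INSERT", "table": table, "values": new_values},
    ]

# Example (hypothetical values): move avatar Yellow to area 8.
plan = decompose_update("Positions",
                        {"Avatar": "Yellow"},
                        {"Avatar": "Yellow", "Area": 8})
```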
The Query Optimizer is based on the Case-Based Reasoning (CBR) approach as in [22]. It proposes to retrieve and adapt query plans using the experiences gained from the execution of past similar queries. When no knowledge is available it randomly generates query plans using classical heuristics [13, 23].
3.5 Rule Program Engines
The Rule Program Engine is in charge of executing rule-based declarative programs exploited for specifying distributed algorithms (e.g. networking protocols, sub-query execution). The engine selects which rules have to be triggered and executes them over the local data. The rule execution may involve local data storage or emission to the neighborhood.
The proposed Netlog and Questlog rule-based programming languages extend Datalog with communication primitives, as well as aggregation and non-deterministic constructs which are standard in network applications. The computation of rule programs is local, and the result can be either stored locally on the UBIQUEST node on which the rules run, or sent to other nodes. The Rule Program Engine receives payloads from the Payload Dispatcher and has to treat their Contents containing either items (new facts or query results) or predicates corresponding to queries.
If a Content contains facts, the Rule Program Engine identifies which rule program has to be triggered by comparing the new facts with the predicates in rule bodies (Netlog). Then, it retrieves the corresponding rules from the DMS and evaluates them in forward chaining mode until a fixpoint is reached.
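The forward-chaining evaluation can be pictured as the usual naive fixpoint loop. The following Python sketch is generic (it is not the Netlog engine) and uses a toy version of the routing rules of Section 4.2, flattened into a single fact set.

```python
def forward_chain(facts, rules):
    """Naive forward chaining to a fixpoint: apply every rule until no new
    facts are produced (a generic sketch, not the Netlog engine itself)."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:
            return facts
        facts |= new

# Toy example echoing the routing rules of Section 4.2 (invented link data).
links = {("SELF", "a"), ("a", "b")}

def base_routes(fs):
    # Route(x, y, y, 1) :- Link(x, y), for every known link.
    return {("route", x, y, y, 1) for (x, y) in links}

def extended_routes(fs):
    # Route(SELF, dest, neigh, l+1) :- Link(SELF, neigh), Route(neigh, dest, _, l).
    return {("route", "SELF", dest, neigh, ln + 1)
            for (_, src, dest, _nh, ln) in (f for f in fs if f[0] == "route")
            for (s, neigh) in links
            if s == "SELF" and src == neigh}

routes = forward_chain(set(), [base_routes, extended_routes])
# routes now contains ("route", "SELF", "b", "a", 2) among the derived facts.
```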
If a Content contains a predicate corresponding to a query (Questlog), the Rule Program Engine identifies which rule program has to be triggered by comparing the predicate with rule heads; then it retrieves the corresponding rules from the DMS and evaluates them in backward chaining until the full query result is computed.
If a Content contains query results, these results are exploited to continue query evaluation.
4. QUERY AND RULE-BASED PROGRAM EXECUTION
4.1 Distributed Query Execution
As said in section 3.4, incoming global queries are rewritten by the Scheduler as a sequence of queries that are to be evaluated in correct order to produce the final result (e.g. evaluate asynchronous subqueries before executing rewritten upper level query). These queries generally contain access to local data and to distant data, according to the horizontal fragmentation of global itemsets. An optimal query plan has to be generated for each one of these queries.
A query plan is a tree whose nodes are physical operators corresponding to data manipulations. A root node corresponds to a DLAQL command (i.e. SELECT, INSERT, DELETE, UPDATE), intermediate nodes correspond to computation operators (e.g. unions, joins, filters or aggregates), and leaves correspond to data access operators: local DMS querying, sub-query emission to all neighbors, or rule-based program invocation.
A case is generated for any (sub-)query expression Q evaluated on a node. It is composed of the expression of Q, a query plan P for Q and a set of cost parameter measures taken during the execution of P. A case is stored in the local case base of the node. The following example is a case for a query finding all avatars in area 7:
    Q    = "Select Avatar from Positions where Area = 7",
    P    = Union( DMS(σ_{Area=A}(Positions)), SubQ(σ_{Area=A}(Positions)) ),
    Cost = { Energy = 0.5%, Time = 12 ms, Memory = 2%, ... }
The query plan P involves computation of a subquery on local DMS (DMS operator) and emission of a global sub-query (SubQ operator). Cost parameters are normalized when possible.
In our approach, the query plan generation is a recursive process.
The optimizer checks if the incoming query Q is known in the case base, i.e. if there exist similar cases corresponding to this query, and selects the best case/plan among them according to the optimization objectives (i.e. minimizing the given cost function). A similarity function to compare a case and a query has been defined, based on a classification of query expressions (i.e. based on DLAQL clauses, see [20]). If a plan is selected, it is adapted to the current expression of Q and executed; otherwise a plan is generated using a pseudo-random approach. This generation process applies classical heuristics and random choices when missing metadata (e.g. statistics) would be needed. For example, if Q contains a set of join operations, it randomly selects one join operation and a corresponding algorithm (join operator) and returns a plan for this join operation involving two sub-queries. The same process is applied recursively for every sub-query. An in-depth description of the optimization process is given in [20].
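The retrieve-or-generate loop can be summarised by the following sketch (illustrative only; the similarity function, adaptation step and cost model are placeholders, not the ones defined in [20]).

```python
def choose_plan(query, case_base, similarity, adapt, random_plan, threshold=0.8):
    """Case-based plan selection sketch: reuse the cheapest sufficiently similar
    past case, otherwise fall back to pseudo-random plan generation."""
    candidates = [c for c in case_base if similarity(query, c["query"]) >= threshold]
    if candidates:
        best = min(candidates, key=lambda c: c["cost"])
        return adapt(best["plan"], query)
    return random_plan(query)

def record_case(case_base, query, plan, measured_cost):
    """After execution, store the plan and its measured cost as a new case."""
    case_base.append({"query": query, "plan": plan, "cost": measured_cost})
```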
The Execution Engine executes a query plan P using the well-known Iterator model [10] for the physical operators. It also coordinates the local and distant sub-queries and constructs a final result from sub-query results. During the execution, the cost parameters (energy, time, memory etc.) are measured and a new case is built.
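The Iterator (open/next/close) discipline can be sketched with Python generators composed into a plan tree; the data below is invented for illustration and mirrors the example query of Section 4.1.

```python
def scan(rows):
    """Leaf operator, e.g. a local DMS scan."""
    for row in rows:
        yield row

def select(pred, child):
    """Filter operator: pulls rows from its child on demand."""
    for row in child:
        if pred(row):
            yield row

def union(*children):
    """Union operator, e.g. merging the local DMS result with SubQ results."""
    for child in children:
        yield from child

# Plan for "Select Avatar from Positions where Area = 7", with invented data.
local_rows  = [{"Avatar": "Green", "Area": 7}, {"Avatar": "Yellow", "Area": 8}]
remote_rows = [{"Avatar": "Red", "Area": 8}]
plan = union(select(lambda r: r["Area"] == 7, scan(local_rows)),
             select(lambda r: r["Area"] == 7, scan(remote_rows)))
print([r["Avatar"] for r in plan])   # ['Green'] - the remote part contributes nothing here
```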
4.2 Rule program execution
As we already explained the Rule Program Engine receives payloads from the Payload Dispatcher and has to treat their Contents containing either items (new facts or query results) or predicates corresponding to queries.
In addition, the Rule Program Engine propagates new items or new queries to other nodes, through the UBIQUEST API, and/or stores new items in the DMS. The Engine has some additional functions, such as timers, necessary for networking protocols, and also uses optimization techniques, such as the triggering of rules by new facts, which avoid unnecessary computations, when there are no changes in the input of rules.
Let us assume the following simple routing table maintenance protocol:
    Route(SELF, dest, dest, 1)   :- Link(SELF, dest).
    Route(SELF, dest, neigh, l2) :- Link(SELF, neigh),
                                    Route(neigh, dest, _, l1),
                                    l2 := l1 + 1.
These rules will be executed if a new neighbor is discovered (i.e. new item in the Link itemset). Satisfied rules involve local storage and broadcasting of new facts (heads of the rules) to all neighbors (\(\_\) symbol). This is done via the Emission Module alone as the destination is expressed extensionally (i.e. all neighbors). The following sequence diagram describes the whole process:
4.3 Combining DLAQL queries and rule programs
Going back to the evaluation of the query Q = "Select Avatar from Positions where Area = 7", one can figure out that this leads to useless sub-query evaluation. The distant part (SubQ) is useless because the node locally stores its avatar (localized in area 7) and its neighbors (i.e. in the same area).
Such knowledge can be represented as rule-based programs executing specific algorithms to solve these sub-queries. Here, this program is the local identity where no sub-query is emitted:
    Positions(_, area, _) ← Positions(_, area, _).
The DQE Engine may exploit this program to solve (part of) the query Q, thanks to the correspondence table between queries and programs. The following sequence diagram shows the interactions among UBIQUEST node components for such an evaluation.
5. SIMULATION PLATFORM
We have developed a platform (see Fig. 6) facilitating the development and monitoring of UBIQUEST applications. This platform offers tools for editing and compiling rule-based programs, and allows the simulation of networks of UBIQUEST nodes.
The simulation platform has three main components:
- The Network Editor, which allows building and simulating a network with various UBIQUEST nodes;
- The Network Monitor, which allows visualizing and interacting with the network at run time; and
- The Node Monitor, which allows monitoring the activity of, and interacting with, individual nodes.
The Network Editor allows creating groups of nodes, displaying the status of the nodes in each group and installing rule programs on them. They can have different colors, radio range, and characteristics, such as mobile or fixed. The system creates the groups and displays the nodes on the left part of the screen. Each node is listed and for each node one can see its identifier, address, position and radio range.
The Network Monitor offers the view of the different groups of nodes, represented by different shapes and different colors, and the connections between them (if the nodes are located inside the radio range of another node). Each node has a unique identifier. The Network Monitor also allows interacting with the network, and modifying its configuration before starting or during the simulation, by moving nodes, changing their radio range, or deleting edges or nodes, for instance.
The Node Monitor exhibits information about the node selected by the user, displayed on the right part of the screen. It contains six tabs: Display itemset, Programs, Messages, API, DQE, and Statistics. The Display itemset tab allows the user to choose an Itemset existing in the DMS of the node and to display the values of each item of this Itemset. It is important to notice that the content is updated on the fly. For example, you can choose to display the content of the Itemset "Route" to see all the routes contained on the selected node. The next tab simply displays which programs are installed on the node, with the possibility for the user to enable or disable them on this node. The Messages tab lists the messages exchanged by the node; by selecting one message in particular, you can display its content. The API tab permits modifying the content of the DMS of the selected node by adding an item in one of the Itemsets of the node, or by submitting queries expressed in DLAQL, as an application would do. The DQE tab allows monitoring query execution by exploring the case base (i.e. query families, query plans and measures of computation cost), displaying the list of pending subqueries and partial results for any of the queries running on the selected node.
The last tab shows some basic statistics about the node such as the number of Select queries or Update queries done in the database.
We also developed a simulation and emulation environment for a detailed analysis and evaluation of queries for a large class of algorithms and protocols.
6. ACKNOWLEDGMENTS
This work has been supported by the ANR-09-BLAN-0131-01 UBIQUEST Project (http://ubiquest.imag.fr), financed by the French National Research Agency (ANR).
7. REFERENCES
DISCRETE VISUAL SIMULATION WITH
Pascal.SIM
Robert M. O'Keefe
Department of Computer Science
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061, U.S.A.
Ruth M. Davies
Department of Mathematical Sciences and Computing
South Bank Polytechnic
ABSTRACT
Pascal.SIM is a collection of Pascal constants, types, variables, functions and procedures for developing event, activity, three-phase or process oriented discrete-event simulation models. Facilities are provided for queue processing, time advance and event list maintenance, control of entities and resources, random number generation and streams, sampling from parametric and empirical distributions, statistics collection, and visual displays. Pascal.SIM has been designed as a minimal simulation tool. It includes less than 50 functions and procedures, and totals less than 800 lines of code. It is a basis for programming simulations in Pascal, where users can alter or extend the facilities provided, rather than a simulation programming language. The majority of Pascal.SIM conforms to the ISO Pascal standard, enabling high portability to be achieved. It can be used immediately with any Pascal that uses the string type descended from UCSD Pascal, for instance Pro Pascal, Turbo Pascal or Sheffield PRIME Pascal. Alteration of a few lines allows for use with any Pascal that provides a different string type, for instance VAX/VMS Pascal. This paper gives a tutorial presentation of Pascal.SIM, with emphasis on the facilities for visual displays.
1. INTRODUCTION
Recent years have seen some resurgence in interest in the use of general purpose programming languages as vehicles for simulation programs. In part this has been due to the increasing availability of good implementations for strongly-typed block structured languages such as Pascal, Ada, and Modula-2. Further, the use of microcomputers has accelerated such interest, since many Simulation Programming Languages and packages are too large for use on microcomputers, or when available on microcomputers, are highly inefficient.
To facilitate simulation on microcomputers, in 1982 the authors produced a system for programming discrete simulations in Pascal on the Apple II. Called AIMS (O'Keefe, 1983; O'Keefe and Davies, 1986a), it was composed of seven UCSD Pascal library units and a number of associated utilities which the model developer used to construct a simulation as a UCSD Pascal program. At about this time, other Pascal based systems were in development, including PASSIM (Uyeno and Vaessen, 1980), based on GPSS, and SIMPAS (Bryant, 1980), a SIMSCRIPT-like language which is pre-processed into Pascal.
AIMS had a number of useful and sophisticated features, in addition to the more basic ones such as queue processing and sampling from parametric distributions. One of the units provided facilities for iconic visual displays (this was programmed in assembler, and made direct use of the Apple II's graphic functions in ROM), and another provided for continuous display of time series. Associated utilities included an editor for distribution data, which allowed for the formation of empirical distributions, and a shape editor, where an icon could be defined for future use in visual displays.
AIMS died with the relative demise of the Apple II and the shift to MS-DOS. However, much of it was reprogrammed entirely in Pascal to provide a highly portable discrete simulation system, and was rechristened Pascal.SIM. Various versions of Pascal.SIM have been in use in both education and industry for over 3 years.
2. THE DESIGN OF Pascal.SIM
The philosophy of Pascal.SIM is to provide the basics of a Simulation Programming Language, and little more. Those using Pascal.SIM can then add the facilities they need and change the underlying structure if they wish. A further aim is to provide a means whereby students could learn to write simulations in a familiar language using facilities that are well documented and easy to understand. Visual Interactive Simulation and animation has been very successful (Bell, 1985); therefore some facilities for iconic visual displays have been included.
AIMS enforced the three-phase world view, as first proposed by Tocher, where a simulation is perceived as a number of bound or scheduled events, plus a number of conditional events which are scanned. Pascal.SIM can be used to program a three-phase or a two-phase simulation (i.e. pure event scheduling or activity scanning), or, using an additional version of the executive, a simple process description simulation.
3. THE STRUCTURE OF A Pascal.SIM PROGRAM
3.1. The Three-Phase Method
The recommended structure of a three-phase orientated Pascal.SIM program is shown in Figure 1. At the heart of the simulation is the executive, the procedure run, which contains the time flow mechanism. Although the structure of this is provided, the user must enter the names of all events into a
case statement in this procedure. The number of conditional events (max.C) and the time duration (duration) are both arguments. The user must code the events and the procedures initialize and report; for a visual display, display and picture must also be coded. initialize and picture are called once before run; they initialize the simulation and the static picture respectively. report should be called after run, and should contain any end-of-run reporting, for example final statistics prints. display is called after every advance of the clock; it should be used to update the visual display as necessary.
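The executive itself can be pictured schematically as follows. This is an illustrative Python rendering of the three-phase loop, not the Pascal.SIM run procedure; the data structures (a sorted calendar list, a dictionary of bound events) are assumptions.

```python
def run(duration, bound_events, conditional_events, calendar, display):
    """Schematic three-phase executive (illustrative only).
    calendar: list of (time, event_code, entity) tuples kept sorted by time;
    bound_events: dict mapping event codes to procedures;
    conditional_events: list of parameterless functions returning True if they fired."""
    tim = 0.0
    while calendar and calendar[0][0] <= duration:
        tim = calendar[0][0]                         # A-phase: advance the clock
        while calendar and calendar[0][0] == tim:    # B-phase: execute due bound events
            _, code, entity = calendar.pop(0)
            bound_events[code](entity)
        fired = True                                 # C-phase: scan conditional events
        while fired:
            fired = False
            for c in conditional_events:
                if c():
                    fired = True
        display(tim)                                 # update the visual display
    return tim
```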
3.2. The Two Phase Methods
The three-phase executive can be used for the two-phase event scheduling approach, by setting \( \max.C \) to zero and incorporating all the conditional event logic into the scheduled events. Similarly, a two-phase activity scanning approach can be used by incorporating all scheduled events into the model as conditional events.
3.3. The Process View
A separate executive has been written for this approach. Whilst not process interaction, in that processes can not signal each other, descriptions of independent processes are possible. This is sometimes referred to as process description. Further, both servers and transactions can have process descriptions - thus the approach is conceptually closer to process interaction than GPSS. Each process is written as a separate procedure; the user must enter the names of all processes into the executive \( \text{run} \).
3.4. The Entity
The basis of Pascal.SIM is an entity type, which is a Pascal record thus:
```
entity = ^an_entity;
an_entity = packed record
  avail: boolean;
  class: class_num;
  col: colour;
  attr, next_B: cardinal;
  time: real;
end;
```
where the fields of the record represent:
- **avail:** The availability of the entity
- **class:** The number of the entity's class
- **col:** The colour of the entity
- **attr:** The entity's attribute number
- **next_B:** The next bound event or block that the entity will enter
- **time:** The time at which this will occur
An entity is always either available, entered in the calendar of future events, or is being used by another entity. If entered in the calendar, avail is false, and next_B and time will be set to appropriate values. Thus there are no explicit event notices in Pascal.SIM, because an entity contains all relevant event information.
```
{ Bound events }
procedure B1;
procedure B2;
:
:
{ Conditional events }
procedure C1;
procedure C2;
:
:
procedure display;
procedure run(duration: real; \( \max.C \) : cardinal);
procedure initialize;
procedure picture;
procedure report;
begin
initialize;
picture;
:
run(..., ...);
:
report;
:
end.
```
Figure 1: The structure of a three-phase Pascal.SIM program
Entities are generated with the function new.entity, and can be disposed of with the procedure dis.entity.
Access to entities is achieved through the global variable \( \text{current} \), which always points to the entity that has caused the present event, or else by searching queues of entities.
The attribute number \( \text{attr} \) uniquely identifies each entity. If further attributes are required they can either be added to entities by using the attribute number to access another data structure, or else by adding in new fields to the entity record and recompiling Pascal.SIM.
In complex models, the developer would establish classes, where a class is a list of entities, and both the class and each entity may have attributes. For visual displays, the developer must enter classes into a class table, which holds information on the letter and colour used to represent an entity in the display.
3.5. Resources
A resource type, with associated routines, is provided to model passive entities which only serve. Resources are collected into a bin - in effect a bin is identical to the \( \text{STORAGE} \) of GPSS; a bin with only one resource is identical to a \( \text{FACILITY} \). Resources are said to be acquired and released by entities.
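A minimal sketch of the bin idea is shown below (in Python, with hypothetical method names; release stands in for return, which is a reserved word in Python).

```python
class Bin:
    """Sketch of a Pascal.SIM-style bin: a pool of identical passive resources."""
    def __init__(self, number):
        self.number = number          # total number of resources in the bin
        self.num_avail = number       # resources currently available

    def acquire(self, n=1):
        if self.num_avail < n:
            raise RuntimeError("not enough resources available")
        self.num_avail -= n

    def release(self, n=1):
        self.num_avail = min(self.number, self.num_avail + n)

beds = Bin(4)       # a bin with only one resource would behave like a GPSS FACILITY
beds.acquire()      # a patient takes a bed
beds.release()      # the bed is given back on discharge
```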
3.6. Functions and Procedures
The provided functions and procedures of Pascal.SIM are grouped into the following 11 groups:

- queue processing
- entities and classes
- timing and the executive
- facilities for process description
- resources
- error messages
- random number generation and streams
- sampling distributions
- histograms
- screen control
- visual displays

The interface of Pascal.SIM, i.e. all constants, types, global variables and function and procedure heads, is shown in Appendix A. A certain excess redundancy is present in the four routines (give_top, give_tail, take_top, take_tail) which allow entities to be added to and removed from the top and tail of a queue. Use of these allows students to develop First In First Out queueing models without having to explicitly dereference pointers.
4. AN EXAMPLE - ADMISSION TO HOSPITAL
The example to demonstrate how to program using Pascal.SIM is a hospital simulation, shown as an activity diagram in Figure 2. Two types of patients are admitted to hospital. Those not admitted for an operation undergo a short stay, and then return home. Patients admitted for operation undergo a pre-operative stay, an operation (which requires an open and available operating theatre), followed by a post-operative stay and discharge. Such a simulation is somewhat simplistic, but might be used to investigate various policies regarding bed and operating theatre provision.
Appendix B shows a Pascal.SIM three-phase orientated program for a visual simulation. An example visual display is shown in Figure 3. In the initialize procedure, the simulation and random number streams are initialized (via make_sim and make_streams), a bin called bed with 4 resources is created using make.bin, and the queues q1,q2,q3 and q4 are initialized using make_queue. The operating theatre is created, and scheduled to close in 8 hours. Note that cause is the scheduling procedure; the first parameter indicates the bound event that will be entered. This has to be specified as an integer, since Pascal does not allow procedure names to be passed as parameters and stored for future calling. Case statements in the executive relate these numbers to procedures calls.
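For comparison, the same dispatch can be written directly in a language with first-class functions; the following Python fragment is illustrative only and simply mirrors the role of the case statement.

```python
# Illustrative dispatch table standing in for the case statement in `run`.
# Event numbers follow the hospital example (B1, B2, B3, ...); bodies are stubs.
def patient1_arrives(entity):      pass   # B1
def patient2_arrives(entity):      pass   # B2
def end_hospital_stay(entity):     pass   # B3

bound_events = {1: patient1_arrives, 2: patient2_arrives, 3: end_hospital_stay}

def execute_bound_event(code, entity):
    # What cause(code, entity, delay) ultimately triggers when the clock reaches
    # the scheduled time: look the code up and call the corresponding procedure.
    bound_events[code](entity)
```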
4.1. Programming the Visual Display using the Three-Phase Approach
The visual display is composed of two parts - a static background picture which is written to the console once prior to the simulation run, and a dynamic display which moves over the static picture. The dynamic display can be updated either within an event, bound or conditional, or following a time beat (where one or more events will have been executed) in the procedure display.
Even when using text to program simple iconic visual displays, a minimum amount of screen control is essential. Cursor addressing must be possible; for colour displays the ability to set both foreground and background colour is necessary. Many terminals provide both of these, and thus the visual display routines are highly portable.
A static picture is created in the procedure picture. Entity classes 1 and 2 (respectively hospital stay only and operation patients) are entered in the class_table with letters 's' and 'o'. Both will appear blue, unless the field col in the entity record has been set to a colour - this overrides the class_table entry. To provide a background, blocks coloured magenta are entered in the display, and some simple annotation is provided using the gotoxy procedure in Pascal.SIM and the standard Pascal procedure write.
The procedure display provides for updating of the dynamic display after a time beat. The number of beds in use (bed.number-bed.num_avail) is written. At this point, the display is completely up to date. The simulation is then delayed relative to the time before the new time beat (tim-old.tim). If this is not done, the display advances too quickly for comfortable viewing. The new clock time tim is then written to the display, and the simulation (and thus the part of the picture generated within events) can continue.
Most of the visual display statements are embedded in the events. For instance, when a hospital stay only patient arrives, the following occurs (see procedure patient1.arrives) :-
- put patient on a queue for a bed
- show the arrival of the patient by horizontal movement
- display the hospital stay only queue for beds
- cause the arrival of a new hospital stay only patient
This means that the developer of a visual simulation must introduce dummy queues at various points in a process so that the queues can be written to the picture. (This is analogous to having to use dummy queues to collect statistics in GPSS.) Those interested in process description models in Pascal.SIM should refer to O’Keefe and Davies (1986b), which includes a process version of the hospital example.
6. PORTABILITY AND IMPLEMENTATION
Considerable portability is achieved by close adherence to the ISO Pascal standard. Only two non-standard Pascal facilities are used - the use of an underscore in names (which can easily be edited out), and the use of a string type. However, most Pascal implementations provide a string type, and Pascal.SIM can be implemented without change under any Pascal that uses the string type and associated functions descended from UCSD Pascal. Examples include Turbo Pascal, Pro Pascal, and the Pascal compiler for PRIME systems produced at the University of Sheffield in England. If a string type is defined differently, or different functions are provided, a few alterations are necessary. For instance, in VAX/VMS Pascal, the string type is varying array of char rather than string, and strings are concatenated directly using the addition operator rather than a concat function. If strict ISO Pascal is followed, and a packed array of char has to be used, then only one procedure is unusable. This is print.histogram, which prints histograms to a text file.
Pascal.SIM is normally implemented by some method of prior compilation. Methods include adding the functions and procedures to a library and the variables to a common area (this is the method of implementation in Pro Pascal), production of a unit or module, containing all of Pascal.SIM, that is then put in a library (for instance, UCSD Pascal), or by a similar method (for instance, implementation in VAX/VMS Pascal is achieved by production of an environment file for the constants, types and variables, and an associated module for the functions and procedures). Thus the facilities are available to any Pascal program by simple reference to the library, unit or whatever. Pascal.SIM has been used extensively with Turbo Pascal, which provides no facilities for separate compilation. Here the programmer must recompile Pascal.SIM with the simulation.
To implement Pascal.SIM, it is necessary to set up the screen control codes within a number of procedures. Many terminals can be made to accept ANSI screen control codes (for instance, IBM-PC monitors) or use an extension of ANSI (for instance, DEC VT100 and VT240). Thus ANSI screen control (and the extended ANSI descended from Tektronix for colour text) is frequently sufficient. Additional copies of some screen control and visual display routines are provided for use with Turbo Pascal, which call the screen control routines built into Turbo Pascal.
Two random number generators are provided - one for 32-bit integer machines, one for 16-bit integer machines. These are respectively the linear congruence generators
\[ Z_{i+1} := (Z_i \times 16807) \mod 2147483647 \]
and
\[ Z_{i+1} := (Z_i \times 3993 + 1) \mod 32767. \]
The 16-bit version is an implementation of a generator suggested and tested by Teshen, Sun and Wang (1984), which assumes that detection of integer overflow has been disabled. (Incidentally, the authors have found the built in mathematical functions of Turbo Pascal, for instance, exp and sin, to be poor. Hence distribution sampling methods that employ these, for instance the Box-Muller method for normal variates, provide relatively poor sets of samples, with too few samples from the tail of the distribution.)
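For reference, the 32-bit generator above transcribes directly; the following Python fragment is illustrative and is not part of the Pascal.SIM source.

```python
def lehmer(seed=12345):
    """Z_{i+1} = (Z_i * 16807) mod 2147483647, scaled to uniforms in (0, 1)."""
    z = seed
    while True:
        z = (z * 16807) % 2147483647
        yield z / 2147483647.0

gen = lehmer()
samples = [next(gen) for _ in range(3)]   # e.g. three U(0,1) variates
```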
7. CONCLUSIONS
The authors have mainly used Pascal SIM with Turbo Pascal and VAX/VMS Pascal. Pascal SIM, Turbo Pascal, a colour monitor, and an IBM-PC/XT or AT allow visual simulations of reasonable display quality to be developed and run. For statistical experimentation, the model can then be ported to a VAX, and the Pascal SIM statements relating to the visual display replaced by statements for statistics collection using histograms (an area of Pascal SIM that has not been covered in this paper).
Programming visual simulations can be time consuming, and typically in the hospital example there are more programmed statements relating to the display than to the logic of the simulation. This is true for other programming language orientated visual simulation systems, for example SEE-WHY (Fiddy, Bright and Hurrion, 1981).
The authors have found the three-phase world view the best for visual simulation. The three-phase method allows the picture to be updated after time dependent changes (bound events), state changes (conditional events), or time beats as appropriate. However, having the range of world views in one package, including two-phase, three-phase and process description views, is very useful for teaching. Students can program models using a number of views, and thus obtain a better understanding of frameworks for simulation model building than when using one approach.
The value of producing a Pascal based simulation tool may be considered questionable, given the recent emphasis on the entire process of model development (Nance, 1984), and the promise of Artificial Intelligence (O'Keefe, 1986). Many simulations are, however, still programmed in FORTRAN (Christy and Watson, 1983). Increasingly students of science and engineering subjects are learning Pascal as their main programming language. They will undoubtedly want to write simulations in Pascal. Pascal SIM provides a structure and the facilities to do this.
Acknowledgements
Many of the ideas in Pascal SIM can be traced back to a Pascal based system produced by John Crookes at the University of Lancaster, England.
This paper was completed whilst the first author was on leave from the Board of Studies in Management Science, University of Kent at Canterbury, England.
Pascal SIM is available on an IBM-PC disc for a nominal fee. It can be obtained from either of the authors or Decision Computing, 1 Worthgate Place, Canterbury, England. However, swift response to any request for Pascal SIM is not guaranteed! Please write - do not phone.
The following are trademarks:
- Pro Pascal: Prospero Software Limited
- Turbo Pascal: Borland International
- VAX/VMS: DEC
- IBM-PC: IBM
- UCSD Pascal: Regents of the University of California
- MS-DOS: Microsoft
- Ada: United States Department of Defence
REFERENCES
APPENDIX A: Pascal SIM FACILITIES
```plaintext
const max.cell.num=16;
max.stream.num=52;
max.class.num=256;
max.sample.num=20;
```
max_string_length=80;
delay_num=2000;
type a.string=string[max_string_length];
cardinal=0..maxint;
colour=(null,black,red,green,yellow,blue,magenta,cyan,white);
stream_num=1..max_stream_num;
cell_num=0..max.cell_num;
class_num=1..max.class_num;
sample_num=1..max.sample_num;
string_length=1..max_string_length;
entity="an.entity;"
link="a.link;"
a.link=record
next,pre:link;
item:entity;
end;
queue=link;
an.entity=packed record
avail:boolean;
class:class_num;
col:colour;
attr,next,B:cardinal;
time:real;
end;
bin=record
number, num.avail:cardinal;
end;
histogram=record
cell:array[cell_num] of real;
count, width, base, total, eosq, min, max:real;
end;
lookup_table=table[1..max.sample_num,1..2] of real;
var
tim:real;
current:entity;
calendar:queue;
on_calendar:boolean;
suspended.chain:queue;
running:boolean;
original.seeds:seeds:array [stream_num] of cardinal;
class.table:array [class_num] of
record
let:char;col:colour;
end;
{ queue processing }
procedure make_queue(var q:queue);
procedure give(var q:queue; e:entity);
function take(var q:queue):entity;
procedure give_top(var q:queue; e:entity);
procedure give_tail(var q:queue; e:entity);
function take_top(var q:queue):entity;
function take_tail(var q:queue):entity;
function empty(q:queue):boolean;
{ entities and classes }
function new_entity(class,num:cardinal):entity;
procedure del_entity(e:entity);
procedure make_class(var c:queue; size:cardinal);
function count(var q:queue):cardinal;
{ timing and the executive }
procedure make_sim;
procedure cause(b:cardinal; e:entity; t:real);
procedure calendar_top;
{ facilities for process executive }
procedure branch(next:cardinal);
procedure remove_entity;
{ resources }
procedure make_bin(var b:bin; num:cardinal);
procedure acquire(var b:bin; num:cardinal);
procedure return(var b:bin; num:cardinal);
{ error messages }
procedure sim_error(a:cardinal);
{ random number generator and streams }
procedure make_streams;
function rnd(s:stream_num):real;
{ sampling distributions }
function normal(mean,sd:real; s:stream_num):real;
function log_normal(mean,sd:real; s:stream_num):real;
function poisson(mean:real; s:stream_num):real;
function negexp(mean:real; s:stream_num):real;
function uniform(lower,upper:real; s:stream_num):real;
procedure make_sample(var sample_file:text; var table:lookup_table);
function sample(table:lookup_table; s:stream_num):real;
{ histograms }
procedure reset_histogram(var h:histogram);
procedure make_histogram(var h:histogram; cell_base,cell_width:real);
procedure print_histogram(var pr:text; h:histogram; state:boolean; plen:cardinal);
procedure log_histogram(var h:histogram; where,what:real);
{ screen control }
procedure make_screen;
procedure gotoxy(x,y:cardinal);
procedure clear_screen;
procedure set_foreground(c:colour);
procedure set_background(c:colour);
procedure reset_colours;
{ visual displays }
procedure delay;
procedure make_class_table;
procedure enter_class(n:cardinal; ch:char; c:colour);
procedure write_entity(x,y:cardinal; e:entity);
procedure write_queue(x,y:cardinal; bcolour:colour; q:queue; max_length:cardinal);
procedure write_block(x1,y1,x2,y2:cardinal; bcolour:colour);
procedure move_v(x,y1,y2:cardinal; e:entity; bcolour:colour);
procedure move_h(y,x1,x2:cardinal; e:entity; bcolour:colour);
procedure write_time;
{ user written routines }
procedure display;
procedure initialize;
procedure picture;
procedure report;
{ simulation executive }
procedure run(duration:real; max_C:cardinal);
APPENDIX B: THE HOSPITAL EXAMPLE
program example;
var
bed:bin;
q1,q2,q3,q4:queue;
theatre:entity;
theatre_open, theatre_available:boolean;
{ true if theatre is open and available }
old_tim:real;
procedure patient1_arrives; { stay } { B1 }
begin
give_tail(q1,current);
move_h(12,2,10,current,white);
write_queue(22,12,white,q1,10);
cause(1,new_entity(1,1),uniform(60,140,1));
end;
procedure patient2_arrives; { operation } { B2 }
begin
give_tail(q2,current);
move_h(14,2,10,current,white);
write_queue(22,14,white,q2,20);
cause(2,new_entity(2,1),uniform(24,48,2));
end;
procedure end_hospital_stay; { B3 }
begin
return(bed,1);
move_h(12,40,70,current,white);
del_entity(current);
end;
procedure end_pre_operative_stay; { B4 }
begin
current^.col:=yellow;
give_tail(q3,current);
move_v(30,14,20,current,white);
move_h(20,30,50,current,white);
write_queue(60,20,white,q3,30);
end;
procedure end_operation; { B5 }
begin
theatre_available:=true;
gotoxy(63,21);write('.');
move_v(30,4,10,current,white);
give_tail(q4,current);
end;
procedure end_post_operative_stay; { B6 }
begin
return(bed,1);
move_h(12,40,70,current,white);
del_entity(current);
end;
procedure open_theatre; { B7 }
begin
theatre_open:=true;
gotoxy(63,20);write('OPEN ');
cause(8,current,8);
end;
procedure close_theatre; { B8 }
begin
theatre_open:=false;
gotoxy(63,20);write('CLOSED');
cause(7,current,40);
end;
procedure start_hospital_stay; { C1 }
begin
while (bed.num_avail>0)
and (not empty(q1)) do
begin
acquire(bed,1);
cause(3, take_top(q1),uniform(20,40,3));
write_queue(22,12,white,q1,20);
end;
end;
procedure start_pre_operative_stay; { C2 }
begin
while (bed.num_avail>0)
and (not empty(q2)) do
begin
acquire(bed,1);
cause(4, take_top(q2),uniform(8,15,4));
write_queue(22,14,white,q2,20);
end;
end;
procedure start_operation; { C3 }
begin
while theatre_open and theatre_available
and (not empty(q3)) do
begin
theatre_available:=false;
cause(5, take_top(q3), 1);
gotoxy(63, 21); write('IN USE');
write_queue(60, 20, white, q3, 30);
end;
end;
procedure start_post_operative_stay; { C4 }
begin
while not empty(q4) do
begin
cause(6, take_top(q4), uniform(5, 10, 6));
end;
end;
procedure display;
var i:cardinal;
begin
gotoxy(30, 12); write(bed.number-bed.num_avail:1);
delay; delay;
for i:=1 to trunc((tim-old_tim)/2) do delay;
old_tim:=tim;
gotoxy(1, 1); writeln(tim:7:2);
gotoxy(1, 1);
end { display };
procedure run(duration:real; max_C:cardinal);
var c:cardinal;
begin
running:=true;
repeat
if calendar=calendar^.next then running:=false
else begin
display;
tim:=calendar^.next^.item^.time;
if duration<tim then running:=false
else begin
{ B phase: execute every event due at time tim }
while (calendar<>calendar^.next) and (tim>=calendar^.next^.item^.time) do
begin
calendar_top;
case current^.B of
1:patient1_arrives;
2:patient2_arrives;
3:end_hospital_stay;
4:end_pre_operative_stay;
5:end_operation;
6:end_post_operative_stay;
7:open_theatre;
8:close_theatre;
end;
end;
{ C phase: attempt each conditional activity }
for c:=1 to max_C do
case c of
1:start_hospital_stay;
2:start_pre_operative_stay;
3:start_operation;
4:start_post_operative_stay;
end;
end;
end;
until not running;
end { run };
procedure initialize;
begin
make_sim;
make_streams;
make_bin(bed, 4);
make_queue(q1); make_queue(q2);
make_queue(q3); make_queue(q4);
{ create theatre }
theatre:=new_entity(3, 1);
theatre_open:=true;
theatre_available:=true;
cause(5, theatre, 6);
end { initialize };
procedure picture;
var i:cardinal;
begin
enter_class(1, 'm', blue);
enter_class(2, 'o', blue);
clear_screen;
write_block(28, 10, 32, 14, magenta);
write_block(60, 18, 70, 23, magenta);
set_foreground(yellow);
gotoxy(4, 11); write('Hospital only');
gotoxy(4, 18); write('Operation');
gotoxy(32, 8); write('Bed in use');
gotoxy(60, 15); write('Operating');
gotoxy(60, 16); write('Theatre');
reset_colours;
end { picture };
procedure report;
begin
end { report };
begin
initialize;
picture;
cause(1, new_entity(1, 1), 0);
cause(2, new_entity(2, 1), 0);
old_tim:=0;
run(24*30+12, 4);
report;
reset_colours;
end.
AUTHOR'S BIOGRAPHIES
ROBERT M. O'KEEFE is a visiting assistant professor in the Department of Computer Science at Virginia Tech, on leave from the Board of Studies in Management Science at the University of Kent at Canterbury, England. He received a B.Sc. in Computer Studies and Operational Research from the University of Lancaster in 1979, and a Ph.D. in Operational Research from the University of Southampton in 1984. Major research interests include Artificial Intelligence and simulation, Visual Interactive Simulation, and the application of expert systems. He is a member of SCS, TIMS, ORS, AAAI and BCS, and a Director of Decision Computing Limited.
Robert M. O'Keefe
Department of Computer Science
Virginia Polytechnic Institute and State University
Blacksburg, VA 24061, U.S.A.
(703) 961-6075
Permanent address: Rutherford College
University of Kent at Canterbury
Canterbury, Kent CT2 7NX, England.
RUTH M. DAVIES has been working on the application of statistics, Operational Research and computing to problems in Health Care for a number of years. A continuing major research interest is the provision of care to patients with end-stage renal failure. She received a B.Sc. in Mathematics from the University of Warwick, and a Ph.D. in Operational Research from the University of Southampton in 1984. Presently a lecturer in Operational Research in the Department of Mathematical Sciences and Computing at the South Bank Polytechnic, London, England, she has also held research positions at the Universities of Reading and Southampton.
Ruth M. Davies
Department of Mathematical Sciences and Computing
South Bank Polytechnic
Borough Road
CSE 331
Software Design & Implementation
Hal Perkins
Autumn 2012
Design Patterns I
(Slides by Mike Ernst and David Notkin)
Outline
• Introduction to design patterns
• Creational patterns (constructing objects)
• Structural patterns (controlling heap layout)
• Behavioral patterns (affecting object semantics)
What is a design pattern?
• A standard solution to a common programming problem
– a design or implementation structure that achieves a particular purpose
– a high-level programming idiom
• A technique for making code more flexible
– reduce coupling among program components
• Shorthand for describing program design
– a description of connections among program components (static structure)
– the shape of a heap snapshot or object model (dynamic structure)
A few simple examples....
Example 1: Encapsulation (data hiding)
- **Problem:** Exposed fields can be directly manipulated
- Violations of the representation invariant
- Dependences prevent changing the implementation
- **Solution:** Hide some components
- Permit only stylized access to the object
- **Disadvantages:**
- Interface may not (efficiently) provide all desired operations
- Indirection may reduce performance
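A small Java sketch of the idea (the `Balance` class and its invariant are illustrative, not from the lecture): the field is private, so clients can only use the stylized accessors and cannot break the representation invariant.

```java
// Illustrative only: hiding the field protects the invariant "amount >= 0".
public class Balance {
    private long amount;                 // an exposed public field could be set negative

    public long get() { return amount; }

    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        amount += cents;
    }
}
```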
Example 2: Subclassing (inheritance)
- **Problem:** Repetition in implementations
- Similar abstractions have similar components (fields, methods)
- **Solution:** Inherit default members from a superclass
- Select an implementation via run-time dispatching
- **Disadvantages:**
- Code for a class is spread out, and thus less understandable
- Run-time dispatching introduces overhead
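A small Java sketch (the `Shape`/`Circle` names are illustrative): shared behavior is inherited from the superclass, and `area()` is selected by run-time dispatch.

```java
// Illustrative only: common code lives in the superclass, dispatch picks the subclass.
abstract class Shape {
    String describe() {                  // inherited default member
        return getClass().getSimpleName() + " with area " + area();
    }
    abstract double area();              // implemented by each subclass
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}
```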
Example 3: Iteration
- **Problem:** To access all members of a collection, must perform a specialized traversal for each data structure
- Introduces undesirable dependences
- Does not generalize to other collections
- **Solution:**
- The implementation performs traversals, does bookkeeping
- The implementation has knowledge about the representation
- Results are communicated to clients via a standard interface (e.g., `hasNext()`, `next()`)
- **Disadvantages:**
- Iteration order is fixed by the implementation and not under the control of the client
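A short Java sketch of the standard interface mentioned above: the client traverses via `hasNext()`/`next()` and never touches the collection's representation.

```java
import java.util.Iterator;
import java.util.List;

class IterationDemo {
    // The client sees only the Iterator interface, not the collection's internals.
    static int sum(List<Integer> xs) {
        int total = 0;
        Iterator<Integer> it = xs.iterator();
        while (it.hasNext()) {
            total += it.next();
        }
        return total;
    }
}
```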
Example 4: Exceptions
• Problem:
– Errors in one part of the code should be handled elsewhere.
– Code should not be cluttered with error-handling code.
– Return values should not be preempted by error codes.
• Solution: Language structures for throwing and catching exceptions
• Disadvantages:
– Code may still be cluttered.
– It may be hard to know where an exception will be handled.
– Use of exceptions for normal control flow may be confusing and inefficient.
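A short Java sketch (the `parsePort` example is illustrative): the error is raised where it is detected, handled elsewhere, and never preempts the return value.

```java
// Illustrative only: error detection and error handling live in different places.
class Config {
    static int parsePort(String s) {
        int port = Integer.parseInt(s);                 // may throw NumberFormatException
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;                                    // normal path returns a real value
    }

    public static void main(String[] args) {
        try {
            System.out.println(parsePort("99999"));
        } catch (RuntimeException e) {                  // handled far from where it was thrown
            System.err.println("bad configuration: " + e.getMessage());
        }
    }
}
```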
Example 5: Generics
• Problem:
– Well-designed data structures hold one type of object
• Solution:
– Programming language checks for errors in contents
– `List<Date>` instead of just `List`
• Disadvantages:
– More verbose types
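A short Java sketch of the `List<Date>` example: the compiler checks the element type, so no casts and no runtime surprises.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

class GenericsDemo {
    public static void main(String[] args) {
        List<Date> dates = new ArrayList<>();   // contents are checked at compile time
        dates.add(new Date());
        // dates.add("tomorrow");               // would not compile: wrong element type
        Date first = dates.get(0);              // no cast needed, unlike a raw List
        System.out.println(first);
    }
}
```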
Why design patterns?
• Advanced programming languages like Java provide lots of powerful constructs – subtyping, interfaces, rich types and libraries, etc.
• By the nature of programming languages, they can’t make everything easy to solve
• To the first order, design patterns are intended to overcome common problems that arise in even advanced object-oriented programming languages
• They increase your vocabulary and your intellectual toolset
When (not) to use design patterns
• Rule 1: delay
– Get something basic working first
– Improve it once you understand it
• Design patterns can increase or decrease understandability
– Add indirection, increase code size
– Improve modularity, separate concerns, ease description
• If your design or implementation has a problem, consider design patterns that address that problem
Why should you care?
• You could come up with these solutions on your own
– You shouldn't have to!
• A design pattern is a known solution to a known problem
Whence design patterns?
- The Gang of Four (GoF) – Gamma, Helm, Johnson, Vlissides
- Each an aggressive and thoughtful programmer
- Empiricists, not theoreticians
- Found they shared a number of “tricks” and decided to codify them – a key rule was that nothing could become a pattern unless they could identify at least three real examples
Patterns vs. patterns
- The phrase “pattern” has been wildly overused since the GoF patterns have been introduced.
- “pattern” has become a synonym for “[somebody says] X is a good way to write programs.”
- And “anti-pattern” has become a synonym for “[somebody says] Y is a bad way to write programs.”
- A graduate student recently studied so-called “security patterns” and found that very few of them were really GoF-style patterns.
- GoF-style patterns have richness, history, language-independence, documentation and thus (most likely) far more staying power.
An example of a GoF pattern
• Given a class C, what if you want to guarantee that there is precisely one instance of C in your program? And you want that instance globally available?
• First, why might you want this?
• Second, how might you achieve this?
Possible reasons for Singleton
- One `RandomNumber` generator
- One graph model object
- One `KeyboardReader`, etc…
- Make it easier to ensure some key invariants
- Make it easier to control when that single instance is created – can be important for large objects
- …
Several solutions
class Singleton {
private static final Singleton instance = new Singleton();
// Private constructor prevents instantiation from other classes
private Singleton() { }
public static Singleton getInstance() {
return instance;
}
}
class Singleton {
private static Singleton instance;
private Singleton() { }
public static synchronized Singleton getInstance() {
if (instance == null) {
instance = new Singleton();
}
return instance;
}
}
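A brief usage sketch for either implementation above: clients go through `getInstance()` and always receive the same object.

```java
class SingletonDemo {
    public static void main(String[] args) {
        // The private constructor is unreachable; both calls return the same instance.
        Singleton a = Singleton.getInstance();
        Singleton b = Singleton.getInstance();
        System.out.println(a == b);   // true: exactly one instance exists
    }
}
```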
GoF patterns: three categories
• **Creational Patterns** – these abstract the object-instantiation process
– Factory Method, Abstract Factory, **Singleton**, Builder, Prototype, …
• **Structural Patterns** – these abstract how objects/classes can be combined
– Adapter, Bridge, **Composite**, Decorator, Façade, Flyweight, Proxy, …
• **Behavioral Patterns** – these abstract communication between objects
– Command, Interpreter, **Iterator**, Mediator, **Observer**, State, Strategy, Chain of Responsibility, Visitor, Template Method, …
• Bold = ones we’ve seen already
Creational patterns
• Constructors in Java are inflexible
– Can't return a subtype of the class they belong to
– Always return a fresh new object, never re-use one
• Problem: client desires control over object creation
• Factory method
– Hides decisions about object creation
– Implementation: put code in methods in client
• Factory object
– Bundles factory methods for a family of types
– Implementation: put code in a separate object
• Prototype
– Every object is a factory, can create more objects like itself
– Implementation: put code in clone methods
Motivation for factories: Changing implementations
- Supertypes support multiple implementations
- interface Matrix { ... }
- class SparseMatrix implements Matrix { ... }
- class DenseMatrix implements Matrix { ... }
- Clients use the supertype (Matrix)
- Still need to use a SparseMatrix or DenseMatrix constructor
- Switching implementations requires code changes
Use of factories
• Factory
class MatrixFactory {
public static Matrix createMatrix() {
return new SparseMatrix();
}
}
• Clients call createMatrix, not a particular constructor
• Advantages
– To switch the implementation, only change one place
– Can decide what type of matrix to create
Example: bicycle race
class Race {
// factory method for bicycle race
Race createRace() {
Bicycle bike1 = new Bicycle();
Bicycle bike2 = new Bicycle();
...
}
}
Example: Tour de France
class TourDeFrance extends Race {
// factory method
Race createRace() {
Bicycle bike1 = new RoadBicycle();
Bicycle bike2 = new RoadBicycle();
...
}
}
Example: Cyclocross
class Cyclocross extends Race {
// factory method
Race createRace() {
Bicycle bike1 = new MountainBicycle();
Bicycle bike2 = new MountainBicycle();
...
}
}
Factory method for Bicycle
```java
class Race {
Bicycle createBicycle() { ... }
Race createRace() {
Bicycle bike1 = createBicycle();
Bicycle bike2 = createBicycle();
...
}
}
```
- Use a factory method to avoid dependence on specific new kind of bicycle in `createRace()`
Code using Bicycle factory methods
class Race {
Bicycle createBicycle() { ... }
Race createRace() {
Bicycle bike1 = createBicycle();
Bicycle bike2 = createBicycle();
...
}
}
class TourDeFrance extends Race {
Bicycle createBicycle() {
return new RoadBicycle();
}
}
class Cyclocross extends Race {
Bicycle createBicycle() {
return new MountainBicycle();
}
}
Factory objects/classes
encapsulate factory methods
class BicycleFactory {
Bicycle createBicycle() {... }
Frame createFrame() {... }
Wheel createWheel() {... }
...
}
class RoadBicycleFactory extends BicycleFactory {
Bicycle createBicycle() {
return new RoadBicycle();
}
}
class MountainBicycleFactory extends BicycleFactory {
Bicycle createBicycle() {
return new MountainBicycle();
}
}
Using a factory object
class Race {
BicycleFactory bfactory;
// constructor
Race() { bfactory = new BicycleFactory(); }
Race createRace() {
Bicycle bike1 = bfactory.createBicycle();
Bicycle bike2 = bfactory.createBicycle();
...
}
}
class TourDeFrance extends Race {
// constructor
TourDeFrance() { bfactory = new RoadBicycleFactory(); }
}
class Cyclocross extends Race {
// constructor
Cyclocross() { bfactory = new MountainBicycleFactory(); }
}
Separate control over bicycles and races
class Race {
BicycleFactory bfactory;
// constructor
Race(BicycleFactory bfactory)
{
this.bfactory = bfactory;
}
Race createRace()
{
Bicycle bike1 = bfactory.createBicycle();
Bicycle bike2 = bfactory.createBicycle();
...
}
}
// No special constructor for TourDeFrance or
// for Cyclocross
Now we can specify the race and the bicycle separately:
new TourDeFrance(new TricycleFactory())
DateFormat factory methods
DateFormat class encapsulates knowledge about how to format dates and times as text
- Options: just date? just time? date+time? where in the world?
- Instead of passing all options to constructor, use factories.
- The subtype created doesn't need to be specified.
```java
DateFormat df1 = DateFormat.getDateInstance();
DateFormat df2 = DateFormat.getTimeInstance();
DateFormat df3 = DateFormat.getDateInstance(DateFormat.FULL, Locale.FRANCE);
Date today = new Date();
System.out.println(df1.format(today)); // "Jul 4, 1776"
System.out.println(df2.format(today)); // "10:15:00 AM"
System.out.println(df3.format(today)); // "jeudi 4 juillet 1776"
```
Prototype pattern
• Every object is itself a factory
• Each class contains a clone method that creates a copy of the receiver object
```java
class Bicycle {
Bicycle clone() { ... }
}
```
• Often, Object is the return type of clone
– clone is declared in Object
– Design flaw in Java 1.4 and earlier: the return type may not change covariantly in an overridden method
• i.e., return type could not be made more restrictive
• This is a problem for achieving true subtyping
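Since Java 5, an overriding method may narrow its return type, so `clone` can be declared to return `Bicycle` directly. A minimal sketch (using a copy constructor rather than `super.clone()`; the `frame` field is illustrative):

```java
class Bicycle implements Cloneable {
    private final String frame;
    Bicycle(String frame) { this.frame = frame; }
    Bicycle(Bicycle other) { this(other.frame); }   // copy constructor used by clone

    @Override
    public Bicycle clone() {                        // covariant return: Bicycle, not Object
        return new Bicycle(this);
    }
}
```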
Using prototypes
```java
class Race {
Bicycle bproto;
// constructor
Race(Bicycle bproto) { this.bproto = bproto; }
Race createRace() {
Bicycle bike1 = (Bicycle) bproto.clone();
Bicycle bike2 = (Bicycle) bproto.clone();
...
}
}
```
Again, we can specify the race and the bicycle separately:
```java
new TourDeFrance(new Tricycle())
```
Dependency injection
Change the factory without changing the code
With a regular in-code factory:
```java
BicycleFactory f = new TricycleFactory();
Race r = new TourDeFrance(f);
```
With external dependency injection:
```java
BicycleFactory f = ((BicycleFactory) DependencyManager.get("BicycleFactory"));
Race r = new TourDeFrance(f);
```
plus an external file:
```xml
<service-point id="BicycleFactory">
<invoke-factory>
<construct class="Bicycle">
<service>Tricycle</service>
</construct>
</invoke-factory>
</service-point>
```
+ Change the factory without recompiling
- Harder to understand
- Easier to make mistakes
Sharing
Recall the second weakness of Java constructors:
Java constructors always return a **new object**, never a pre-existing object.
- **Singleton**: only one object exists at runtime
- Factory method returns the same object every time (we’ve seen this already)
- **Interning**: only one object with a particular (abstract) value exists at runtime
- Factory method returns an existing object, not a new one
- **Flyweight**: separate intrinsic and extrinsic state, represent them separately, and intern the intrinsic state
- Implicit representation uses no space
Interning pattern
- Reuse existing objects instead of creating new ones
- Less space
- May compare with `==` instead of `equals()`
- Permitted only for immutable objects
Interner mechanism
- Maintain a collection of all objects
- If an object already appears, return that instead
```java
HashMap<String, String> segnames; // why not Set<String>?
String canonicalName(String n) {
if (segnames.containsKey(n)) {
return segnames.get(n);
} else {
segnames.put(n, n);
return n;
}
}
```
- Java builds this in for strings: `String.intern()`
- Two approaches:
- create the object, but perhaps discard it and return another
- check against the arguments before creating the new object
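A quick sketch of the built-in interner: literals are interned automatically, and `intern()` returns the canonical copy, so `==` works afterwards.

```java
class InternDemo {
    public static void main(String[] args) {
        String a = new String("wheel");        // a fresh object, never interned
        String b = "wheel";                    // literals are interned automatically
        System.out.println(a == b);            // false: two distinct objects
        System.out.println(a.intern() == b);   // true: intern() returns the canonical copy
    }
}
```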
java.lang.Boolean does not use the Interning pattern
```java
public class Boolean {
private final boolean value;
// construct a new Boolean value
public Boolean(boolean value) {
this.value = value;
}
public static Boolean FALSE = new Boolean(false);
public static Boolean TRUE = new Boolean(true);
// factory method that uses interning
public static Boolean valueOf(boolean value) {
if (value) {
return TRUE;
} else {
return FALSE;
}
}
}
```
• Javadoc for `Boolean` constructor:
– Allocates a `Boolean` object representing the value argument.
– **Note: It is rarely appropriate to use this constructor.** Unless a new instance is required, the static factory `valueOf(boolean)` is generally a better choice. It is likely to yield significantly better space and time performance.
• Josh Bloch (JavaWorld, January 4, 2004):
– The `Boolean` type should not have had public constructors. There's really no great advantage to allow multiple trues or multiple falses, and I've seen programs that produce millions of trues and millions of falses, creating needless work for the garbage collector.
– So, in the case of immutables, I think factory methods are great.
Flyweight pattern
• Good when many objects are mostly the same
– Interning works only if objects are entirely the same (and immutable!)
• Intrinsic state: same across all objects
– Technique: intern it (interning requires immutability)
• Extrinsic state: different for different objects
– Represent it explicitly
– Advanced technique: make it implicit (don’t even represent it!)
• Making it implicit requires immutability (or other properties)
Example without flyweight: bicycle spoke
class Wheel {
FullSpoke[] spokes;
...
}
class FullSpoke {
int length;
int diameter;
bool tapered;
Metal material;
float weight;
float threading;
bool crimped;
int location; // rim and hub holes this is installed in
}
Typically 32 or 36 spokes per wheel
but only 3 varieties per bicycle.
In a bike race, hundreds of spoke varieties, millions of instances
Alternatives to FullSpoke
class IntrinsicSpoke {
int length;
int diameter;
boolean tapered;
Metal material;
float weight;
float threading;
boolean crimped;
}
This doesn't save space: it's the same as FullSpoke
class InstalledSpokeFull extends IntrinsicSpoke {
int location;
}
This saves space
class InstalledSpokeWrapper {
IntrinsicSpoke s; // refer to interned object
int location;
}
... but flyweight version uses even less space
class FullSpoke {
// Tension the spoke by turning the nipple the
// specified number of turns.
void tighten(int turns) {
... location ... // location is a field
}
}
class Wheel {
FullSpoke[] spokes;
void align() {
while (wheel is misaligned) {
// tighten the i-th spoke
... spokes[i].tighten(numturns) ...
}
}
}
What is the value of the \textit{location} field in \texttt{spokes[i]}?
class IntrinsicSpoke {
void tighten(int turns, int location) {
... location ... // location is a parameter
}
}
class Wheel {
IntrinsicSpoke[] spokes;
void align() {
while (wheel is misaligned) {
// tighten the i-th spoke, which affects the wheel
... spokes[i].tighten(numturns, i) ...
}
}
}
Flyweight discussion
- What if `FullSpoke` contains a `wheel` field pointing at the `Wheel` containing it?
  - `Wheel` methods pass `this` to the methods that use the `wheel` field.
- What if `FullSpoke` contains a `boolean broken` field?
  - Add an array of `boolean`s in `Wheel`, parallel to the array of spokes.
- Flyweight is manageable only if there are very few mutable (extrinsic) fields.
- Flyweight complicates the code.
- Use flyweight only when profiling has determined that space is a *serious* problem.
The Impact of Open Source Software on an Educational Business Model
V. L. Plantamura¹, A. Marengo² and A. Pagano³
¹ University of Bari/Computer Science Department, Bari, Italy
² University of Bari/Computer Science Department, Bari, Italy
³ University of Bari/Computer Science Department, Bari, Italy
Abstract— The Free and Open Source Software (OSS) movement has had a phenomenal impact on the industry's evolution; in fact, most companies today make extensive use of Open Source software and technologies. The research communities are engaged in the study of open software-related development in order to highlight its advantages and disadvantages in terms of technology, reuse, and economic impact. Nowadays there are no methodological best practices that set standards for the cost/benefit analysis of this approach. After a deep analysis of the impact that the implementation of educational Open Source software has in workplaces, it is possible to develop a characterization model harmonizing the variables and critical factors used in business and market contexts, and to assess its effectiveness empirically. As stated in the literature, the OSS phenomenon has promoted research, and the educational field is a permanent research laboratory; when it is supported by computers (CSCL), the challenge is to find the right mix between didactic methodology and technological resources. The Open Source approach stimulates viral innovation. Although factors and variables that contribute to weighing this choice have been identified, a characterization model that can support this sensitive decision has not yet been defined (partly because there is not enough empirical data). A model that harmonizes the largest number of variables is needed in order to give valid aid to corporate management. The research team involved in this field is experimenting, on an empirical basis, with a characterization model for the selection of educational software tools for different needs. The case study presented is the OSEL project (Open Source e-Learning) experience, in which this model is applied and which brought successful results in terms of educational effectiveness and software capability.
Index Terms— e-learning, Education, Open Source, Software.
I. INTRODUCTION
Nowadays the tools to deliver e-Learning courses can facilitate distance-learning activities. In the open source field, although the existing LMSs (Learning Management Systems) have all the features required to deliver on-line courses (registration of students, management of training contents, evaluation of knowledge, etc.), they do not have an intelligent tutoring system that can help both the teacher and the student in the development of dynamic courses, starting from a set of learning goals.
The main topic is to analyze the features, standards, and structure to be used for the implementation of a Web Intelligent Agent (LIS – Learning Insight System) that can interact with existing open source LMSs, expanding their traditional features with the innovation of intelligent tutoring.
The aim of this research project is to develop an innovative approach in the learning field and evaluate the impact on the learners and teachers.
This project started from the necessity to create a flexible and integrated system, based on Open Source software, in which an Intelligent Web Agent will be integrated to manage the Learning Objects repository and the LCMS. This approach provides high scalability and versatility for the system and streamlines the upgrading process. It aims at meeting the changeable requirements of distance learning (yearly or even monthly innovation). The modular structure and flexibility provided by the portal developed make this project adaptable to any SCORM-compatible LCMS. Specifically, OSEL's repository is the integration of the learning repository software LeMill with the OSEL Taxonomy for the classification of Learning Objects (OSEL stands for Open Source E-Learning and is a research project of the University of Bari).
Integration between software and learning structure will improve portal features and teaching/learning process. The flexible and modular structure of our server and our Learning Environment based on Zope (Zope is a free and open-source, object-oriented web application server), Plone (Plone is a free and open source content management system built on top of the Zope application server), and Fle (Fle is a Web-based learning environment or virtual learning environment) with integration of LeMill (Web community for finding, authoring and sharing learning resources) repository, provide an environment suitable for Intelligent Web Agent applications. Artificial Intelligence could rewrite the rules of learning online.
An Open Source approach is not necessarily a no-cost solution, so what concerns might a company have when adopting or developing an Open Source framework like this as part of its business model? What factors should management analyze before choosing an Open Source solution?
This chapter describes the technological structure of OSEL LIS as the first step of our research framework. Then the research team analyzes the open source business model that guided crucial choices in the adoption/development of this framework from the point of view of a research-oriented company.
II. THE PROJECT
There has been much debate recently about the use and benefits of virtual learning platforms; however, they generally fail to actively support users on an individual basis, helping them learn at a pace that is appropriate to the learner while identifying knowledge or skill gaps and addressing them dynamically.
The aim of the “OSEL 2.0” project is to develop such an approach and evaluate the impact on the speed and success of learning given specific learning outcomes (skills/knowledge gained by the student and new methodologies applied by the teacher).
The research framework proposed will include not only a methodological and theoretical research on an “intelligent” LCMS and LO repository, but also its technological project and its integration on an extended platform.
The research framework started in January 2008 with the definition and implementation of the technological structure of the server, based on Open Source software such as Zope, Plone, and Fle. It will continue on this path with the target of implementing the so-called OSEL LIS (Learning Insight System), which will be completely web-based; its innovative distinguishing features will be:
- Automatic or assisted building of learning paths, starting from the learning goals;
- Automatic customization of each course of study based on the knowledge of the student and his learning preferences (ways to learn);
- Monitoring and automatic evaluation of the student's knowledge related to acquired information and cognitive skills;
- Content management through the use of ontologies, compliant with the most important international standards on knowledge modeling (Devedzic, 2006).
The modular structure and flexibility provided by LIS will make this system adaptable to any kind of educational and/or academic situation, allowing also the development in step with innovative and specific Web technologies.
Updating ability and flexibility towards the necessity of implementing innovation into technology as well as educational and meta-cognitive methodologies were the basic criteria for choosing software, which is used to develop the framework.
Furthermore, this project will develop such an approach using semantic web technology and reasoning support and evaluate the impact of this new approach against rival techniques/systems identified in the literature. A usability study is set up to evaluate this.
III. FRAMEWORK STRUCTURE
One of the most important topics in Open Source development, today, is the integration of already developed components. The critical topic is to choose the software (or pieces of software) to integrate for the final product.
- Linux Server (Gentoo with xen virtualization).
- Client (any OS) + Browser.
- Zope
- Plone.
- Fle.
- LeMill.
- LIS
The software structure (Figure 1) is modular (onion-skin): the core is the programming language (Python); then we have Zope, which works as a strong and stable web server; and at the upper level we find Plone and Fle, which are the effective user interface. LeMill is the Learning Object repository, and LIS is the Intelligent Agent that structures data as ontologies and builds personalized learning paths.
IV. AGENT SYSTEM IN LEARNING ENVIRONMENT
According to Dolonen, Chen, and Mørch and their DoCTA project, implementing their thought experiment, we can develop an intelligent software agent that could work in Learning Environment FLE (Dolonen, Chen, & Mørch, 2003).
In the process of collaborative knowledge building, it is usually difficult for students to be aware of others' activities and for instructors to overview the process and to regulate the collaboration. In order to facilitate collaborative knowledge building, intelligent agents were developed to support awareness and regulate the collaboration (Palade, Howlett, & Jain, 2003).
Instead of letting the agent contact the students directly, which can be inappropriate and annoying, the Intelligent Web Agent was designed and developed to assist the instructor in giving feedback to students. This agent would instead present statistical information and advice to the instructors to inform them about the collaboration process. Then an instructor could, if judged appropriate, forward the feedback and decide to engage in a dialog with the student. In this way, the instructor retains a role in the success of collaborative learning. However, to accomplish this role the instructor will need specific tools for monitoring interactions that are distributed in time and space. The design of these tools is very important for CSCL research (Dillenbourg, 1999).
The Intelligent Web Agent is such a tool for enhancing the facilitator’s ability to monitor and regulate the process of collaboration and knowledge building. (Dolonen et al., 2003)
V. THE ROLE OF LEARNING REPOSITORY
New publishing methods require new approaches to traditional copyright laws: all resources are freely usable by anyone in any context (we can imagine Youtube videos or Slideshare slides). All the content in LeMill platform is released under Creative Commons Attribution-ShareAlike 2.5 (Creative Commons, 2009).
The emerging questions related to Learning Repositories are: What are the success factors and obstacles for collaborative authoring of learning resources by communities of practice? What are the emerging patterns in social software that support collaborative authoring of learning resources?
The repository implemented has the function to catalogue and to research the Learning Objects (LO) at the user’s disposal, i.e., a sort of warehouse of LO similar to a database in which LO are registered and classified. The SCORM standard and the flexibility of Python language makes the didactic materials re-usable in different situations and in different platforms.
VI. FUTURE DEVELOPMENT AND FUTURE LEARNING ENVIRONMENT INTEGRATION WITH LE MILL REPOSITORY.
Integration between FLE and LeMill is the next step for improving platform features. Using a web agent similar to DoCTA with an advanced AI algorithm, users will easily access appropriate LeMill contents directly from the FLE environment. We could imagine a system that provides a personalized set of learning contents for each user, depending on his or her skill level.
VII. WEB INTELLIGENT AGENT (LIS) AS SUPPORT FOR E-LEARNING COURSES
LIS (Learning Insight System) will be based on formal models of domain able to represent knowledge and didactic experience, structured through the use of ontologies and encoded with the standard OWL (Web Ontology Language) by W3C (World Wide Web Consortium).
The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies and is endorsed by the World Wide Web Consortium. OWL is considered one of the fundamental technologies underpinning the Semantic Web, and has attracted both academic and commercial interest.
LIS will perform a logical inference on ontologies, using a typical behavior of intelligent systems based on knowledge (Knowledge Based System).
It will use a repository of Standard Learning Object compatible with SCORM. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for web-based e-learning. It defines communications between client side content and a host system called the run-time environment (commonly a function of a learning management system). SCORM also defines how content may be packaged into a transferable file.
This approach will facilitate the interoperability of training materials with all the systems that support this standard.
The agent will be developed with a structure based on services (SOA) by displaying a public interface through the use of Web Services. The services will be described through WSDL standard and the messages will be coded according SOAP standard.
Using standards for service exposure allows interoperability of LIS with all e-Learning platforms; to use all the features of the intelligent agent, it is enough for each e-Learning platform to implement an additional module (plug-in) that provides an interface for the user and communicates with LIS by calling its services through Web Services. Additional modules will be developed for open source e-Learning platforms such as Moodle and FLE.
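As a rough illustration of this plug-in/Web Service interaction, the sketch below uses JAX-WS (part of the J2EE platform on which LIS is based) to publish a SOAP endpoint whose WSDL a plug-in could consume. The service name `LisService`, the operation `suggestLearningObjects`, and the URL are hypothetical and are not taken from the OSEL project.

```java
// Hypothetical sketch only: the real LIS interface is not published here.
// JAX-WS exposes the annotated class as a SOAP service and generates its WSDL,
// matching the SOA/WSDL/SOAP approach described above.
import java.util.Collections;
import java.util.List;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class LisService {

    @WebMethod
    public List<String> suggestLearningObjects(String learningGoal) {
        // The real agent would run inference over the OWL ontologies in its repository;
        // here we just return a placeholder Learning Object identifier.
        return Collections.singletonList("lo-" + Math.abs(learningGoal.hashCode()));
    }

    public static void main(String[] args) {
        // An LMS plug-in (e.g. for Moodle or FLE) would call this endpoint via SOAP.
        Endpoint.publish("http://localhost:8080/lis", new LisService());
    }
}
```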
It is easy to deduce that LIS, designed as knowledge-based system, uses structured ontologies following OWL standard and exporting services through WSDL standard, can be considered a system ready for Semantic Web (Web 3.0).
VIII. FUNCTIONAL REQUIREMENTS
The functional requirements of the Intelligent Web Agent as support to e-Learning platforms are described below through UML Use Case diagrams.
A. Use Case: Add Knowledge through Ontologies – Upload Learning Object
The Learning Object writer reaches the user interface made available by LIS, and following a set of guided steps, proceeds to build the ontologies (with selection of individuals) through OWL language and the following upload of Learning Objects SCORM linked (Figure 2).
B. Use case: Adaptive access to learning object repository
In the creation stage of training course in the LMS (e.g. Moodle or FLE), LIS intervenes through its plug-in installed in e-Learning platform and drawing from the repository, offers to the Teacher a choice of appropriate Learning Object, to use in case together with the other Learning Objects chosen (autonomously) by the Teacher.
C. Use case: Making courses starting from educational targets (case 1)
In the creation stage of training course in the LMS (e.g., Moodle), after the Teacher sets the didactic goals, LIS intervenes through its plug-in (with questionnaires and forms) and, drawing from its own repository of ontologies and Learning Object, suggests to the Teacher all the Learning Objects useful to the creation of the course.
D. Use case: Intelligent suggestion of extra educational contents
During the attendance of on-line course in the LMS, the student may have the need to deepen his knowledge through additional contents.
LIS in this case provides a virtual tutor able to offer additional training materials (Learning Object, web searches, etc.) consistent with didactic contents of (related to) the course.
E. Use case: Making courses starting from educational targets (case 2)
This scenario involves a self-taught student who has learning goals to achieve, but who does not know how to structure an appropriate learning path.
In this context, LIS, through its plug-in in the LMS, will perform inference on ontologies put in its repository and will propose to the student an appropriate learning path in order to achieve his training goals.
F. Use case: System monitoring and supervision
This scenario involves system administrators, systems analysts, and developers who are responsible for maintaining and monitoring the system.
IX. LIS INTELLIGENT WEB AGENT STRUCTURE
The following proposed deploy diagram (Figure 3) displays the structure of intelligent Web Agent and how it communicates with the client and with the LMS:
The deploy diagram shows that the intelligent agent, based on J2EE platform, will be installed on a dedicated server.
Learning Object, ontologies, and all supporting data that are essential for the right functioning of LIS will reside in internal specific repositories and database and represent the knowledge on which to base logical inference. Furthermore, the agent will communicate with the repository of SCORM compliant learning object (Lemill) to obtain additional materials.
The LMS, which are installed on separate servers, can use the features of an intelligent agent communicating through web services interfaces.
LIS features will be presented in a transparent way to the final user through a plug-in installed in the LMS used by the user. In fact, the final user (teacher and student) who wants use LIS features will just need a web browser to join his open source LMS; it will be the plug-in to communicate through web services with LIS. Finally, the creator of Learning Object will direct join LIS through HTML interface specially developed that will help him to enter Learning Objects and knowledge.
X. ONTOLOGY AND TAXONOMY
The necessity to find a flexible taxonomy for the LO had inevitably led to issues related to ontology.
The introduction of ontologies in the computer science world gives a valid tool to the learning process. Above all, in the A.I. context the use of them is actually increasing for the significant role they have in information systems, in the semantic Web, and in the systems based on knowledge, as for instance a neural net. The recent attention the A.I. community is paying to ontologies focuses on the theories about content more than those about mechanism. Chandrasekaran, Josephson, and Benjamins (1999) suggest that, although mechanisms are important for the functioning of intelligent machines, they are useless without a good theory of content on which mechanisms must rule. Furthermore once a good theory of content is available, different mechanisms can be used to implement efficient systems ruling on the same content.
Thus the ontologies become theories of content as they contribute to identify specific sets of objects and relationships that exist in a specific domain of knowledge.
XI. OSEL TAXONOMY
Starting from the awareness of the lack of a universally recognized taxonomic classification, this research has been oriented to study not only the structural characteristics of any single LO but also their interoperability with the users participating, or not participating, in a group activity.
The OSEL Taxonomy is based on the two most significant taxonomies, known all over the world. The first is Wiley’s taxonomy, called “Preliminary Taxonomy of Learning Object Types” (Wiley, 2000). The second taxonomy is based on the “Educational Taxonomy for Learning Objects” (Redeker, 2003). It focuses above all on the didactics aspects related to the LO.
The aim of the OSEL Taxonomy is to classify the LO that can be used within a LCMS platform and, thus, just those LOs that can be re-used.
The OSEL Taxonomy classifies the LO both through an ontological definition related to their domain of competence and through the relationship that could eventually exist among them and the learners without delegating subjective opinions to the author. The extremely accurate construction of a glossary, based upon the re-usability concept as ontological requirement of the LO, makes the OSEL Taxonomy particularly efficient for the classification of the LOs that are used within the LCMS platforms (Convertini, Albanese, Marengo, Marengo, & Scalera, 2006).
XII. AI AND NEW OSEL APPROACH
The next step of development is to improve the interoperability between learning environment and learning object repository, implementing OSEL Ontology with new networked taxonomy. We can imagine an Artificial Intelligence web agent guided by Description Logics and OWL based algorithm, which could adapt learning path to every student.
An important topic for this research is represented by interoperability between platforms. This is reached with full standardization and full modular structure, starting from software, ending to logical approach. We can imagine three level of Knowledge Model:
- Learning Objects;
- Metadata;
- Ontology.
This approach gives to the web agent more detailed information about didactic material, but even deeper information about the relationship between Learning Objects, metadata, and student’s skill level.
With this kind of approach the Intelligent Agent could select Learning Object from repository to build a personalized learning path.
The development of the AI module for our platform has just begun; we think it could be developed as a Zope module (written in Python) so that it can easily be included in our learning environment while remaining highly versatile, modular, and suitable for frequent updates, bug fixes, and code improvements.
XIII. OPEN SOURCE SOFTWARE IMPACT ON EDUCATIONAL BUSINESS MODELS
An Open Source approach is not a no-cost solution, so what about a company that would adopt or develop an Open Source framework like the one presented here as part of its business model? What analysis should management carry out before choosing an Open Source solution?
A. The Open Source Business Model in Research Oriented Firms
The Free and Open Source Software movement has had a phenomenal impact on the industry's evolution; in fact, most companies today make extensive use of Open Source software and technologies. The research communities, academic and professional, are engaged in the study of open software-related development in order to highlight its advantages and disadvantages in terms of technology, reuse, and economic impact.
Below, the experience of the decision process during the research and development of the Open Source e-learning framework with LIS is described and analyzed.
Technology-based educational environments can be considered real information systems. An organization that adopts a Learning Environment needs to face the knowledge management problem.
As a result of recent research, there is now a growing understanding of the drivers of information systems development and performance, and methods are evolving to improve the delivery of appropriate learning environments that return real knowledge benefits to the learners (Remenyi, White, & Shervood Smith, 1997).
Nowadays there are only a few methodological best practices that set standards for the cost/benefit analysis of this approach. Today's research has an ambitious aim: harmonizing the variables and factors critical to implementing an Open Source model into a model built on previous business experience and on the environment inside and outside the company.
Similarly to the SWOT (analysis of Strengths, Weakness, Opportunities, Threats) approach, after analysis of the impact that the implementation of an Open Source software has in a company, it is possible to develop a characterization model used in business and market contexts and assess its effectiveness empirically.
The Open Source software components used to develop the framework were adopted for both ideological and purely pragmatic reasons (Ven, Verelst, & Mannaert, 2008), but strategic management must focus attention on some critical topics that may influence this choice: an unweighted decision can damage the project or make the research team miss opportunities in terms of profitable results and development. The variables involved are numerous and not always immediately visible at first analysis.
B. An Open Source Research Project Work in Free Market
The framework developed is suitable for many environments, and its natural place is probably the free market. The question is: what about a company that would adopt (or even develop) this Open Source framework?
The company needs to consider an important factor that makes the evaluation of the investment sensitive: the cost-benefit analysis. Often, OS software seems free, and this can lead management to wrong decisions in the economic and market-development fields.
OS software is, in most cases, distributed for free, but it is wrong to think that the company does not have to pay for its adoption; it is useful to point out all the costs necessary to use it as a real economic asset for the enterprise business. Not all OSS is free, so OSS might not be less expensive than proprietary software. To estimate the costs involved in introducing OSS, an organization can calculate the TCO (total cost of ownership) (Ven et al., 2008).
The free availability of source code can create advantages or disadvantages in both the technical and the economic environment. Depending on the context, activity, and type of target market, an educational organization can be classified into three categories:
Open code indifferent: In this scenario, the source code availability is neither an advantage nor a disadvantage for the organization. OSS serves as a black box and its advantages or disadvantages are comparable to proprietary packaged software.
Open code scholar: In this scenario, the organization considers the source code availability to be an advantage, but doesn’t use it to study or customize the program. Some organizations choose OSS because they feel that the program is less likely to contain hidden features or bugs, and that if a bug is discovered it will be fixed quickly. With this kind of approach, the use of OSS implies a learning process in which the organization gains experience and skills (and this could represent a profitable investment).
Open code developer: In this scenario, OSS serves as a white box (Weinstock & Hissam, 2005). Organizations use the source code to study the software’s inner workings or to adapt the software to their own (or their clients’) needs. This is primarily interesting for software houses developing OSS-based applications.
This third kind of approach introduces the theme of OSS reuse (Ebert, 2008). Adopting Open Source development practices can make an organization pay less attention to strategic planning, detailed requirements elicitation, testing, and organized support (Spinellis & Szyperski, 2004).
In the framework case described, the choice to use Open Source Software is obvious. The cost that a company or a research group may pay is represented by the TCO (Total Cost of Ownership). The reuse of already developed content is cheaper than development from scratch. In the e-learning field Open Source Software is mature and efficient, so integrating existing pieces of OSS is the best choice for researching and developing from a solid starting point.
C. Open Source Software Reuse
Open Source solutions in the educational environment cover standard organizational needs. Some packages offer suitable features and tools for many didactic situations.
The large amount of open source software makes reuse the quickest way to develop and add the features required by specific needs (for example, with the Sloodle plugin you can integrate Moodle with Second Life (Simulation Linked Object Oriented Dynamic Learning Environment, 2009)).
Even in the case of Open Source Software reuse, the adoption of OSS code may seem always advantageous. In reality there are hidden costs: in addition to adapting portions of code (which can be more or less complex), organizations have to include the cost of selecting the modules to be integrated and of updating and maintaining them.
Software reuse possibilities open up on three axes: what to reuse, how to reuse it, and where to reuse it. Movement along these three axes increases the breadth of software reuse opportunities in any development effort. Source code’s availability lets the community perpetually improve, fix, and support the reused elements. In some cases, by incorporating the source code of a reused element into the system being built, developers can achieve tight integration and the system can be maintained as a whole (Spinellis & Szyperski, 2004).
As educational services become more important in the software sector (both for OSS and proprietary software), perceived quality concerns the quality of the educational services and methodologies offered, not necessarily the quality of the software. In OSS, however, there is an upside most often not present in proprietary software: anyone can verify the quality of the code because the source is available to everyone (Pykalainen, 2007).
On the other hand, a high degree of dependence on libraries and external portions of code creates a dependency problem for the final product: the same libraries and external code must be shipped along with the software for it to work properly.
In addition, the reuse of open source software can generate “isolation,” which shows up as slow bug fixes and the risk of including unnecessary portions of dead code in the final product.
Those risks may affect final product quality, with an inevitable economic impact. A possible solution is the API (application programming interface) approach, which allows a good level of abstraction between software layers (Madanmohan & De’, 2004).
In the specific case presented here, some of the described problems are solved. First of all, the most important step is choosing the software to reuse: a good community of users and developers, high maturity, and commercial use experience are important variables that influence the choice. All the tools (re)used and integrated are written in Python, which helps a lot in developing new compatible modules for the whole system, and all technical specs are open, so anyone can develop their own extension.
The LIS module is developed as an external application using a SOA/SOAP architecture with a public interface exposed through Web Services. The services are described with the WSDL standard and the messages are encoded according to the SOAP standard.
The use of standards for exposing the services allows LIS to interoperate with any e-learning platform and with the whole framework system.
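As a minimal sketch of what consuming such a service could look like (this work does not give concrete endpoints; the WSDL URL and operation name below are hypothetical), a client written with the Python `zeep` library only needs the published WSDL:

```python
# Hypothetical sketch of a SOAP client for the LIS web service.
# The WSDL URL and the operation name are placeholders, not taken from this work.
from zeep import Client

# zeep parses the published WSDL and builds Python proxies for each described operation.
client = Client("http://lis.example.org/services/lis?wsdl")

# Invoke a hypothetical operation exposed by LIS; any platform that can read
# the WSDL and speak SOAP could call it in the same way.
result = client.service.GetLearnerProfile(learnerId="student-42")
print(result)
```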
According to this analysis, the choice made is (again) Open Source oriented.
D. OSS Promote Research
As stated in the literature, the OSS phenomenon has promoted research (Krogh & Spaeth, 2007). The educational field is a permanent research laboratory and, when it is supported by computers (CSCL), the challenge is to find the right mix between didactic methodology (e.g., collaborative learning, collectivism, connectivism, etc.) and technological resources. Open source stimulates viral innovation, and OSS models have changed development processes. The “sprint” development approach is raising the speed and quality of code (a sprint is a time-boxed period of software development focused on a given list of goals; sprints have become popular events among some Open Source projects). The most important benefit of sprints is allowing people to meet and collaborate in person (Goth, 2007). OSS pushed proprietary license models to their limits, until market pressures spawned entirely new models such as packaging or value-added services.
The proposed framework is a research project and is continuously under development; the Open Source model is the best choice for this kind of work.
E. Migration Issues
A possible cost that every company should consider concerns migration, and each situation needs to be evaluated separately. Migration from one open package to another will be fairly transparent, with no great cost in training or hardware adjustment (open software is often standards compliant). Migration from proprietary software to an open one could generate significant training costs for renewing technical know-how and for rebuilding some content according to the new software’s specifications.
An organization that adopts OSS as an educational tool has to consider the quality of the Open Source Software that will be implemented. This topic is at the center of recent research debate because quality is a rather abstract concept and it depends on several factors:
Security: an intrinsic quality of the software is the transparency of its source code; anyone can correct bugs or find malware.
Development and support community: a software project is evaluated as mature considering the following indicators: quantity and quality of documentation, frequency of releases, efficiency of the bug-report system, and speed of bug fixes.
A high degree of project maturity often attracts commercial sponsorship, with companies involved in promoting and leading the development of the project and offering commercial support and consulting (e.g., Docebo or Moodle).
Many companies nowadays have adopted the OSS system-integrator business model, which consists of integrating OSS components while covering what OSS lacks (such as support, branding, organization, and verticalization of the software according to specific needs).
XIV. CONCLUSIONS
As shown by this experience of researching and developing an innovative AI-supported e-learning project, it is clear that the adoption of OSS in an educational environment is not free of costs or threats.
Even if an innovative product is very likely (given high investment in R&D), an organization has to deeply analyze internal factors (e.g., ideological and technical issues) and external factors (e.g., software selection) before choosing its own educational tools and the business model suitable to support them.
Despite the identification of factors and variables that contribute to a deep evaluation of this choice, it is not yet possible to define a characterization model that can guide such a sensitive decision, since the empirical data collected so far is not enough to demonstrate the theory.
In this field, research groups are trying to define a model that harmonizes the largest possible number of variables, in order to provide valid support to corporate management.
REFERENCES
[11] Learning Environments for Progressive Inquiry Research Group – UAH Media Lab, University of Art and Design Helsinki, in cooperation with the Centre for Research on Networked Learning and Knowledge Building, Department of Psychology, University of Helsinki. (2006). Fle3 – Future Learning Environment. http://fle3.uah.fi/
AUTHORS
V. L. Plantamura has been Full Professor of Fundamental Computer Science (area INF/01) at the Faculty of Mathematical, Physical and Natural Sciences of the University of Bari since 1975. He has held several teaching assignments, such as Programming and Information Systems.
He is Dean of the Degree in Computer Science and Digital Communication at the Faculty of Mathematical, Physical and Natural Sciences of the University of Bari; chief coordinator of the “University Bachelor in Computer Science” at the same faculty; and the University of Bari Rector’s Delegate for technology innovation.
His research interests and related activities have always been focused on the analysis and implementation of computer science systems, with particular reference to their performance evaluation. Presently, his research involves defining learning models for the design of engineering training paths.
(e-mail: plantamura@di.uniba.it)
A. Marengo is Assistant Professor in the Faculty of Economics at the University of Bari. His research activity focuses primarily on didactic methodologies implemented through ICT tools, particularly the development of web-based e-learning platforms that help introduce distance-learning technologies into traditional on-campus courses and activities.
(e-mail: marengo@di.uniba.it)
A. Pagano is a PhD student in Computer Science at the University of Bari. His research topic is the impact of Open Source on enterprise software development. He was a researcher for the “research 60%” project focused on planning methods and the implementation of didactic models and artifacts, centered on the learner and on knowledge-construction communities. He is ICT Manager for Osel Consulting srl, a spin-off of the University of Bari, a member of LUGBari (Linux User Group), and a supporter of the Open Source philosophy.
(e-mail: alessandropagano@di.uniba.it).
Manuscript received 31 March 2010.
Modern Web Protocols Part 2
Where we left off...
A History of Web Protocols
• HTTP/1.x has lots of problems
• Standard request / response paradigm + HoL blocking made it difficult to scale to support increasingly complex modern websites
• Solutions to this problem (e.g., HTTP pipelining) ran into serious deployment challenges which never let it take off
• HTTP/2 was created to solve lots of these problems
• Implemented a new abstraction (e.g., byte streams, frames, messages) and fixed lots of challenges
• But still suffers from HoL blocking, just at the TCP layer instead of the HTTP layer
Lingering Questions
• Q1: How many HPACK tables are there at once?
• There is a pair of static / dynamic tables generated per HTTP/2 connection.
• Size constraints make this feasible (and the limits are tunable by client / server in SETTINGS frames)
• If two mutually distrustful clients are using the same HTTP/2 connection, they can probe dynamic table state (and potentially leak client information)
• Q2: Can Server Push be used as a notification system?
• No, the browser doesn’t expose server push in JavaScript, see Push API instead
A History of Web Protocols
HTTP/0.9 1991
HTTP/1.0 1996
HTTP/1.1 1997
HTTP/2 1997-2015
QUIC 2015
HTTP/3 2021
A core problem with HTTP up to this point is a fundamental limitation of reliable transport over TCP.
We want to have reliability guarantees, but the way this is implemented in the layering model (e.g., in TCP) makes it such that applications don’t have flexibility to define what reliability means!
We could try to change TCP?
But that requires updating every router in the world. Way too hard.
QUIC idea: What if we re-envisioned what we needed from lower network layers?
QUIC
A New Transport Layer
HTTP/2
TLS
TCP
IP
The current world
QUIC
A New Transport Layer
The current world
A QUICer world
QUIC
A New Transport Layer
The current world
A QUICer world
This is all user space!!!
QUIC
Design Goals
• A new, reliable transport layer
• Easily deployable and evolvable
• Make this something that exists in userspace and something that doesn’t require us to update every router ever
• Security by default
• Build in encryption, integrity checks, and authentication into the transport layer itself
• Reduce unnecessary delays imposed by strict layering
• Handshake delays (e.g., TLS handshake), HoL blocking (HTTP, TCP)
QUIC
Establishing a Connection
• The first time a client wants to communicate with a server, it sends an *inchoate client hello* in cleartext, which will trigger a REJ (reject) from the server
• The server will send back a number of details, including a certificate chain (for server authentication), long term keying materials, and other server metadata
• The client will then use the server information provided to send a *complete client hello*, and immediately start sending encrypted data with non forward-secure keys
• Server sends back *server hello*, with ephemeral forward-secure public keying material
• Client *caches* server details (based on origin), so for any future connection, the client can simply use the server block data to send encrypted messages moving forward. This is known as a **0-RTT protocol.**
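A rough sketch of this flow (message and field names are simplified placeholders, not the actual QUIC wire format) might look like the following:

```python
# Conceptual sketch of the 0-RTT connection flow described above.
# Message names and fields are simplified placeholders, not real QUIC frames.
from dataclasses import dataclass, field

@dataclass
class Client:
    server_config_cache: dict = field(default_factory=dict)  # keyed by origin

    def connect(self, server, origin: str):
        cfg = self.server_config_cache.get(origin)
        if cfg is None:
            # First contact: inchoate hello -> REJ with server config.
            cfg = server.reject_inchoate_hello()
            self.server_config_cache[origin] = cfg
        # Complete hello + data encrypted under (non forward-secure) keys
        # derived from the cached config: no extra round trip on repeat visits.
        return server.complete_hello(cfg, encrypted_request=b"GET /")

class Server:
    def reject_inchoate_hello(self):
        return {"cert_chain": "...", "long_term_key": "...", "metadata": "..."}

    def complete_hello(self, cfg, encrypted_request: bytes):
        # Server answers with an ephemeral forward-secure key (server hello) plus the response.
        return {"server_hello": "ephemeral forward-secure key", "response": b"200 OK"}

client, server = Client(), Server()
client.connect(server, "example.com")   # first visit: REJ, then complete hello
client.connect(server, "example.com")   # repeat visit: cached config, 0-RTT
```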
QUIC
Two Types of Headers
Figure 1: QUIC Long Header
QUIC
Two Types of Headers
Figure 2: QUIC Short Header
QUIC
Encrypt as much as possible
[Figure, shown over several animation steps: side-by-side packet layouts. With HTTP over TLS + TCP, the network sees the full TCP header (source port, destination port, sequence number, acknowledgement number, header length, flags, checksum, urgent pointer, options) plus the TLS record type, version, and length before the encrypted application data (HTTP headers and payload). With HTTP over QUIC, only the UDP header (source port, destination port, length, checksum), a flags byte (01SRRKPP), the destination connection ID, and the packet number are visible; in the later steps even most of those bits are shown encrypted, leaving little more than the connection ID exposed to the network, with the HTTP headers and payload always encrypted.]
Slide stolen from: https://www.youtube.com/watch?v=31J8PoLW9IM&t=9104s
QUIC
Maintaining the Stream Abstraction
• QUIC uses the idea of a stream (with a stream_id) as a baseline abstraction for sending data between two endpoints, similar to HTTP/2
TCP vs. QUIC
Recovering from Losses
• TCP uses sequence numbers + acknowledgement numbers to identify whether or not a packet has been lost, and needs to be retransmitted
• Unfortunately, sequence numbers mean two things: reliability and the order at which the bytes are supposed to be delivered to the receiver
• On top of this, TCP retransmissions use the same sequence number, so it becomes very hard to know whether an ACK was sent for first transmission or a retransmission
• TCP conflates transmission ordering AND delivery ordering in one number
TCP vs. QUIC
Recovering from Losses
• QUIC decouples transmission and delivery ordering through its use of streams
• Each packet contains a packet number, which is unique and monotonically increasing, even on retransmission
• Clients will ACKNOWLEDGE packet numbers, and the server can identify if an outstanding packet has not been acknowledged… you can find the details at the link below
• Each frame in a stream contains a stream offset, which alerts the client of how to properly reorder the packets on the delivery side
• Enables simpler loss detection than TCP
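A toy sketch of the receive side (assuming simplified frame objects, not the real QUIC packet format) shows how unique packet numbers and per-stream offsets separate acknowledgement from delivery order:

```python
# Toy receiver: acknowledges by packet number, reorders by stream offset.
# Frame/packet shapes are simplified placeholders, not the QUIC wire format.
from dataclasses import dataclass

@dataclass
class StreamFrame:
    stream_id: int
    offset: int      # where this data belongs in the stream
    data: bytes

@dataclass
class Packet:
    packet_number: int   # unique and monotonically increasing, even on retransmission
    frames: list

acked = set()                    # what we tell the sender we received
streams: dict[int, dict] = {}    # stream_id -> {offset: data}

def receive(pkt: Packet):
    acked.add(pkt.packet_number)             # loss detection works on packet numbers...
    for f in pkt.frames:
        streams.setdefault(f.stream_id, {})[f.offset] = f.data  # ...delivery order on offsets

def assemble(stream_id: int) -> bytes:
    chunks = streams.get(stream_id, {})
    return b"".join(chunks[off] for off in sorted(chunks))

# Packets arrive out of order; packet 7 carries a retransmission of lost data
receive(Packet(7, [StreamFrame(4, offset=5, data=b"world")]))
receive(Packet(5, [StreamFrame(4, offset=0, data=b"hello")]))
print(assemble(4))    # b'helloworld'
print(sorted(acked))  # [5, 7]
```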
QUIC
Packetization
• Packets can contain multiple types of frames (e.g., Stream frames, ACK frames, crypto frames)
• Stream frames contain stream IDs and offsets for the receiver to reorder out-of-order packets
• ACK frames contain acknowledgements for the highest packet number we’ve seen so far, and a range for what packets we’ve acked so far
QUIC
Connection Rebinding
• Because QUIC connections are over UDP, they can persist beyond traditional network boundaries, like your home NAT
• No more resetting connection when your underlying network changes
• QUIC does this through the use of several unique variable length Connection IDs to identify the connection, with a protocol in place to verify the connection through a network change
• See RFC for notes on address spoofing + off-path packet attackers (something they’ve considered!)
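As a small illustration (purely conceptual; connection IDs, path validation, and the migration protocol are far richer in the real RFC), keying connection state by connection ID rather than by the address 4-tuple is what lets a connection survive a network change:

```python
# Conceptual sketch: looking up connection state by connection ID instead of
# by (src_ip, src_port, dst_ip, dst_port), so an address change doesn't kill it.
connections = {}   # connection_id -> state

def handle_packet(connection_id: str, src_addr: tuple, payload: bytes):
    state = connections.setdefault(connection_id, {"bytes": 0, "last_addr": None})
    if state["last_addr"] is not None and state["last_addr"] != src_addr:
        # Real QUIC would run path validation here before trusting the new address.
        print(f"peer moved from {state['last_addr']} to {src_addr}, connection kept")
    state["last_addr"] = src_addr
    state["bytes"] += len(payload)

handle_packet("c0ffee", ("203.0.113.7", 51000), b"hello")   # on home Wi-Fi
handle_packet("c0ffee", ("198.51.100.9", 40400), b"world")  # same device, now on LTE
```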
QUIC
NATs, Middleboxes, Deployment Challenges
• Typically, NATs keep track of TCP connections by using a 5-tuple (src_port, src_ip, dst_port, dst_ip, protocol), and can maintain state because they have access to TCP headers.
• Not all NATs speak QUIC yet, and even if they did, header information is encrypted, so they default to processing UDP packets, which could cause short timeouts and routing issues.
• UDP-based protocols are susceptible to reflection attacks, where attackers send requests with spoofed source addresses so that servers flood the victim with amplified responses, and a QUIC server’s reply to an inchoate client hello can be much larger than the hello itself
• This is why QUIC has a REJ packet to start, but this increases the number of round trips required on initial connection. Probably a decent trade off.
QUIC Deployment
QUICly eating the world
• QUIC was officially ratified by the IETF in May 2021 (RFC 9000)
• QUIC support already existed in Chrome for a while, but is now available in Firefox as well
• QUIC is being deployed everywhere
• 6% of websites use QUIC, but will grow post RFC ratification
• Google apps all use QUIC, 75% of Facebook uses QUIC
• Some ISPs have reported that 20% of their packets were over QUIC
• With appropriate tuning, QUIC has so far performed about as well as TLS 1.3 over TCP in high-performance benchmarks
https://w3techs.com/technologies/details/ce-quic
https://www.fastly.com/blog/measuring-quic-vs-tcp-computational-efficiency
A History of Web Protocols
HTTP/0.9 1991
HTTP/1.0 1996
HTTP/1.1 1997
STUFF 1997-2015
HTTP/2 2015
QUIC 2021
HTTP/3 2021
HTTP/3 is HTTP over QUIC!
HTTP/3
Building HTTP over QUIC
- Still being iterated on by IETF (no RFC number yet)
- HTTP/3 uses the same abstraction as HTTP/2 (e.g., streams, frames, etc.), except it utilizes these streams as supported by QUIC rather than implementing on top of TCP
- This causes some notable new challenges:
- HPACK, the clever header compression scheme, cannot be used as-is anymore without causing HoL blocking (recall that headers MUST appear before response data in HTTP/2)
- HTTP/2 enjoyed stream prioritization, which is hard to implement in the transport layer on top of everything else
HTTP/3 vs. HTTP/2
Notable Changes?
• HPACK is updated to QPACK, which is designed to allow for out-of-order header data (and updating dynamic tables accordingly)
• Essentially, adds more ability for client to control when to use a dynamic table entry – no need to wait to update an entry or read a table entry before processing a request
• Removed stream prioritization altogether!
• Deemed too challenging to use for clients and offered little guarantees anyway, so it is being discussed independently
Recap
• The web has drastically changed over time, with developers doing more than ever before and websites becoming increasingly complex
• But for a long time, our protocols didn’t match the growing complexity of the world
• New protocols like SPDY, HTTP/2 were useful in working within our paradigm, but there is change afoot!
• People are not liking TCP as much, and companies like Google are starting to throw their weight around in envisioning a new future for layering requirements
• We are redefining “end-to-end” abstractions… let’s see how it goes :)
Web Content
cs249i
Modern Websites
Third Party Resources
• Modern websites rely on many different types of *third-party resources* to provide services to keep their websites functional
• Third party resources are ones served by external parties – so for example, if you are on cnn.com, any resource served from a domain that is NOT cnn.com (e.g., doubleclick.com, google-analytics.com)
• These resources could be anything from static images to JavaScript libraries to analytics, advertising, the list goes on...
[Slide: screenshot of the cnn.com homepage — news headlines, live updates, and story teasers — overlaid with the names of the third-party services the page loads, including PubMatic, Quantcast, RTB House, Rubicon, Salesforce DMP, Scorecard Research, MarinMedia, SOASTA mPulse, and others, illustrating how many external parties a single page view contacts.]
Modern Websites Analytics
- Many websites rely on analytics on their users to continue to improve their services
- For example, Google provides Google Analytics, which appears on an estimated 70% of the top websites
- As an analytics user, you can see where your clients are connecting from, you can see how long they spent on the page, what devices they’re connecting from, and a ton of other interesting details
- These are typically scoped to a single request, but in recent years, companies have been expanding the scope of what they know about users...
Web Tracking
Cookies and Code
• Major companies typically use cookies to offer extended functionality for websites (e.g., keeping you logged in, keeping certain settings stored in your browser, etc.)
• Once a cookie is set, the browser attaches a cookie to every subsequent request sent out for that particular domain
• Cookies are by default scoped to the first-party domain that set the cookie
• No other domains can read the cookie value!
• …then how does web tracking work?
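Before answering, a tiny sketch of a browser-side cookie jar (much simplified; real browsers also handle paths, expiry, SameSite, and more) shows the first-party scoping rule:

```python
# Simplified cookie jar: cookies are stored per domain and only attached
# to requests going back to that same domain (no path/expiry/SameSite handling).
jar: dict[str, dict[str, str]] = {}   # domain -> {name: value}

def set_cookie(domain: str, name: str, value: str):
    jar.setdefault(domain, {})[name] = value

def cookies_for_request(domain: str) -> dict[str, str]:
    return jar.get(domain, {})

set_cookie("cnn.com", "session", "abc123")
set_cookie("facebook.com", "user", "userABC")

print(cookies_for_request("cnn.com"))       # {'session': 'abc123'}
print(cookies_for_request("facebook.com"))  # {'user': 'userABC'} -- sent even when the
                                            # request was triggered from another site,
                                            # which is what enables the tracking below
```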
Web Tracking
Cookies and Code
GET / HTTP/3
GET /facebook-like.js HTTP/3
Web Tracking
Cookies and Code
- With this request, companies can link your cookie to your browsing data (e.g., through Referer header, Host headers, Origin, or just JavaScript)
Web Tracking
Browser Fingerprinting
- Websites can also fingerprint you effectively with browser fingerprinting, which is a technique that leverages all your settings to identify you, and stores this in a cookie on your browser.
- [https://iamunique.org](https://iamunique.org)
- So long as JavaScript can run (by third-parties), you run the risk of being “followed” on the web.
```json
{
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:93.0) Gecko/20100101 Firefox/93.0",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
"accept-encoding": "gzip, deflate, br",
"accept-language": "en-US,en;q=0.5",
"upgrade-insecure-requests": "1",
"referer": "https://iamunique.org/",
"userAgent-js": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:93.0) Gecko/20100101 Firefox/93.0",
"platform": "MacIntel",
"cookies": "yes",
"timezone": 4289,
"languages-js": "en-US,en",
"ad": "no",
"doNotTrack": "NC",
"navigator_properties": [
"vibrate",
"javaEnabled",
"getGamepads",
"getVRDisplays",
"mozGetUserMedia",
"sendBeacon",
"requestMediaKeySystemAccess",
"registerProtocolHandler",
"taintEnabled",
]
}
```
Web Tracking
Prevalence of Major Companies
• Major companies have large presences on the web, and as a result, can see the majority of websites that you visit
• Google appears on 82.2% of the Top 1M (by AS), because of analytics and advertising services
• Facebook appears on 34.1%, to enable social sharing + tracking
| Company | Prevalence on Top 1M |
| --- | --- |
| Google | 82.2% |
| Facebook | 34.1% |
| Amazon | 32.6% |
| Cloudflare | 30.7% |
| Akamai | 20.3% |
| MaxCDN | 19.0% |
| Edgecast | 17.9% |
| Fastly | 15.5% |
| SoftLayer | 11.8% |
| Twitter | 11.2% |
Web Tracking
Cookie Syncing
- Even if a company is not available on every website, companies often times share cookie information
- “Cookie Synchronization: Everything You Always Wanted to know but were afraid to ask” – WebConf 2019
- The core idea is simple: if you have a collaboration agreement with another third party, you simply redirect incoming requests to them
Web Tracking
Cookie Syncing
GET tracker.com/pixel.jpg
Response, Set-Cookie: User=user123
Web Tracking
Cookie Syncing
GET advertiser.com/pixel.jpg
Response, Set-Cookie: User=userABC
Web Tracking
Cookie Syncing
GET tracker.com/pixel.jpg, cookie=user123
REDIRECT, advertiser.com?syncID=user123&publisher=nytimes.com
GET syncID=user123, cookie=userABC
• Third parties with cookie syncing enabled are present on 78% of modern websites :(
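A minimal sketch of the tracker-side redirect (the URL parameters are hypothetical; real sync endpoints vary) shows that the mechanism is just URL construction:

```python
# Hypothetical sketch of how a tracker builds the cookie-sync redirect
# described above; parameter names are illustrative, not a real API.
from urllib.parse import urlencode

def sync_redirect(partner_base: str, own_user_id: str, publisher: str) -> str:
    """Redirect the browser to the partner so it can tie its own cookie
    (sent automatically with the redirected request) to our user ID."""
    query = urlencode({"syncID": own_user_id, "publisher": publisher})
    return f"{partner_base}?{query}"

print(sync_redirect("https://advertiser.com/pixel.jpg", "user123", "nytimes.com"))
# https://advertiser.com/pixel.jpg?syncID=user123&publisher=nytimes.com
```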
Web Tracking
Cookie Ghostwriting
• Not all first-party cookies *should* be treated the same!
```
GET tracker.com/script.js
```
```
document.cookie = "user=userABC"
```
Web Tracking
Cookie Ghostwriting
- 42% of identifier cookies are *ghostwritten* in modern websites
Why is there so much tracking?
Online Advertising
The Best Thing Since Sliced Bread! Available for $4.99 at your local Costco.
- Companies typically track you around the web to build profiles for targeted advertising.
- The more targeted your advertising, the more revenue you can make from advertisers who are potentially willing to give you more money to sell the ad spot.
- Useful for advertisers to know if people with your browsing habits, your properties, your whatever are browsing on the web.
Online Advertising
The Many Internet Players in Advertising
Publishers
- Publishers (e.g., nytimes.com, cnn.com, other websites) often have advertising space that they are hoping to make revenue off of
- In some cases, publishers have explicit agreements with companies and can sell their space that way
Online Advertising
Supply Side Platforms
• If a publisher wants to place the ad spot on the open advertising market, they typically go through an intermediary called a Supply Side Platform (SSP)
• Examples: Pubmatic, Rubicon Project, Verizon Media, etc.
• This aggregates information about the client (through a DMP) and participates in ad exchange
Online Advertising
Demand Side Platforms
- On the other end of the pipeline, you have advertisers
- There are analogous entities called demand side platforms, which participate in Real-Time Bidding, which is a real-time auction for ad space (examples: Google DoubleClick, QuantCast, Criteo, Adform)
- Typically happens in < 100ms
https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Adservingfull.svg/2880px-Adservingfull.svg.png
Online Advertising
Ad Exchanges
- Advertising exchanges receive spots from supply side, and facilitate real time bidding from the demand side based on properties of the ad spot
- Examples: Google DoubleClick, Facebook Exchange, PubMatic, Microsoft Advertising
https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Adservingfull.svg/2880px-Adservingfull.svg.png
Online Advertising
Bid Requests
```
"site": {
"id": "1234",
"name": "Example Site",
"domain": "examplesitedomain.com",
"mobile": 1,
"amp": 1,
"pub": {
"id": "0876",
"name": "Example Publisher, Inc.",
"domain": "examplepubdomain.com"
}
},
"user": {
"id": "a8af46c7780845dec108a841baff57c",
"consent": "lcknkhqy8y",
"buyeruid": "fc042924506238256034bdfaf220d9a5892",
"yob": 1990,
"gender": "m",
"ext": {
"consented_providers_settings": {
"consented_providers": [
3, 52, 45, 23
]
}
}
},
"device": {
"type": "4",
"os": "80f6d3f4a100a8b2aaaf32908d6cb1221",
"ip": "1.2.3.4",
"ua": "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.10) Gecko/20100119 Firefox/3.6.10",
"make": "Apple",
"model": "iPhone",
"hwv": "6s",
"os": "13",
"osv": "11.4.1",
"acmcme": "318-005",
"geo": {
```
https://protocol.bidswitch.com/rtb/request-examples.html
Online Advertising
Bid Response
```json
{
"id": "471e107-9879",
"cur": "usd",
"ext": {
"protocol": "6.0"
},
"seatbid": [
{
"seat": "4",
"bid": [
{
"id": "qwerty-098755",
"item": "sdfd-7800",
"price": "1.45",
"cid": "app-raid-campaign-3442",
"url": "https://asserver.com/winnote?impid=102&winprice=${AUCTION_PRICE}"
},
{
"key": "TIMESTAMP",
"value": "1127907134"
}
],
"ext": {
"agency_id": "agency_123",
"advertiser_name": "example advertiser"
},
"media": {
"ad": {
"id": "creative_id_1234",
"domain": {
"example.com",
"example.io"
},
"cat": [
"cat_1",
"cat_2"
],
"lang": "en",
"attr": [
{}
]
}
}
}
]
}
```
Online Advertising
Bidding for Ad Spots
• Real-time bidding is an auction process that is kicked off when a publisher tells an advertising network that they have an open ad-spot with certain properties
• Two most widely used methods of auctioning
• Waterfall bidding
• Header bidding
Online Advertising
Waterfall Bidding
- Publishers would pre-define a hierarchy of advertising networks that they wanted to ask in order (e.g., in a waterfall) about any given advertising spot
- Publishers would then set a floor bid rate that they needed for the ad spot
- The first network to fulfill the floor would win the spot, but floor price goes down with lower priority
- Problems:
- Slow (serial computation)
- Anti-competitive!
- Google had both an SSP and a DSP, which often meant they got first pick at ad spots
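A toy simulation of the two auction styles (entirely illustrative; the bid values and network names are made up) makes the difference concrete. First, the sequential waterfall:

```python
# Toy waterfall auction: ask networks one at a time, in the publisher's
# priority order, lowering the floor as we go; the first to meet the floor wins.
# All names and numbers are made up for illustration.
def waterfall(networks, floors):
    for (name, bid), floor in zip(networks, floors):
        if bid >= floor:
            return name, bid        # first network to clear its floor wins
    return None, 0.0                # spot goes unsold

priority_order = [("PremiumNet", 2.10), ("MidTierAds", 1.40), ("RemnantExchange", 0.60)]
declining_floors = [2.50, 1.25, 0.50]   # floor drops with each lower-priority network

print(waterfall(priority_order, declining_floors))   # ('MidTierAds', 1.4)
```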
Online Advertising
Header Bidding
• Every DSP is offered the auction at the same time, and DSPs are incentivized to provide their true value for the advertising spot (theoretically)
• This typically happens in 100 – 200ms
• Two options:
• Client-side header bidding (happens in JavaScript), potentially makes the page slower, but have finer grained access to cookies
• Server-side header bidding (happens in the SSP), can be faster, but requires cookie syncing, could make things slower
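And the parallel header-bidding version, where every demand partner sees the same request at once and the best bid above the floor wins (again purely illustrative):

```python
# Toy header bidding: every DSP is asked simultaneously and the highest
# bid above the floor wins. Names and numbers are made up.
def header_bidding(bids, floor):
    eligible = [(name, bid) for name, bid in bids.items() if bid >= floor]
    return max(eligible, key=lambda nb: nb[1], default=(None, 0.0))

dsp_bids = {"PremiumNet": 2.10, "MidTierAds": 1.40, "RemnantExchange": 0.60}
print(header_bidding(dsp_bids, floor=1.00))   # ('PremiumNet', 2.1)
```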
When the business model *is* the privacy violation
APRIL 12, 2018 BY ARVIND NARAYANAN
BRACE YOURSELVES
REGULATION IS COMING
Regulation
GDPR, CCPA
• We’ve seen a big regulation push in the last five years around issues of online privacy and tracking
• General Data Protection Regulation (GDPR), is an EU law on data protection and privacy for the European Economic Area
• California Consumer Privacy Act (CCPA) is a state statute which aims to enhance consumer protections for Californians
• Both of these laws mandate all kinds of rules for the storing of personally identifiable data (e.g., IP addresses, cookies!), how long these things can be stored about users on the server side, etc.
Regulation
Cookie Banners
• If you use cookies, you must:
• Inform users that your site/app uses cookies
• Explain how cookies work and what the site uses them for
• Obtain informed consent prior to storing those cookies on the user’s device
• Need to provide users a clear and easy way to opt-out of cookie-tracking on a website
• Steep fines (4% of annual revenue) if you do not comply
• Unfortunately, cookie-banners are being designed in terrible ways… and consent is broken
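As a tiny illustration of the “consent before storing” requirement listed above (simplified; real consent-management platforms are far more involved), a server can gate non-essential cookies on an explicit consent signal:

```python
# Simplified sketch: only set non-essential (tracking/analytics) cookies once
# the user has given explicit consent; strictly necessary cookies are always allowed.
def response_cookies(consent_given: bool) -> dict[str, str]:
    cookies = {"session": "abc123"}            # strictly necessary, always set
    if consent_given:
        cookies["analytics_id"] = "user-789"   # only after informed opt-in
    return cookies

print(response_cookies(consent_given=False))  # {'session': 'abc123'}
print(response_cookies(consent_given=True))   # adds 'analytics_id'
```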
Usage of JSF framework and EJB technology in the creation of corporate applications
Przemysław Dębski∗, Barbara Gocłowska†
Institute of Computer Science, Maria Curie Sklodowska University,
pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.
Abstract – In the following article we describe the architecture of an online store project, an application utilising Java Enterprise Edition. Our project is based on a customer expectations model. The choice of technology was motivated by its easy expandability with additional modules and by the fact that new functionality can be added without reorganising the existing code.
1 Introduction
Over the last years the Internet has become one of the most important media. It serves both static websites based on ordinary HTML and powerful applications. Creation of the latter may be a complex and time-consuming process. The implementation of IT-based systems across society has triggered a demand for a variety of tools for the convenient creation of complex corporate applications.
There is a number of ready-made solutions available on the IT market. For instance, Enterprise Resource Planning applications support the management of a number of tasks performed by a company or groups of co-operating companies through gathering data or through making it possible to perform operations on the gathered data. Currently, the highest demand for Internet applications is particularly notable in companies (ranging from sole traders to powerful corporations).
∗przemekdebski@gmail.com
†gocbar@gmail.com
There are groups of companies, such as online stores, with certain features in common. They have a similar physical structure of a company which can be easily described by means of appropriate tools. One of such tools is the Java Enterprise Edition platform and its component technologies.
1.1 Why Enterprise Java Bean?
Enterprise JavaBeans is a technology for creating components which perform certain tasks on behalf of clients. It is a server-based technology, which makes it possible to transfer a significant share of business logic from client applications to server components. Thanks to this it is easier to make changes to the application and to maintain it. EJB is a very complex technology comprising many modules responsible for, among other things, data security and storage, providing tools for creating large, solid and portable business applications such as online stores.
Apart from typical customer service, the technology streamlines the financial and staff management of a company. Moreover, it can remind the user about payment deadlines (Java Message Service, e-mail, a very convenient Timer interface) and makes it possible to use Web Services. Web Services can increase the speed and reduce the cost of integration with various internal applications and systems, and therefore they are particularly desirable in such applications [1].
1.2 Why Java Server Faces?
Java Server Faces is a framework which considerably simplifies and accelerates creation of large web applications. It has been created on the basis of Java Servlet Specification, which has entirely changed the attitude towards web application programming. In its structure JSF resembles window applications such as Swing. Its key features include: view creation based on the tree of ready-made user interface components, event-driven programming and data validation as well as conversion services. The programmer can focus on the implementation of an application’s business logic thanks to ready-made mechanisms generating responses, checking data integrity as well as intercepting events.
1.3 Facelets or how to simplify the usage of JSF?
JSF can be combined with different view technologies. The basic one is JavaServer Pages, which was created before JSF and for different purposes; consequently, JavaServer Faces cannot be used to its full potential with it. An alternative solution is offered by the newer Facelets framework, created specifically for JSF. Its main advantage is the ability to create views based on templates, which cannot be achieved with JSP.
1.4 Session Bean as JSF Servlet event listeners
Session beans may serve as event listeners in a JSF-based application. Event listener methods perform certain tasks, such as adding goods to the shopping cart, by using the EntityManager from Enterprise JavaBeans 3.0 to access the database, and return a text value indicating which view should be generated in response.
2 Application of Java Enterprise Edition platform to create utility programs
The Internet is widely used as a medium for data presentation by online stores, among others. Files are dynamically generated after receiving a query from the server, which is linked with the necessity of choosing the right technology. The choices developers make are often based on their own preferences and past experiences. Nevertheless, literature on the subject comprises comparative studies which may be helpful in making such choices. Yet, the choices are not always favourable against the results of such studies. For example, the results of study [2] on “the cost” of using a given web application within various technologies have shown that Java Servlets are one of the most expensive (compared to CGI/C++ among others). The authors of the study have named several reasons for this and one of the most significant is the servlet interpretation by Java Virtual Machine. On the other hand, the authors point to advantages of using servlets, especially with applications which are not highly complex. Moreover, if this is accompanied by the use of JSF and Facelets technologies, thanks to which the work of a team creating an application may be efficiently organised, the choice seems to be justified. It is also possible to look for convenient but faster solutions such as Portlets [3, 4] or even to use code generating frameworks such as SEAM or Ruby on Rails. Some [5] use the UML tool to accelerate the process of creating applications. However, the ability to use metadata as well as the CRUD mechanism of entity beans EJB 3.0 has eliminated such necessity.
3 Online store application
Designing a user friendly online store is the most important issue. As [6] indicates “online shopping Web Sites contain a lot of irrelevant information related to new types of products or reduced items”. Customers get confused by details and become impatient being unable to intuitively find relevant information. Another common problem is posed by bad navigation due to which it is easy to get lost in cyberspace [7].
Whereas in creating educational platforms [8] the main focus is on tailoring them to the customer’s cognitive requirements, in the implementation of service applications the main focus is on the customer’s expectations, which has been shown by an empirical study [9]. The resulting customer expectations model comprises the following points:
1. Product categorisation – choice of category.
2. In the case of a large number of categories, requiring more attention than just a glance, for instance in a drop down menu – grouping and sub-categorisation.
3. Exclusion of the possibility to get lost in cyberspace.
4. “Memorising” the customer’s preferences.
5. Avoiding information overload on the website.
6. Good timing for loading another navigated webpage (according to research [10], if it takes longer than a minute for a webpage to load, the application user will not browse it).
7. The website has to provide detailed product information “on request”.
We have tried to utilise the above-mentioned customer expectations model in the application described below. To deal with the issue of speed we have used the rendering of single or multiple UI components, which has reduced the number of loaded pages and complied with point 5 above.
3.1 Application’s functionality
The project implements an online photographic equipment store. The application is divided into customer and administrator modules. Each of them comprises a separate web interface created on the basis of the JSF framework and Facelets. The customer module allows customers to browse the catalogue, add products to the shopping cart, order selected items, express opinions on them, and browse their shopping history. The administrator interface makes it possible to create product categories and to add or edit items.
The application has been designed in a way which makes it possible to change the store’s profile. Each catalogued product comprises items belonging to a given category. Each category has a set of attributes which are typical of the items belonging to it. The administrator can define them, thanks to which the application is universal and may be used for cataloguing products from a variety of fields.
3.2 Design and Application
The application’s architecture is represented by the picture below. Its core comprises the EJB module which serves as a link between web clients and the database. It can be divided into two basic groups: session beans and entity beans. Web clients only gain access to session beans which make use of the entity beans and these, in turn, serve as a database abstraction.
3.2.1 Entity beans
Entity beans are used to translate the database tables into Java objects. The application comprises 12 entity classes.
AdminUser – store administrator.
Customer – store customer.
Category – item category (Item) or product category (Product).
Item – single item (e.g. a camera or lens); items comprising a catalogued product.
Attribute – an item’s attribute (e.g. pixel count for the camera sensor); each category has a given set of attributes.
AttributeValue – attribute value of a particular item; the value is of type java.lang.String, and its interpretation is expressed by AttributeValueType.
AttributeValueType – logical type of attribute value; four types have been defined: INTEGER, FLOAT, BOOLEAN, STRING.
Producer – producer of an item.
Product – catalogued product, it has its price as well as quantity in stock, comprises one main item (of the same category as the product) and optional additional items, e.g. a camera (main item) + lens (optional additional item).
LineItem – item in an order; it comprises the item, its price at the moment of purchase as well as quantity ordered.
Purchase – order; it comprises items (LineItem), information on the customer, payment method, price and shipping address.
Review – the customer’s opinion on a product.
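To make the entity layer more concrete, the following is a minimal sketch of how one of the classes listed above, Item, might be mapped with EJB 3.0 annotations. The field and relation names are illustrative assumptions and not a literal copy of the original source; only the relations mentioned in the descriptions above (category, producer, attribute values) are shown.

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Item {

    @Id
    @GeneratedValue
    private Integer id;

    private String name;

    // Category to which this item belongs (e.g. cameras, lenses).
    @ManyToOne
    private Category category;

    // Producer of the item.
    @ManyToOne
    private Producer producer;

    // Values of the attributes defined for the item's category.
    @OneToMany(mappedBy = "item")
    private List<AttributeValue> attributes;

    public Integer getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Category getCategory() { return category; }
    public void setCategory(Category category) { this.category = category; }
    public List<AttributeValue> getAttributes() { return attributes; }
}
```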
3.2.2 Session beans
Session beans are responsible for the application’s business logic and serve as an interface between the client application and the database. Certain operations multiplied in entity classes have been abstracted to a stateless session bean EntityManagerBean.
This bean comprises a set of typical methods used in applications utilising a database. Each of the methods performs a calculation whose result is independent of other methods’ results, which justifies the usage of the stateless session bean.
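The sketch below illustrates what such a generic stateless facade might look like. The method names (get, save, remove, findAll) and the body of each method are assumptions chosen to match the typical database operations described above, not the actual code of EntityManagerBean.

```java
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EntityManagerBean implements EntityManagerLocal {

    @PersistenceContext
    private EntityManager em;

    // Load an entity of the given type by its primary key.
    public <T> T get(Class<T> type, Object id) {
        return em.find(type, id);
    }

    // Persist a new entity or merge changes to an existing one.
    public <T> T save(T entity) {
        return em.merge(entity);
    }

    // Remove an entity from the database.
    public void remove(Object entity) {
        em.remove(em.merge(entity));
    }

    // Return all instances of the given entity type.
    @SuppressWarnings("unchecked")
    public <T> List<T> findAll(Class<T> type) {
        return em.createQuery("SELECT e FROM " + type.getSimpleName() + " e")
                 .getResultList();
    }
}
```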
The other session beans comprise methods dedicated to particular entity beans. The methods include, among others: searching according to specific attributes of a given entity, saving preceded by checking whether or not the uniqueness of a certain attribute will be impaired (e.g. producer’s name should be unique) etc. The method ItemManagerBean.search() may serve as a good example. Its task is to search all items whose attributes’ values have been defined in a template.
```java
// Excerpt from ItemManagerBean (requires java.util.List, java.util.Map and java.util.ArrayList).
public List<Item> search(Category category, Map<Attribute, Object> attrval) {
    List<Item> items = findByCategory(category);
    if (attrval == null || attrval.isEmpty())
        return items;
    List<Item> result = new ArrayList<Item>();
    for (Item i : items) {
        boolean match = true;
        for (Attribute a : attrval.keySet()) {
            boolean attrFound = false;
            Object o = attrval.get(a);
            for (AttributeValue avi : i.getAttributes()) {
                if (!avi.getAttribute().equals(a))
                    continue;
                attrFound = true;
                if (o instanceof String) {
                    // BOOLEAN and STRING attributes: compare for identical values.
                    String av = (String) o;
                    if (!av.trim().equalsIgnoreCase(avi.getValue()))
                        match = false;
                } else if (o instanceof StringRange) {
                    StringRange av = (StringRange) o;
                    if (avi.getAttribute().getValueType().getType().equals("INTEGER")) {
                        Integer avfrom = null, avto = null;
                        if (!av.getFrom().isEmpty())
                            avfrom = new Integer(av.getFrom());
                        if (!av.getTo().isEmpty())
                            avto = new Integer(av.getTo());
                        Integer avival = new Integer(avi.getValue());
                        if (avfrom != null && avfrom.compareTo(avival) > 0)
                            match = false;
                        if (avto != null && avto.compareTo(avival) < 0)
                            match = false;
                    } else {
                        // FLOAT-typed attributes: the range is compared analogously using Float values.
                        Float avfrom = null, avto = null;
                        if (!av.getFrom().isEmpty())
                            avfrom = new Float(av.getFrom());
                        if (!av.getTo().isEmpty())
                            avto = new Float(av.getTo());
                        Float avival = new Float(avi.getValue());
                        if (avfrom != null && avfrom.compareTo(avival) > 0)
                            match = false;
                        if (avto != null && avto.compareTo(avival) < 0)
                            match = false;
                    }
                }
            }
            // An item with no value set for a template attribute does not match.
            if (!attrFound)
                match = false;
            if (!match)
                break;
        }
        if (match)
            result.add(i);
    }
    return result;
}
```
The method takes arguments comprising a category and a map of attributes with values according to which items should be searched. Attribute values are stored in the form of a string, but each of them has one logical type out of the following four: INTEGER, FLOAT, BOOLEAN or STRING. In the case of the last two, the value passed in the map is of type String. In the case of INTEGER and FLOAT, the value is of type StringRange.
The reason for using it is that, for numeric attributes, it is possible to specify a range within which the attribute values must fall.
Initially, all items belonging to a given category are retrieved. If the map of template values is empty, the method returns the items found and terminates. Otherwise, each of the items is analysed by comparing its attributes with the template attributes. For attributes of type BOOLEAN and STRING, the analysis looks for identical values. For numeric types, the analysis checks whether the value is higher than or equal to the value of the from field of the StringRange object and lower than or equal to the value of its to field. If an item does not have a value set for an attribute from the template map, or its value does not match the value from the template, the analysis of the given item stops and proceeds to the next one. If the analysis of an item is successful, it is copied to a list which is returned as the search result.
CustomerManagerBean.add() is a method that persists a new entity bean (in this case Customer) in the database after first checking that the persisted entity will not violate a uniqueness constraint (such constraints are imposed on the email field of the Customer entity, among others).
The merge() operation of the EntityManager persistence service only checks the uniqueness of the primary key. When an attempt is made to persist an entity violating the constraints imposed on a different column, the database will return an error. The Java Persistence 1.0 specification does not define a separate exception for such a situation, so in case of failure the persistence service will not indicate its cause. The solution offered by the add() method is to perform a query checking whether a record with the given value in the column carrying the uniqueness constraint already exists in the database. If the search result is positive, the method throws a UniqueException; otherwise the entity is persisted.
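A minimal sketch of such an add() method is given below. The exact JPQL query, the getEmail() accessor and the UniqueException constructor are assumptions based on the description above, not the original source.

```java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical version of CustomerManagerBean showing only the add() method;
// the local business interface is omitted for brevity.
@Stateless
public class CustomerManagerBean {

    @PersistenceContext
    private EntityManager em;

    public void add(Customer customer) throws UniqueException {
        // Check explicitly whether the unique column (email) is already taken,
        // because the persistence provider only reports primary-key collisions reliably.
        Long count = (Long) em.createQuery(
                "SELECT COUNT(c) FROM Customer c WHERE c.email = :email")
                .setParameter("email", customer.getEmail())
                .getSingleResult();
        if (count > 0) {
            throw new UniqueException("Customer with this e-mail already exists");
        }
        em.persist(customer);
    }
}
```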
3.2.3 Client applications
The client modules are in the form of web applications. We have created two separate ones: one for customers and the other for store administrators. They have been built on the basis of JavaServer Faces technology combined with the Facelets framework and may be divided into management and presentation layers.
The management layer comprises managed beans, also referred to as controllers (figure 1). Particular controllers are initialized declaratively by means of appropriate elements of the XML deployment descriptor faces-config.xml.
Controllers implement all operations which the application offers its users. Access to those operations is given by means of links and forms on the viewed website. The controller object uses session beans injected by @EJB annotations, thus gaining access to data gathered in the database.
Each controller is an extension of the abstract class AbstractController. This class comprises a reference to the session component interface EntityManagerLocal used in all the controllers. Another feature of the class is view, a String object storing information on the current view. AbstractController has one method, getEntityFromRequest(), which retrieves an entity object according to an HTTP request parameter (the parameter value should be the primary key of the searched entity).
```java
public abstract class AbstractController {

    @EJB
    protected EntityManagerLocal em;

    private String view;

    public String getView() { return view; }

    public void setView(String view) { this.view = view; }

    // Returns the entity whose primary key was passed in the given HTTP request parameter.
    public <T> T getEntityFromRequest(Class<T> objtype, String param) {
        String p = FacesContext.getCurrentInstance()
                .getExternalContext().getRequestParameterMap().get(param);
        T obj = null;
        if (p != null) {
            int id = Integer.parseInt(p);
            obj = em.get(objtype, id);
        }
        return obj;
    }
}
```
ProducerController from the store administrator application may serve as an example of a controller. This class comprises a number of methods utilised in the InvokeApplication phase of the JSF request processing cycle. Their function is to initialize parameter state so that the values may be displayed or set by means of forms on the viewed webpages. For instance, the createSetup() method has to perform the following tasks:
1. it creates a new object of type Producer whose attributes will be set by means of a form on the viewed webpage,
2. it sets the value of view parameter to CREATE, which means displaying the webpage with the form in order to add information concerning the producer,
3. it returns a String value: navigational information that determines which page is returned in response.
After the form is filled in, sent to the server, and passes all validation and conversion phases, the method save() is executed in the InvokeApplication phase. The method does the following:
1. by invoking the method save() of an entity bean, it tries to save a new record, reflecting the producer entity, in the database,
2. in case of failure it generates a message to the user and returns an empty string, informing JSF that the same view should be displayed again,
3. if the persisting operation succeeds, the parameter view will assume the value LIST – i.e. displaying the list of all the producers – and the navigation value will be returned.
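A minimal sketch of a controller following the steps above is given below. The session bean interface (ProducerManagerLocal), its save() method, and the navigation outcome strings are assumptions for illustration; the actual ProducerController may differ.

```java
import javax.ejb.EJB;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;

public class ProducerController extends AbstractController {

    @EJB
    private ProducerManagerLocal producerManager;   // hypothetical session bean interface

    private Producer producer;

    public Producer getProducer() { return producer; }

    // Prepares an empty Producer and switches the view to the creation form.
    public String createSetup() {
        producer = new Producer();
        setView("CREATE");
        return "producers";    // assumed navigation outcome pointing at producers.xhtml
    }

    // Invoked in the InvokeApplication phase after validation and conversion.
    public String save() {
        try {
            producerManager.save(producer);
        } catch (UniqueException e) {
            FacesContext.getCurrentInstance().addMessage(null,
                    new FacesMessage("Producer name must be unique"));
            return "";         // empty outcome: redisplay the same view
        }
        setView("LIST");
        return "producers";
    }
}
```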
The Facelets framework has been used to create the presentation layer because it allows views to be created conveniently on the basis of templates. Each of the component files that together make up the final appearance of a webpage is written in XHTML. Each page makes use of a template defined in the layout.xhtml file, which is presented in the code snippet below (certain elements responsible for the appearance of the webpage have been removed for the sake of clarity):
```xml
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
  <body>
    <!-- elements defining the layout and appearance of the page -->
    <div id="center_content">
      <ui:insert name="mainbody"/>
    </div>
  </body>
</html>
```
The document describes the general appearance and the elements common to each page (menu, footer), and separates a variable section with the help of `<ui:insert name="mainbody"/>` whose content varies from page to page. The documents describing individual pages define the appearance and content of this section only. Thanks to the templating mechanism it is much easier to maintain the code and to make changes. If we wanted to, for instance, change the position of certain elements on the webpage, we would only have to edit the template file and the changes would be automatically introduced in all the subpages utilising it. We would be deprived of such a possibility if we had used the standard way of creating views in JSF, namely JSP.
The content itself varies on the majority of pages and it depends on the user’s action. In such cases the main document of a webpage – depending on the view parameter of the suitable controller – imports the document defining the content of the webpage. An extract from the file producers.xhtml from the administrator application may serve as an example:
```xml
<ui:composition template="/WEB-INF/templates/layout.xhtml">
  <ui:define name="mainbody">
    <h1>Katalog :: Producenci</h1>
    <c:if test="#{producerController.view == 'LIST'}">
      <ui:include src="/WEB-INF/templates/producers_list.xhtml"/>
    </c:if>
    <c:if test="#{producerController.view == 'CREATE' || producerController.view == 'EDIT'}">
      <ui:include src="/WEB-INF/templates/producers_edit.xhtml"/>
    </c:if>
  </ui:define>
</ui:composition>
```
The element `<ui:composition>` points to the template used by the webpage. `<ui:define>` points to the variable section of the template; its content replaces that section. In this case the content is not set directly in the `<ui:define>` element but rather divided into two separate files. If the requested view is the producer list (LIST), the file producers_list.xhtml is imported; if the administrator wants to create a new producer (CREATE) or make changes to an existing one (EDIT), the file producers_edit.xhtml is imported.
4 Summary
The application has been designed and implemented in a way which enables it to be used as a template for the convenient creation of online stores. We have achieved this, among others, thanks to utilising general entity classes which we have mapped onto the tables of a relational database. Indispensable elements of successful online retailing, such as a shopping cart service, credit card validation and many others, are also present. However, the most important element is strict adherence to the “desirable expectations” of the store customer model. Data protection is also a notable aspect. We have not used typical technologies such as the Java Authentication and Authorization Service [11], but focused on the methods offered by the PhaseListener interface, beforePhase() and afterPhase(), which, combined with the roles assigned to particular groups of application users, redirect unlogged and unregistered users to the webpages they are permitted to view. The architecture of the application allows for easy extension with an accounting module or an online store staff management module, as well as for creating further modules such as a servicing module.
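To illustrate the access-control idea mentioned above, the following is a minimal sketch of a JSF PhaseListener that redirects users who are not logged in. The session attribute name, the protected path prefix and the navigation outcome are assumptions; the actual implementation in the application, which also takes user roles into account, may differ. Such a listener would be registered in faces-config.xml via a `<lifecycle><phase-listener>` element.

```java
import javax.faces.application.NavigationHandler;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;

public class AuthorizationListener implements PhaseListener {

    public PhaseId getPhaseId() {
        // Run right after the view has been restored, before any further processing.
        return PhaseId.RESTORE_VIEW;
    }

    public void beforePhase(PhaseEvent event) {
        // Nothing to do before the phase in this sketch.
    }

    public void afterPhase(PhaseEvent event) {
        FacesContext context = event.getFacesContext();
        String viewId = context.getViewRoot().getViewId();
        boolean loggedIn = context.getExternalContext()
                .getSessionMap().containsKey("currentUser");   // assumed session key

        // Redirect unlogged users who request a protected page.
        if (!loggedIn && viewId.startsWith("/admin/")) {
            NavigationHandler nav = context.getApplication().getNavigationHandler();
            nav.handleNavigation(context, null, "login");
        }
    }
}
```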
5 Future work
Further work on the application will concern utilising a rule system for giving discounts to customers, tracking their interests, and including external data when customers are interested in equipment which is out of stock.
We are also considering adding customer support while shopping. For instance, it could include suggestions to increase the practical value of a product purchased.
Due to our concern about the effectiveness of online retailing, we are of the opinion that it is necessary to conduct a survey on customer “satisfaction” based on the model described in section 3.
References
Writing clean and safe UDFs in Delphi
Gregory H. Deatz
Introduction
This article makes some assumptions about the reader's knowledge of Delphi. The reader should understand Delphi well enough to know how to create a DLL--We will demonstrate how to create UDFs, not DLLs. That being said, much of the article depends heavily on the study of FreeUDFLib, a free UDF library (hence the name) distributed with this article, so all source code for a fully functional UDF library is provided!
We will begin by demonstrating a very simple UDF (Modulo), and from there we will discuss returning values by reference. Returning values by reference (strings and dates must be returned by reference) leads into a discussion of issues surrounding dynamic allocation of memory and appropriate cleanup.
This presentation will provide the developer with all necessary tools for making good decisions about the design and implementation of solid UDF libraries for InterBase.
Writing a UDF
What is a UDF?
Quite simply put, a UDF in InterBase is a function in a DLL! This simple use of shared libraries provides the developer with virtually unlimited amounts of power and flexibility. Virtually any function that can be exposed through a DLL can be used by InterBase. This comment, however, should be taken with a grain of salt--The intent of a UDF is to perform some small operation that is not available in SQL or InterBase's DSQL language.
An example of a UDF is
Function: Integer **Modulo** (Integer Numerator, Integer Denominator)
Divide Numerator by Denominator and return the remainder. This function is essential in many routines, but it is not available in InterBase's DSQL language.
Writing the first UDF
Open the `FreeUDFLib/FreeUDFLib.dpr` project distributed with this article, and take a look at the `FreeUDFLib/MathFncs.pas` unit. Among a few other declarations, the reader will find a declaration reading
```pascal
function Modulo(var Numerator, Denominator: Integer): Integer; cdecl; export;
```
The first thing to note is that *all* arguments in an InterBase UDF are passed by reference. The second thing to note is that this function uses the cdecl calling convention. A discussion of calling conventions is beyond the scope of this article, but the reader can see some documentation by looking up cdecl in Delphi's on-line help. Beyond that, the reader should take it on a certain degree of faith that this is how InterBase wants its UDFs written. The keyword `export` tells Delphi that this function will be exported.
Further examination of the actual project source (click "View, Project Source") will show code similar to this:
```pascal
exports
  Modulo;
```
The exports clause in the project source tells Delphi that this function is exported.
Now, just for fun, we will list the code for `Modulo`:
```pascal
function Modulo(var Numerator, Denominator: Integer): Integer;
begin
  {$ifdef FULDebug}
  Writeln('Modulo() - Enter');
  {$endif}
  if (Denominator = 0) then
    result := 0
  else
    result := Numerator mod Denominator;
  {$ifdef FULDebug}
  Writeln('Modulo() - Exit');
  {$endif}
end;
```
Inspection of the code will reveal that `Modulo` returns 0 when `Denominator` is 0 (this is better than crashing), and in all other situations it simply returns `Numerator mod Denominator`.
**Using the UDF in InterBase**
A precompiled version of FreeUDFLib is distributed with the article (`FreeUDFLib/FreeUDFLib.dll`). Place this DLL in InterBase's "bin" directory, which is usually
```
c:\Program Files\InterBase Corp\InterBase\Bin
```
Now, run the `WISQL32.exe` utility. It is located in InterBase's "bin" directory. Create a test database ("File", "Create database"), and once attached to the new test database, type the following DDL command:
```sql
declare external function f_Modulo
integer, integer
returns
integer by value
entry_point "Modulo" module_name "FreeUDFLib.dll"
```
For a full description of the syntax for declaring UDFs in InterBase, see the Programmer's Guide.
Now, run the above statement and commit. To run this UDF from InterBase, type the following statement:
```sql
select f_Modulo(4, 3) from rdb$database
```
*RDB$DATABASE* is a one-row table in all InterBase databases, so after running the above command, the following text should be in the results window:
```sql
select f_modulo(4, 3)
from rdb$database
F_MODULO
=========
1
```
Writing UDFs is easy!
**Returning strings**
We've successfully written a Delphi DLL that can be used as a UDF library in InterBase. Wouldn't a great function be
**Function:** PChar *Left* (*PChar sz, Integer Len*)
Return the *Len* leftmost characters of *sz.*
The question is, how do we return a string? Already implied by the above declaration, InterBase doesn't respect Delphi's *String* datatype, so we are forced to use *PChar.*
*PChar*'s versus *String*'s is not a big deal, except that Delphi does not automatically clean up *PChar*’s, and the memory to store a string must be, at some point, explicitly allocated by the developer. Our first UDF returned a scalar value on the function's calling stack, which meant that we did not have to worry about cleanup issues. With any form of string, cleanup issues *must* be addressed.
Here are a few possible solutions to the "returning strings" conundrum. We will explain why certain solutions are desired, and why others will simply not work.
**Solution #1: Global static memory**
There is an obvious way to accomplish returning strings: Maintain a global *PChar* that has some amount of space allocated to it. Stuff this string with the desired return value, and return the global *PChar.* The only problem with this is that InterBase is multi-threaded, and if a Delphi DLL does this, InterBase will surely crash in a multi-user environment.
It seems that solution #1 is no solution at all...
Solution #2: Thread-local static memory
Instead of maintaining a single global variable, maintain a thread-local variable.
Whenever a UDF wishes to return a string, it simply copies the string into the string referenced by the thread-local PChar, and returns it.
This solution certainly sounds elegant, and it certainly lends itself to being clean, but... How do we manage thread-local variables in Delphi? Also, this solution certainly sounds like it will work, but will it? InterBase is a multi-threaded environment, surely, but how does it handle the scheduling of its calls to UDFs? (see the section "A discussion of Solution #2" below)
Solution #3: Returning dynamically allocated strings
Another obvious solution: Every time a function returning a string is ready to return the string, allocate a bit of memory for the string and return a PChar to it. InterBase won't crash--at least not right away. It should be clear that this presents a nasty memory leak. The Delphi function will keep allocating memory, but nobody is cleaning it up.
In InterBase 5.0, a new keyword called free_it was introduced. It is used like this:
```
declare external function f_Left
cstring(254), integer
returns
cstring(254) free_it
entry_point "Left" module_name "FreeUDFLib.dll";
```
If the developer chooses to return strings in the "memory-leaky" fashion, then the UDF should be declared with the free_it keyword, just like above. This allows the developer of the UDF to be sloppy, and it forces InterBase to do the housekeeping.
This is a reasonable solution if the developer of the UDF wants to be sloppy; however, it is the author's contention that all functions, especially third-party functions should do their own housekeeping, or they should do nothing to "dirty the house" to begin with. It is considered bad form to write a function that is known to be leaky, only to "pass the buck" to the calling application.
Another problem with the free_it solution is that it only works with InterBase 5 and up. If the developer intends to write functions for use with InterBase 4.x or lower, this solution simply won't work (see the section "A discussion of Solution #3" below).
Solution #4: Making InterBase do the work
An under documented feature of InterBase allows a UDF declaration to specify a particular parameter as the assumed return value of the function. By implementing a UDF in this way, the UDF developer forces InterBase to pass it valid spaceholders for strings. In other words, the UDF developer won't have to worry about dynamic memory issues because this problem is dealt with entirely in the InterBase engine.
As was indicated before, third-party routines should either do their own housekeeping, or they should do nothing to "dirty the house" to begin with. This method is a simple and elegant way to avoid messing up InterBase's house (see the section "A discussion of Solution #4" below).
Notes on Delphi's memory manager
Delphi is a derivative product of the old days, when Borland Pascal was called Borland Pascal, and Borland was Borland, and boys were boys, and men were men, and multi-threading was just plain unavailable to the DOS world. When Delphi moved into the Win32 world with version 2.0, Inprise discovered that the memory manager wasn't thread-safe.
To solve the thread-safety concerns, they wrapped their memory management routines in critical sections, thus making the memory manager thread-safe. Critical sections are beyond the scope of this article. Suffice it to say that they ensure orderly access to a shared resource.
The odd trick Inprise played, though, is that the critical sections are used only if a not-so-well-known system variable, IsMultiThread, is set to True. (IsMultiThread is defined in the unit `System.pas`, which is implicitly used by all Delphi units.)
The basic gist of this story is as follows: Delphi is thread-safe, but only when the developer tells it to be. Whenever an application or library knows that it may be dealing with multiple threads it should guarantee that IsMultiThread is set to True; otherwise, the application or library is not thread safe. (Important note: IsMultiThread is set implicitly if the developer uses a TThread object.)
It cannot be stressed enough that IsMultiThread must be set to True in multi-threading environments.
A discussion of Solution #2
In our introduction to this solution, we asked the question, "Will thread-locals work?" The answer is a resounding yes! InterBase is a multi-threaded architecture, and any number of different queries can be running in a given thread. InterBase is guaranteed to execute a UDF and process its results within a single atomic action, thus thread-locals are perfectly safe for returning strings. (For a more in-depth conversation, visit IB's web site, or talk with the author after the lecture).
Thread-local variables are extremely easy to work with, and Delphi makes it even easier through the use of the threadvar construct. Let's examine how to manage thread-local variables.
Thread-local variables the Delphi way
The simplest way to deal with thread-local variables is through the use of Delphi's keyword threadvar. The developer acts as if a global variable is being declared, but instead of using the var keyword, the keyword threadvar is used. For example, the following code snippet declares a thread-local variable called szMyString:
```pascal
threadvar
  szMyString: PChar;
```
The keyword `threadvar` can only be used at the unit level. In other words, a function cannot have local variables declared as thread-local. The reasoning behind this is clear: Local variables are intrinsically local to the thread in which they were called. The only time a "thread-local" variable needs to be used is when a sharable resource is being discussed, and sharable resources are declared outside the scope of procedures and functions.
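To make the idea concrete, here is a minimal sketch (not taken from FreeUDFLib itself) of how a string-returning UDF such as Left might use a unit-level threadvar buffer. The parameter list, buffer size and name are assumptions for illustration; the corresponding InterBase declaration would use cstring parameters and would not need free_it. The pitfall with threadvar in DLLs discussed later in this article still applies.

```pascal
threadvar
  szLeftBuf: array[0..254] of Char;   (* one result buffer per thread *)

function Left(sz: PChar; var Len: Integer): PChar; cdecl; export;
var
  i: Integer;
begin
  (* copy at most Len characters of sz into this thread's buffer *)
  i := 0;
  while (i < Len) and (sz[i] <> #0) and (i < 254) do begin
    szLeftBuf[i] := sz[i];
    Inc(i);
  end;
  szLeftBuf[i] := #0;
  result := szLeftBuf;
end;
```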
**Thread local variables the API way**
In the section on threading (see "Thread-level initialization and finalization" below), we point out some problems with using `threadvar`, so it is important to note how Windows handles thread-local access.
There are four routines involved in managing thread-local variables:
**Function: DWORD TlsAlloc**
Allocate a thread-local index, this index is used to access a thread-local variable. Allocating an index is basically equivalent to declaring a `threadvar`.
It returns $FFFFFFFF when an error occurs; otherwise, it returns a valid thread-local index.
**Function: BOOL TlsFree (DWORD dwTlsIndex)**
Free up a thread-local index. This is used when the thread-local variable referenced by the thread-local index is no longer needed.
It returns `True` when a thread-local index is successfully freed.
**Function: Pointer TlsGetValue(DWORD dwTlsIndex)**
Return the thread-local 32-bit value indexed by `dwTlsIndex`. The value `dwTlsIndex` must have been previously allocated using `TlsAlloc`.
**Function: Bool TlsSetValue(DWORD dwTlsIndex, Pointer lpvTlsValue)**
Set the 32-bit value indicated by this thread-local index to the value specified in `lpvTlsValue`.
The below code snippet shows how these functions work together:
```pascal
var
  hTLSValue: DWORD;
...
hTLSValue := TlsAlloc;
if (hTLSValue = $FFFFFFFF) then
  (* raise an exception or something *)
...
TlsSetValue(hTLSValue, Pointer(100));
...
ShowMessage(IntToStr(Integer(TlsGetValue(hTLSValue))));
...
TlsFree(hTLSValue);
```
In the next section ("Thread-level initialization and finalization"), we will show how FreeUDFLib uses the Windows API to manage its thread-local variables, so we will wait until then to illustrate any examples.
**Thread-level initialization and finalization**
In general, DLLs do not create threads, and in the case of building UDFs, this is no exception; however, InterBase *does* create threads, and it is essential that the DLL knows when a thread is created and when a thread is closed.
Initial inspection of Delphi indicates that the initialization and finalization sections of a Delphi unit are prime candidates for thread-level initialization and finalization. Further inspection reveals that these sections are only fired when the library is loaded and when it is freed, respectively. Good try, but not good enough.
Delphi defines a variable `DllProc`. `DllProc` is a procedure pointer, and by assigning a procedure to `DllProc`, the DLL can perform actions whenever an attached application creates or destroys a thread.
A DLL entry-point procedure is declared like this:
```pascal
procedure LibEntry(Reason: Integer);
```
The actual name of the procedure is irrelevant. It is merely important to note that a library entry procedure gets a single argument, `Reason`, which indicates why the procedure is being called.
In Delphi, there are three possible `Reason`'s for a `DllProc` to be called:
1. **Reason** = `DLL_THREAD_ATTACH`. Whenever a thread is created in an attached application, `DllProc` will be called with this reason. This gives the DLL an opportunity to initialize any thread-local variables.
2. **Reason** = `DLL_THREAD_DETACH`. Whenever a thread is being closed in an attached application, `DllProc` will be called with this reason. This gives the DLL an opportunity to free up any resources used by thread-local variables. Take care! Suppose that an application starts some threads; it then loads the DLL. The DLL is never explicitly told that those threads are executing (`DllProc` will never be called with the `DLL_THREAD_ATTACH` argument); however, if those threads exit gracefully, the DLL will be informed that they are closing. This means that the DLL is potentially responsible for cleaning up uninitialized data.
3. **Reason** = DLL_PROCESS_DETACH. Whenever the calling application unloads a library, DllProc will be called with this reason. This is exactly equivalent to the finalization section of a Delphi unit, so it is irrelevant to our discussions.
Let's study some examples:
Open up the project `Playing with threads/Dll1.dpr`, and take a look at `Playing with threads/Dll1Code.pas`. Towards the bottom of the file is the following code:
```pascal
procedure DllEntry(Reason: Integer);
begin
  case Reason of
    DLL_THREAD_ATTACH: begin
      tlObject := TTestObject.Create;
      DllShowMessage(tlObject.ObjectName);
    end;
    DLL_THREAD_DETACH: begin
      (* Uninitialized data is guaranteed to be nil. *)
      if (tlObject = nil) then
        DllShowMessage('Object is nil.')
      else
        (* and we've guaranteed that initialized data has an object *)
        DllShowMessage(tlObject.ObjectName);
      tlObject.Free;
    end;
  end;
end;

initialization
  IsMultiThread := True;
  DllProc := @DllEntry;
  tlObject := TTestObject.Create;

finalization
  (* Uninitialized data is guaranteed to be nil. *)
  if (tlObject = nil) then
    DllShowMessage('Object is nil.')
  else
    (* and we've guaranteed initialized data has an object *)
    DllShowMessage(tlObject.ObjectName);
  tlObject.Free;
end.
```
As this code snippet shows, Dll1 can easily respond to the creation of threads in a calling application.
To further the reader's understanding of these entry point functions, and to demonstrate a problem with Delphi's `threadvar` construct, the reader should study `Playing with threads/Dll1.dpr`, `Playing with threads/Dll2.dpr` and `Playing with threads/Example.dpr`, all distributed with this article. The two DLL projects are identical, with the exception that `Playing with threads/Dll1.dpr` demonstrates the use of
threadvar, and `Playing with threads/Dll2.dpr` demonstrates the use of the direct Windows API thread-local storage system calls.
The application `Playing with threads/Example.dpr` allows the user to load either `Playing with threads/Dll1.dll` or `Playing with threads/Dll2.dll`, and fidget with threads.
Try this example:
1. Run `Playing with threads/Example.exe`.
2. Click on the "Load library" function. Since, by default, the first radio button ("Delphi's threadvar") is checked, `Playing with threads/Dll1.dll` will load.
3. Click "Create new thread".
4. Click "Create new thread" again.
5. Click on the first thread listed, and click "Close selected thread".
6. Hmmm... Access violation?
7. Exit the application, and reload it. Go through the exercise all over again, but this time ensure that the second radio button is checked (so that `Playing with threads/Dll2.dll` will be loaded). Access violations using the Windows API calls?
This example illustrates two things. First of all, it demonstrates how a DLL can respond automatically to the creation and destruction of threads. Second, it shows that the use of threadvar isn't entirely safe, but that directly using Windows API calls resolves the problem.
Before moving much further, the reader should take care that the concepts illustrated in each of these projects (`Playing with threads/Dll1.dpr`, `Playing with threads/Dll2.dpr`, and `Playing with threads/Example.dpr`) are well understood.
A discussion of Solution #3
As was mentioned before, InterBase 5.0 introduces the free_it keyword, thus allowing the UDF developer to use dynamically allocated memory for the return of strings and dates.
Aside from the author's contention that this is sloppy, and that it won't work with versions of InterBase previous to 5.0, this is a fully supported and "sponsored" technique for returning strings to InterBase (see the section "Solution #3: Returning dynamically allocated strings" above). So, sloppy or not, we must "face the music", and explore returning dynamically allocated strings to InterBase.
Memory allocation issues
Oddly enough, the Windows version of InterBase is compiled using Microsoft's C-compiler (MSVC). Without getting into a discussion as to why they chose this compiler, suffice it to say that InterBase expects dynamically allocated memory to be allocated using MSVC's malloc routine.
MSVC's `malloc` routine handles memory allocation in a manner "all its own". That is, we can't rightly infer how it manages memory, but it certainly does not allocate memory in the same fashion as Delphi. So, a Delphi function that tries to dynamically allocate memory using `GetMem` or the Windows system call `GlobalAlloc` will most certainly cause problems with InterBase if used in conjunction with the `free_it` keyword.
This problem is resolved by making use of the fact that MSVC applications must be distributed with the run-time MSVC library, `msvcrt.dll`. If InterBase or the InterBase client is installed on a system, then this DLL is installed on your system as well.
By making the following declaration in your Delphi UDF library,
```
function malloc(Size: Integer): Pointer; cdecl; external 'msvcrt.dll';
```
you will allow Delphi to make use of MSVC's `malloc` routine, so that the `free_it` keyword can be used.
**Working through an example**
Take a look at the project `UDF Test 1/UDFTest1.dpr`, and open `Funcs.pas`.
`Funcs.pas` declares the `malloc` routine, and it implements a very silly function called `CopyString`. Let's take a look at `CopyString`:
```
function CopyString(sz: PChar): PChar; cdecl; export;
var
szLen: Integer;
begin
szLen := 0;
while (sz[szLen] <> #0) do Inc(szLen);
Inc(szLen);
result := malloc(szLen);
Move(sz^, result^, szLen);
end;
```
Quite simple, `CopyString` allocates enough space for the passed string (sz) plus the null terminator, and it copies the string.
The declaration for `CopyString` is as follows:
```
declare external function CopyString
cstring(64)
returns cstring(64) free_it
entry_point 'CopyString' module_name 'UDFTest1.dll';
```
Open InterBase's WISQL tool, and connect to `UDF1.gdb`. After connecting, do the following:
1. Execute the above "declare external function", and commit.
2. Execute the following query:
```
select CopyString('Hello world') from rdb$database
```
A discussion of Solution #4
It is a bit frustrating to think that we can write a UDF library that doesn't support as many versions of InterBase as desired. And, if you agree with the authors that the "sloppy" approach just won't do, then this section might be for you.
As was briefly alluded to above, a UDF should do its own housekeeping, and when possible, it should probably also try to avoid "dirtying the house" at all. InterBase's external function declaration syntax includes the ability for InterBase to pass the result buffer to the UDF, so that UDF can take the "high road", and be a gracious guest, providing information only, but not cluttering up InterBase's house at all.
In addition, this method for declaring functions makes it possible to avoid using the `free_it` keyword, so that UDF libraries built in this way can be used in versions of InterBase previous to version 5.0 (see the section "Solution #4: Making InterBase do the work" above).
How is this done? Clearly, this is best illustrated through an example. Open the project `UDF Test 2/UDFTest2.dpr`, and open `Funcs.pas`.
In `Funcs.pas` we implement the following function:
```pascal
function CopyString(sz, szRes: PChar): PChar; cdecl; export;
begin
result := szRes;
while (sz^ <> #0) do begin
szRes^ := sz^;
Inc(sz); Inc(szRes);
end;
szRes^ := sz^;
end;
```
Now, in InterBase, we declare it as follows:
```sql
declare external function CopyString
cstring(64),
cstring(64)
returns parameter 2
entry_point 'CopyString' module_name 'UDFTest2.dll'
```
And finally, in `UDF2.gdb`, we can test this example by declaring the external function and using it in a silly select statement:
```sql
select CopyString('Hello World') from rdb$database
```
Conclusions
It is now time to turn our attention to a working example of a UDF library--FreeUDFLib. FreeUDFLib implements its UDFs using the thread-local method described above. With the change of a simple compiler "define", FreeUDFLib can also behave like `free_it` wants it to behave.
The method mentioned above, which allows the UDF developer to avoid all issues of memory allocation and deallocation is also quite elegant.
In conclusion, the `free_it` keyword makes it possible to create fully supportable UDFs that will run cleanly (if declared in InterBase correctly) and safely in InterBase's multi-threaded environment.
FreeUDFLib demonstrates the use of MSVC's `malloc` to dynamically allocate `free_it`able memory, and along the way it provides some very convenient functions.
Once again, (and for the last time) the author contends that the `free_it` approach is sloppy, and given the two proposed solutions of either returning thread-local memory or pushing data into passed parameters, it should be unnecessary.
About the author
Gregory Deatz is a senior programmer/analyst at Hoagland, Longo, Moran, Dunst & Doukas, a law firm in New Brunswick, NJ. He has been working with Delphi and InterBase for approximately two and a half years and has been developing under the Windows API for approximately five years. His current focus is in legal billing and case management applications. He is the author of FreeUDFLib, a free UDF library for InterBase written entirely in Delphi, and FreeIBComponents, a set of native InterBase components for use with Delphi 3.0. Both of these tools can be found at http://www.interbase.com/download. He can be reached via e-mail at gdeatz@hlmdd.com, by voice at (732) 545-4717, or by fax at (732) 545-4579.
A Collaborative Approach to Teaching Software Architecture
Arie van Deursen, Maurício Aniche, Joop Aué, Rogier Slag, Michael de Jong, Alex Nederlof, Eric Bouwers
Delft University of Technology
ABSTRACT
Teaching software architecture is hard. The topic is abstract and is best understood by experiencing it, which requires proper scale to fully grasp its complexity. Furthermore, students need to practice both technical and social skills to become good software architects. To overcome these teaching challenges, we developed the Collaborative Software Architecture Course. In this course, participants work together to study and document a large, open source software system of their own choice. In the process, all communication is transparent in order to foster an open learning environment, and the end-result is published as an online book to benefit the larger open source community.
We have taught this course during the past four years to classes of 50-100 students each. Our experience suggests that: (1) open source systems can be successfully used to let students gain experience with key software architecture concepts, (2) students are capable of making code contributions to the open source projects, (3) integrators (architects) from open source systems are willing to interact with students about their contributions, (4) working together on a joint book helps teams to look beyond their own work, and study the architectural descriptions produced by the other teams.
CCS Concepts
• Applied computing → Collaborative learning;
Keywords
software architecture, software engineering education, open learning, collaborative book writing.
1. INTRODUCTION
In computer science curricula, software architecture is a key component of a student’s software engineering education. Software architecture refers to the high level structures of a software system, the discipline of creating such structures, and the documentation of these structures [18]. Documenting software architecture facilitates communication between stakeholders, captures early decisions about the high-level design, and allows reuse of design components between projects [18, 8, 2].
To support teaching software architecture, lecturers can choose from a range of text books [2, 18, 4, 21]. Nevertheless, a course on software architecture has to overcome a number of challenges:
C1 The theory of software architecture (design principles, tradeoffs, architectural patterns, product lines, etc) is often very abstract and therefore hard for a student to master.
C2 The problems of software architecture are only visible at scale, and disappear once small example systems are used.
C3 A software architect needs a combination of technical and social skills: software architecture is about communication between stakeholders, and the architect needs to be able to achieve and explain consensus.
To address these challenges, we have designed a graduate course on software architecture based on the following principles:
P1 Embrace open source: Students pick an open source system of choice and study its architecture. Students use it to learn how to apply architectural theories to realistic systems (C1, C2).
P2 Embrace collaboration: Students work in teams of four to study one system in depth (C3).
P3 Embrace open learning: Teams share all of their work with other students. Furthermore, students share their main result with the open source community: their architectural description is published as a chapter in an online book resulting from the course (C3).
P4 Interact with the architects: Students are required to offer contributions (in the form of GitHub pull requests) to the open source projects, which will expose them to feedback from actual integrators and architects of the open source projects (C1, C2, C3).
P5 Combine breadth and depth: Students dive deeply in the system they analyze themselves, and learn broadly from the analyses conducted and presented by other teams (C1, C3).
In this paper, we describe the resulting Collaborative Software Architecture Course (CSAC) which has been taught in the past four years (2013-2016) to classes of 50-100 students each.¹ We start the paper by outlining the course objectives and its contents (Section 2). We then present the results of teaching this course, covering course outcomes and student evaluations (Section 3). Furthermore,
¹See the resulting book Delft Students on Software Architecture [23], the https://github.com/delftswa2016 GitHub organization, and our 2013 blogpost [22].
we discuss possibilities for transferring the underlying course ideas to other disciplines, as well as additional ideas for further strengthening the course (Section 4). We conclude by summarizing related work and the key contributions of this paper.
2. COURSE DESIGN
2.1 Educational Objectives
The Collaborative Software Architecture Course aims at offering students a chance to learn and experience the concepts of designing, modeling, analyzing and evaluating software architectures. In terms of Bloom’s taxonomy [3], the following educational objectives can be distinguished.
On the knowledge level, the course aims at enabling students to familiarize themselves with key concepts in software architecture, such as architectural views, perspectives, styles, design principles, software product lines, technical debt, and Conway’s law.
On the application level, the course aims at enabling students to apply these theories to concrete, existing systems that are maintained by a team of people and used around the world.
On the evaluation level, the course aims at enabling students to assess and discuss the effect of architectural decisions made by (open source) projects. Furthermore, the course aims at enabling students to assess and discuss the relevance of certain architectural theories for a given system.
Constraints. The course takes 10 weeks and is worth 5 credit points (ECTS), corresponding to 5 * 28 = 140 hours of work per student. Each week, there are two lectures of 90 minutes each. The course is a graduate-level master course for students who have completed a bachelor in computer science or a related field.
2.2 Method
In order to achieve its educational objectives, CSAC adopts two central ideas. The first is to let students “adopt” an open source system. They use this system to apply and evaluate architectural theories, thus bridging the knowledge, application, and evaluation levels. To deal with the complexity of realistic systems, students work in teams of four.
The second key idea of the course is to open up all communication, so that students can learn as much as possible from each other as well as from the broader open source community. Thus, throughout the course, groups can see the work of other groups, and are encouraged to help each other. Furthermore, results from the course are shared publicly as much as possible, allowing for feedback from and interaction with the broader open source community.
On a high level, each week consists of a theoretical lecture that students apply to their own systems in the next week. In this way, each week the students describe certain aspects of the architecture of their system under study, which eventually forms the input for their book chapter.
The course follows Nick Rozanski’s and Eoin Woods’ book “Software Systems Architecture: Working With Stakeholders Using Viewpoints and Perspectives” [18]. Based on this book, students conduct, e.g., a stakeholder analysis, and create architectural documentation covering at least a context view, development view, architectural patterns, and an evolution perspective. Furthermore, the course covers selected additional topics in software architecture. In the past years, we have included material on architectural metrics [5], technical debt [12], the use of design sketches for communication [14], and software product lines [1]. For each of these topics, students apply theories presented to their systems under study.
The course also includes guest lectures from software architects working in industry. These lectures typically cover the role of the architect in a complex organization. Students usually do not directly apply these lessons from industry to the systems they study; instead, the guest lectures serve to illustrate how the topics covered are relevant outside the scope of open source systems as well.
In the following, we present the most important aspects of our methodology, and relate them back to the five principles P1–P5 formulated in the introduction.
Group formation and project selection (P1, P2). In the first week of the course, students themselves form groups of 4. We recommend students to form diverse groups so that they can benefit most from their varying cultural and technical backgrounds.
In addition, students must choose a project that serves as case study throughout the entire course. Students select a medium to large open source project hosted on GitHub that is still active (developers are working on it every day) and open to external contributions. Such projects typically will have several pull requests from external contributors merged per day. Furthermore, students should be confident that they are able to make a contribution to this project. Although these rules are not strict constraints, each group needs to submit their project for approval. This proposal contains the name of the project, link to the repository, and a paragraph on why they chose this project. Two different teams are not allowed to work on the same project.
To help students find a project, we provide a list of the most popular GitHub projects, extracted using GHTorrent [10]. We also suggest systems that we personally believe are interesting (in the 2016 edition, for example, we suggested Ruby on Rails, Tensorflow, or SonicPi).
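As an illustration only, a similar candidate list could be assembled with a few lines of Python against the public GitHub search API; this is a hypothetical sketch, not the GHTorrent extraction we actually use, and the star threshold and date filter are arbitrary.

```python
# Illustrative sketch: fetch popular, recently active GitHub repositories as
# candidate systems for student teams (standard library only).
import json
import urllib.parse
import urllib.request

def popular_candidate_projects(min_stars=10000, per_page=20):
    query = urllib.parse.urlencode({
        "q": f"stars:>{min_stars} pushed:>2016-01-01",  # popular and still active
        "sort": "stars",
        "order": "desc",
        "per_page": per_page,
    })
    url = f"https://api.github.com/search/repositories?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return [(item["full_name"], item["stargazers_count"], item["html_url"])
            for item in data["items"]]

if __name__ == "__main__":
    for name, stars, link in popular_candidate_projects():
        print(f"{stars:>7}  {name}  {link}")
```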
The use of Git and GitHub (P2, P3). Students are required to use Git and GitHub from the start of the course. Even the course contents, schedule, and assignments are made available in a GitHub repository. We set up a dedicated GitHub organization for the course, hosting all repositories used in the course.
In the first lecture, we introduce students to Git. We add all students as collaborators to the relevant GitHub repositories. The student repositories are only available to members of the organization, and not to the outside world. Students can choose themselves to make certain results publicly available.
After a team chooses their project, we create two repositories: one empty repository to work on the assignments and the book chapter, and a public fork of their chosen project. GitHub’s issues, pull requests, code review comments, milestones, and releases are used for inter and intra-team communication, and for distributing finalized assignments.
We highlight the fact that students have access to the repositories of all other teams. We encourage students to take a look at what other students are doing as well as what other teams have done in the past (which was already published in an online book). Students are allowed to “reuse an idea” that belongs to other groups as long as they explicitly mention it in their assignments.
Use of Slack for Communication (P2, P3). We introduce Slack as a tool for students to communicate among themselves and with the teachers. Within Slack, we used different channels for course-wide discussions, for announcements of important messages from the teachers, and for other technical topics, such as Git.
We also created one channel per group, named after the project studied by that group. We encourage groups to use their channels for all internal group communication, as this enables the teachers
to understand their way of working and effort. In case of questions, students can involve teachers (or other students) in their group channel simply by mentioning them in their channel. Again, all student channels are open to all students, allowing students to learn from and help each other.
**Student presentations (P2, P5).** As an exercise in communicating architectural decisions and trade-offs, students present their group’s progress to the full class at two occasions. The first presentation is a project “pitch” around the middle of the course. In the pitch, students should present their project to other students as well as their current findings. Each group has 3 minutes of presentation plus 2 minutes of Q&A from both other students and teachers.
The second set of presentations happens at the end of the course. Each group has 15 minutes of presentation plus 10 minutes of questions. In this presentation, groups show all their findings as well as the system contributions they have made throughout the course. All presentations together usually take the entire day (from 9am to 5pm) and may be divided (depending on the number of teams) in two (parallel) sessions.
### 2.3 Assignments
Students face four main assignments which are part of our method as well: 1) applying theory to practice, 2) contributing to the system, 3) integrating their architectural views and perspectives into a single chapter and 4) providing feedback to other students.
**Applying theory to practice (P5).** After each theoretical lecture, students apply what they learned to their system. As an example, one of the first assignments is to conduct a stakeholder analysis: understanding who has an interest in the project, what their interest is, and which possibly conflicting needs exist.
To do this, the students follow the approach to identify and engage stakeholders from Rozanski and Woods [18]. They distinguish various stakeholder classes, and recommend looking for stakeholders who are most affected by architectural decisions, such as those who have to use the system, operate or manage it, develop or test it, or pay for it.
To find the stakeholders and their architectural concerns, the student teams analyze any information they can find on the web about their project. Besides documentation and mailing lists, this includes an analysis of recent issues and pull requests as posted on GitHub, in order to see what the most pressing concerns of the project are and which stakeholders play a role in these discussions and the decision to integrate a change [11].
Students deliver the results of their analysis in a readable text file via GitHub, and receive complete feedback (including the grade) from the teachers within one or two weeks after the submission. Thus, students can improve for their next deliverable.
**Contributing to the system (P1, P4).** As a parallel task, students contribute to the system they are analyzing. This helps them in understanding the implications of key architectural decisions, and in establishing contact with the architects and integrators of the system they study.
We do not prescribe the number of contributions each team should do. To help students, we teach them how open source development and pull requests work at GitHub. We also suggest to start with something small, such as fixing simple issues or contributing to the documentation. Some open source projects provide explicit open issues that are suitable for newcomers, which also provide a good starting point.
**Writing a Book Chapter (P3, P5).** Inspired by the book series covering the “Architecture of Open Source Applications” [6, 7], the main goal of each team is to compose a chapter describing the architecture of the system they study. At the end of the course, these chapters are bundled into a book [23].
Each group should integrate views, perspectives, assignment results, and their experience in contributing to the system into a single chapter. Each chapter should be around 5000 words, and students are encouraged to include as many diagrams or images as needed. We do not require a prescribed chapter structure, but many teams follow the views and perspectives created in the earlier assignments.
The target audience for the chapter are system stakeholders and fellow students. Students can opt to make their chapter public, implying that their chapter should appeal to a wider audience. To control quality, we make it clear to students that chapters will be published only if the group’s chapter grade is higher than 7 (out of 10).
To facilitate integrating, sharing, versioning, and reviewing the various chapters and the underlying drafts, all students (and teachers) use Markdown for any document they create in the course. This year, we created the final book using Gitbook⁴, which offers an easy way to generate an online (HTML, EPUB, PDF) book from a GitHub repository containing Markdown sources.
**Providing feedback to other students (P2, P3).** The course embraces collaboration. As one of their assignments, students review a chapter from another group. We take this opportunity to teach students how scientific papers are evaluated and simulate the process with them. Using the conference management system EasyChair⁵, students identify their conflicts, bid to chapters they are comfortable to review, and submit a full review of the chapter. In the end, each group needs to evaluate their received feedback and improve their chapter accordingly.
### 2.4 Grading
A team grade is based on the following items:
1. Series of intermediate deliverables corresponding to dedicated assignments on, e.g., stakeholder analysis, code metrics, particular views, or design sketches. Each partial result is evaluated using rubrics reflecting content, depth, writing, and originality. We provide the 2016 rubrics in our online appendix [24].
2. The final report (book chapter) of each team, providing the relevant architectural documentation created by the team, is graded according to the same rubrics.
3. Team presentations that were evaluated by both the teachers and students in the audience (by means of an online questionnaire).
The individual grades are additionally based on:
1. The personal reviews written for another group. Students whose reviews are more critical as well as constructive receive more points.
2. Active participation in the lectures. Students consistently asking good questions or initiating useful discussions during the class receive extra points. In addition, we allow students to recommend other students that have done a great job during the lectures.
---
⁴https://www.gitbook.com/
⁵https://easychair.org/
3. Their workload. Students are required to keep a weekly journal of their activities. In this journal, we expect to see which activities each student performed as well as the amount of time each one required. The effort of each student in a team should amount to the prescribed 140 hours allocated for this course.
The latter point implies that all students are required to make a similar time investment in this course, regardless of their background. This reflects the idea that an architect never stops learning.
3. RESULTS OF THE 2016 COURSE
We performed a survey with the 104 students of the 2016 edition. As the survey was optional, we obtained 48 answers (response rate of 46%). Students had to answer questions on a Likert scale from 1 (no/I don’t agree at all) to 5 (yes/I completely agree). Thus, whenever we mention that students agree with or believe a statement, it means that more than half of them answered 4 or 5 to that question. Due to space constraints, the protocol of the survey as well as full answers and charts can be found in our online appendix [24].
3.1 Participants’ profile
We have a diverse group of students when it comes to their experience. There are both students with and without industry and programming experience.
In numbers, 10% of the respondents have less than one year of programming experience, while 45% have 5 or more years of experience. 23% do not have experience in industry, 25% already have more than 3 years of experience.
The vast majority (81%) has never contributed to open source before. Half of the students claim to have good knowledge of Git, while the other half believes to know the basics.
3.2 P1: Embrace open source
Selecting an appropriate project can be difficult. Nevertheless, more than half of the students were happy about their choice of open source project. Many said that their projects were “relevant”, “fun”, “interesting”, and “with a welcome community”.
This might explain the fact that all students were able to submit at least one pull request to their projects, and two thirds of the participants performed 1 to 3 pull requests.
Some students also believe that projects were happy with the achieved results (45.8%) and that the project was open to external contributors (58.3%).
3.3 P2: Embrace collaboration
The majority of students affirm they learned much from their own team mates. They provide varying reasons, such as the different levels of experience among team members which fostered discussions, or the learning of technical skills, such as Git and Java, from their peers. We quote a student:
“Everyone has something to teach, I was very happy to listen to the constructive criticism of my team mates.”
On the other hand, 3 participants did not learn enough (they chose 2 on the scale). One student indicates that there was some friction among the team members, and another complained about a team mate that did not work enough.
Concerning collaboration, Slack improves communication among team members, according to 77% of students. In addition, most students (79%) state that Slack helps them to get answers to their questions quickly, from either teachers or fellow students.
The usage of Git and GitHub (and its collaborative features such as issues and pull requests) also helps to improve student productivity. Some advantages mentioned by students are that Git and GitHub make it easier for other students to review their work. On the other hand, a few students indicate that more visual (WYSIWYG) document editors, such as Google Documents, can be better for document collaboration as opposed to the combination of Git and Markdown.
3.4 P3: Embrace open learning
Most students consider the chapter reviews they received from other students as useful, although 25% thought they were not. Students with positive feedback confirm that reviews helped them to identify flaws as well as to make the document more intuitive and interesting. Some other students indicate that reviews were superficial while others believe that the reviewer did not read the entire text.
Interestingly, most students find it useful to write reviews for other groups — and no student disagreed with it at all:
“I liked reviewing them, as it gave me the opportunity to see what other groups were doing, and giving me the opportunity to help them out.”
Watching presentations about other architectures is also considered useful by a large group of students. Negative points are frequently related to the strict and tight timing (students had 3 minutes to present their work) as well as with the lack of preparation and presentation skills of some groups (to which students had to listen).
Publishing a book at the end of the course was well received by the students: 70% of them were very proud of their chapter. In their opinion, it serves as an excellent motivational factor and inspired them to work better:
“It’s a must have experience and you learn a lot and it brings responsibility as your work is open and public.”
3.5 P4: Interact with the architects
Most students believe that contributing to the project helped them to better understand the system they were analyzing. Only 8 students disagreed.
As teachers, we suggest that students submit a pull request before trying to talk to or interview the architects. This mostly worked well: 40% of the participants believe this was a good strategy for helping them to get in contact with the senior architects of the project.
3.6 P5: Combine breadth and depth
In most lectures, students affirm that they learned much from applying the theoretical concepts to their projects. In Figure 1, we present the results for each theoretical lecture we gave in this edition. The score average is 4 (out of 5), with the exception of the variability topic, for which the median is 3. We highlight a quote from a student:
“I learned a lot about how the open source team approach different problems (technical debt analysis) and how the project interacts with it’s environment and all involved entities around (stakeholder analysis and context view).”
On the other hand, some students complained about difficulties in putting the theory into practice. As an example, a student thinks that some concepts are not generalizable and thus hard to apply to their project. Some students also express that they spent more time on the course than expected.
3.7 Other findings
As an example of what we can improve, some students believe the course could be more technical. Indeed, our textbook treats software architecture in a highly conceptual way:
“It was hoping to focus more on architectural aspects of the software than these general exercises that just describe the application in a very broad sense.”
4. DISCUSSION
As demonstrated by the above, the current mix of teaching tools and techniques works well for our course on software architecture. As one of our former teaching assistants says: “it is their chance to put hands on real applications that are not greenfield, and learn how real world works”. We believe these ideas could be extended or applied in other contexts in a number of ways:
Lectures as a parameter of the course. The theoretical topics that we present to students during lectures can be replaced by other architectural topics of interest, such as more emphasis on design patterns or system scalability.
Mix with industry systems. Although we only made use of open source systems up to now, the course may also use projects from companies (that are most likely to be closed source). This partnership might be good for both students and companies: students can get to know more about the company, and the company can get a complete analysis of its software. On the other hand, teachers and universities have to deal with the arrangements, such as confidentiality agreements.
Collaborative book writing and publishing. This feature is clearly not tied to a software architecture course, and can thus easily be applied to any other course. As we presented before, this was one of the points about which students were happy and felt motivated. Gitbook also facilitated the generation of the final book in different versions (PDF, EPUB). Therefore, we suggest that other educators experiment with collaborative book writing and publishing in their courses.
Contributions to open source. Our students were able to meet real software architects and learn from them. This relationship was initiated by these contributions and the consequent discussions (common in GitHub’s pull requests) with the architects. Thus, this strategy can be used in other related courses where students could benefit from real and more experienced developers, such as software testing courses.
5. RELATED WORK
Lago and Van Vliet [13] distinguish two approaches to teaching architecture, one focusing on “programming in the large”, and the other emphasizing the communication aspects of software architecture to a variety of stakeholders. Our course proposes a way to blend these two approaches in a single course.
De Boer et al. propose a community of learners approach to teaching software architecture [9]. Students collaborate on the design of a single complex system, and learn from each other. Through its openness, our course also creates a community of learners, yet student teams work on different systems.
Pedroni et al. [16] discuss leveraging open source projects to expose students to real life systems. As in our course, they require students to make contributions to open source projects. The course focuses on programming skills as well as on the need to get socially involved with other developers. The authors recommend providing clear instructions on how to contribute – which we indeed cover in the lectures of our course, and which these days are often also provided in contribution guidelines of projects on, e.g., GitHub. Smith et al. [19] discuss challenges and guidelines for selecting open source projects for use in software engineering education. Marmorstein [15] discusses experiences in letting students contribute to open source systems in their class project.
GitHub plays a central role in our course: the teams use it to collaborate, to write their book chapter, and to contribute to open source projects. This emerging role of the GitHub platform as a general collaborative tool in education is further discussed by Zagalsky et al. [25].
Our student based book series [23] was directly inspired by the Architecture of Open Source Applications [6, 7] initiated by Brown and Wilson. Based on these books, Robillard and Medvidovic provide an analysis of the dissemination processes in open source architectures [17]. A description of the architectural beauty of (open source) systems was provided by Spinellis and Gousios [20].
6. CONCLUSIONS
Teaching software architecture should be practical and challenging at the same time. Towards this goal, we propose a course structure that follows five main principles: embrace open source, embrace collaboration, embrace open learning, interact with the architects, and combine breadth and depth.
We have applied these ideas in four editions of our Software Architecture course, and student feedback has always been positive. In this paper, we report the results of the evaluation with our students in the most recent (2016) edition.
Our experience suggests that (1) open source systems can be successfully used to let students gain experience with key software architecture concepts; (2) students are capable of making meaningful code contributions to the open source projects; (3) software architects from open source systems are willing to interact with students about their contributions; (4) working together on a joint book helps teams to look beyond their own work, and study the architectural descriptions produced by the other teams.
Thanks to the open nature, results of the course (such as the online book, and contributions to the open source systems made by the students) are available in our online appendix [24]. Based on a blog post covering the first edition of CSAC [22], similar courses have emerged at various universities in Canada, Israel, France, and US. Moreover, we anticipate that our collaborative approach makes sense not only for software architecture courses, but to any other topic in which practice and theory should walk together.
Acknowledgments. We would like to thank Felienne Hermans and Nicolas Dintzner (TU Delft) for repeatedly offering guest lectures in this course, the various guest speakers from industry and academia, all students participating in the courses, and the open source developers who welcomed our students’ contributions.
7. REFERENCES
Title: MRML: A Communication Protocol for Content-Based Multi Media Retrieval
Status: Proposal
Source: Computer science department of the University of Geneva, Monash University (Melbourne, Australia), EPFL (Lausanne, Switzerland), DKFZ Heidelberg (Germany), CWI Amsterdam
Authors: Wolfgang Müller, Henning Müller, Stéphane Marchand-Maillet, Thierry Pun, David McG. Squire, Zoran Pecenovic, Christoph Giess, Arjen P. de Vries
1 Abstract
In this paper we propose the Multimedia Retrieval Markup Language (MRML), a communication protocol based on an XML DTD. MRML was designed to allow standardized communication between Content-Based Image Retrieval Systems (CBIRSs) and their clients.
MRML separates the query shipping task both from the content representation and the actual query formulation. Because of its flexibility we suggest the integration of MRML into the XM software. In our opinion this would facilitate experiments on different query paradigms within the MPEG-7 XM.
The following sections will motivate the need for MRML, and they will emphasize the differences in scope and realization between the content description MPEG-7 and the query markup language MRML. We consider that it's these differences that make MRML a useful tool for the XM.
We have published MRML parsing code in Perl, Java and C++. We are currently designing an automatic CBIRS benchmark based on MRML.
2 Introduction
Almost every content-based image retrieval system (CBIRS) is a hard-wired connection between an interface and the functional parts of a program. Some programs provide easy-to-use web interfaces [1], while others need to be installed locally [2] and may be specific to particular operating systems. The reuse of components in CBIR, e.g. user interfaces, is thus very sparse. This is not only a time-consuming problem, since everything needs to be developed anew for each system, but it makes the sharing of user data and the comparison of system performances difficult.
In order to address these problems, Y.-C. Chang et al. [3] proposed a query taxonomy for multimedia databases. They proposed an initial formulation of the requirements for a system enabling communication between multimedia databases and clients. However, this approach is not yet translated into an extensible protocol.
In this paper we present the Multimedia Retrieval Markup Language (MRML): an XML-based markup language for multimedia queries. MRML was designed to facilitate a bottom-up development approach, which separates the communication problem from the search for the best query language for multimedia databases. In other words, not only is it designed to fulfill the short-term needs of the image database research community, but it is also designed to cater for its long-term needs.
The development of standard query languages, together with standard methods for transmitting queries and data, can improve the interoperability of CBIRs and thus increase the use and usefulness of multimedia databases. SQL and ODBC are examples of such developments for relational databases. The aim of MRML, however, is more similar to that of the DICOM protocol [4], which promoted the interoperability of medical imaging systems from different vendors. In summary, we address the urgent need for common tools which will facilitate the development and evaluation of multimedia database systems. By this means, we aim to facilitate the development of common benchmarks for CBIRS performance, similar to those used for textual information retrieval [5].
The query-by-example (QBE) paradigm with relevance feedback (including browsing) is the search paradigm employed by most current CBIRs. We therefore provide an extensible QBE facility within MRML. Further, some MRML-compliant tools have been developed and made freely available under the GNU Public License [21,24]. These are described briefly in Section 3, and include a CBIR search engine (Viper), which acts as a server, and an interface (SnakeCharmer), which acts as a client. Scripts (mostly Perl scripts) have also been made available, which provide a basis for the creation of standard CBIRS benchmarks. An overview of various evaluation methods is given in [6], where the use of freely-available annotated image collections (such as [7]) as test datasets is advocated.
In order to be useful for research, MRML needs to be a "living standard": research groups will need to be able to test and use extensions without having to ask a committee for approval. We therefore employ a development model which permits phases of independent growth with subsequent code merging. In Section 4, we present the main features of MRML and, in Section 5, we show an example of how MRML can be extended to suit particular needs while staying coherent with the common standard.
We found that MRML’s properties lend themselves to rapid prototyping of multimedia retrieval systems. This allows experimenting with different query formulation approaches, and the mixing of different query languages. We believe that all these properties can make MRML a useful tool for the MPEG-7 XM, notably in experimenting on different query and interaction types using MPEG-7 and the relation between MPEG-7 and different indexing and querying methods.
3 Viper, CIRCUS and SnakeCharmer
MRML was initially designed to facilitate cooperation between research groups. The main programs for our testbed originate from the Ecole Polytechnique Fédérale de Lausanne (CIRCUS and SnakeCharmer) and from the University of Geneva (Viper). In this testbed, we use MRML to link a single interface (SnakeCharmer) to two different CBIRS (CIRCUS and Viper).
Viper\(^1\) is an image search engine based on techniques commonly used in text retrieval and thus offers efficient access to a very large number of possible features (more than 80,000 simple colour and texture features, both local and global). Each image contains only a subset of these features. Access to images containing given features is provided by an inverted file, a standard access technique in text retrieval. The emphasis in Viper is on adapting the system response according to interaction with a user - positive and negative relevance feedback is accepted over several steps. Detailed descriptions of Viper may be found in [8, 9].
CIRCUS\(^2\) is a server framework supporting multiple image retrieval methods and algorithms. Currently four methods are implemented. The first applies an adaptation of Latent Semantic Indexing [10] to image features describing local and global colour and texture, as well as global layout and optional keywords. The second is a texture/layout-specific method based on wavelet maxima moments. It extracts a set of contours from the image at various levels of detail, invariant to scale, translation, and partially to
\(^1\) http://viper.unige.ch/
\(^2\) http://leavwww.epfl.ch/CIRCUS
illumination changes. The third approach is texture-specific: it describes textures by computing the parameters of a Hidden Markov Model governing the coefficients of a wavelet decomposition of a textured image. The similarity is evaluated using the Kullback-Leibler distance between two distributions. The last method is a fast, wavelet packet-based, approximation of the Principal Component Analysis, based on the features used by the other methods. It is the most scalable and fastest of the implemented methods. SnakeCharmer (figure 1) is an MRML-compliant client application. It is written in Java for portability and offers query by multiple positive and negative examples, query history, multiple collection and algorithm selection, a scatter plot of the results according to various aspects of similarity and a basket for user-selected images.

**FIG 1:** The SnakeCharmer JAVA interface for Viper and CIRCUS.
## 4 Multimedia Retrieval Markup Language
MRML\(^3\) is formally specified in [11]. It provides a framework that separates the query formulation from the actual query shipping. It is designed to markup multi-paradigm queries for multimedia databases. MRML enables the separation of interface and query engine and thus eases their independent development.
MRML can be embedded into an existing system with little effort. First, it is XML-based, meaning that standard parsers can be used to process the communication messages. Further, the code for an example MRML-compliant CBIR system is freely-available and provides the basic implementation of both ends of an MRML-based communication toolkit. MRML is currently in a testing phase at several universities and further applications based on this protocol such as benchmark systems and meta-query engines are under development. MRML is designed to allow extension by independent groups. By this means, it provides a research platform for extensions which later may become a part of common MRML.
### 4.1 Design goals of MRML
It is important for the following sections to keep in mind the priorities which we took into account during the design of MRML.
---
\(^3\) [http://www.mrml.net/](http://www.mrml.net/)
**Interoperability:** interoperability is an obvious short-term need of the CBIRS community. The fact that the interface between CBIRS client and server is not specified hampers research. Topics that could benefit from interoperability include:
- Meta-query engines query several "normal" query engines and assemble the results [12]. Constructing a meta-query engine would require defining a protocol abstraction layer corresponding to each of the different query engines embedded in the system. Using a common protocol would save a substantial amount of work.
- Human-computer-interaction aims at comparing the impact of different user interfaces on the performance of identical query engines, or test several engines with the identical interfaces. In this context, by ensuring the compatibility between engines and interfaces, MRML would ease this type of evaluation.
- Evaluation of query engines: Thanks to MRML, one can design a benchmark package that connects to a server, sends a set of queries and evaluates the results.
**Extensibility without administration overhead:** it was our goal to provide a communication protocol which can be extended without having to ask a standardization body for permission. MRML enables independent development of extensions. As described below (Section 5.3), we invite MRML users to render their extensions accessible at [22]. Later, stable extensions can be added to new common versions of MRML.
**Common log file format:** The whole area of CBIRS is craving for ground truth or other user data. MRML provides a common, human readable, easy to analyse format for logging communication between CBIRS client and server. MRML contains a maximum of data which might be of interest for computer learning purposes. If needed, extensions of MRML can be designed in order to send additional data.
**Simplicity of implementation:** everything was designed so as to minimize the implementation overhead incurred when using MRML, while keeping a maximum of flexibility. MRML only uses a subset of the features of XML in order to maximise the number of tools that can use MRML.
### 4.2 Features of MRML
MRML-based communications have the structure of a remote procedure call: the client connects to the server, sends a request, and stays connected to the server until the server breaks the connection. The server shuts down the connection after sending the MRML message which answers the request. This connectionless protocol has the advantage of easing the implementation of the server. To limit the performance loss caused by frequently reconnecting, it is possible to send several requests as part of a single MRML message. The extension of MRML to a protocol permitting the negotiation of a permanent connection is also planned.
MRML, in its current specification (and implementation) state, supports the following features:
- request of a capability description from the server,
- selection of a data collection classified by query paradigm; it is possible to request collections which can be queried in a certain manner,
- selection and configuration of a query processor, also classified by query paradigm; MRML also permits the configuration of meta-queries during run time,
- formulation of QBE queries,
- transmission of user interaction data.
The final feature reflects our strong belief that affective computing [13] will soon play a role in the field of content-based multimedia retrieval. MRML already supports this by allowing the logging of some user interaction data. In particular, this is the case for the history-forward and history-backward functionalities of the SnakeCharmer interface.
**Why XML and not CORBA?** There are important reasons for using XML rather than a communication framework such as CORBA as a basis for the implementation of MRML. The first is that when using XML no large communication framework is necessary, as it is for CORBA. Secondly, MRML offers a common human-readable format for log files. Having a simple common format for user data will make it
easier for research groups to share this type of data. Together with common free image collections, MRML-compliant systems will form a powerful tool for collecting and sharing CBIR user interaction data.
Another reason for the use of XML as a basis for MRML is the large number of free XML tools available such as parsers and tools to evaluate files in XML format (XML Query Language). XML is about to become the main description language for all kinds of meta data thus ensuring the long-term support of its specifications.
4.3 Graceful degradation: independent development on a common base
Graceful degradation is the key to successful independent extension of MRML. The basic principles can be summarised as follows:
- servers and clients which do not recognize an XML element or attribute encountered in an MRML text should completely ignore its contents,
- extensions should be designed such that all the standard information remains available to the generic MRML user (see examples in Section 6).
These principles provide guidelines for independent extensions of MRML.
To avoid conflicts between differing extensions of MRML, we plan to maintain or promote a central database for the registration and documentation of MRML extensions. This would also facilitate the "translation" between user logs which contain extended MRML.
4.4 Logging onto a CBIR server
An MRML server listens on a port for MRML messages on a given TCP socket. When connecting, the client requests the basic properties of the server, and waits for an answer. Skipping standard XML headers, the MRML code looks like this:
```xml
<mrml>
<get-server-properties />
</mrml>
```
The server then informs the client of its capabilities. This message is empty in the current version of MRML, but it allows for the extension of the protocol:
```xml
<mrml>
<server-properties />
</mrml>
```
The goal of this tag is to provide a stub for negotiations that influence the whole communication, such as the opening of a permanent, possibly encrypted connection.
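For illustration, a minimal client-side sketch of this exchange could look as follows. This is not part of the MRML distribution; host and port are placeholders, and the sketch relies on the behaviour described above that the server closes the connection after sending its answer.

```python
# Minimal MRML handshake sketch: send <get-server-properties/> over TCP and
# read the server's reply until the connection is closed.
import socket

REQUEST = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    "<mrml>\n"
    "  <get-server-properties />\n"
    "</mrml>\n"
)

def ask_server_properties(host="localhost", port=1234):  # placeholder host/port
    with socket.create_connection((host, port)) as conn:
        conn.sendall(REQUEST.encode("utf-8"))
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:          # server closes the connection after answering
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8")

if __name__ == "__main__":
    print(ask_server_properties())
```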
Further negotiation between client and server may depend on the user, so before further negotiation we have to open a session for the user:
```xml
<mrml>
<open-session
user-name="A. User" session-name="a session" />
</mrml>
```
which will be answered by an acknowledgement signal containing the ID of the session just opened. We regard the concept of sessions as very important, as it allows multi-user servers and across-session learning. Of course, it is possible to close and rename sessions.
Now one can request a list of collections, which are available on the server:
```xml
<mrml session-id="s-33">
<get-collections />
</mrml>
```
The answer will be a list of collections, with a description of the ways the collection has been indexed, encapsulated in a `query-paradigm-list` tag.
Similarly, the client can request a list of algorithms (i.e. query methods), which can be used on the server. Each of the algorithm tags returned also contains a `query-paradigm-list` describing the way the algorithm can interact with the user, as well as which indexing methods are needed for employing the algorithm.
The user is now able to choose on his client an algorithm/collection combination which suits his needs, and in which the query-paradigm-lists of collection and algorithm match. The matching of query-paradigm-lists is described in [11].
### 4.5 Interface configuration
The client can then request property sheet descriptions from the server. Different algorithms will have different relevant parameters which should be user-configurable (e.g. feature sets, speed vs. quality). Viper, for example, offers several weighting functions [15] and a variety of methods for, and levels of, pruning. All these parameters are irrelevant for CIRCUS. Thanks to MRML property sheets, the interface can adapt itself to these specific parameters. At the same time, MRML specifies the way the interface will turn these data into XML to send them back to the server. The interested reader is referred to [11] for details.
### 4.6 Query Formulation
The query step is dependent on the query paradigms offered by the interface and the search engine. MRML currently includes only QBE, but it has been designed to be extensible to other paradigms.
A basic QBE query consists of a list of images and the corresponding relevance levels assigned to them by the user. In the following example, the user has marked two images, the image 1.jpg positive (`user-relevance="1"`) and the image 2.jpg negative (`user-relevance="-1"`). All query images are referred to by their URLs.
```xml
<mrml session-id="1" transaction-id="44">
<query-step session-id="1"
result-size="30"
algorithm-id="algorithm-default">
<user-relevance-list>
<user-relevance-element
image-location="http://viper.unige.ch/1.jpg"
user-relevance="1"/>
<user-relevance-element
image-location="http://viper.unige.ch/2.jpg"
user-relevance="-1"/>
</user-relevance-list>
</query-step>
</mrml>
```
The server will then return the retrieval result as a list of images, again represented by their URLs.
Queries can be grouped into transactions. This allows the formulation and logging of complex queries. This may be applied in systems which process a single query using a variety of algorithms, such as the split-screen version of TrackingViper [16] or the system described by Lee et al. [17]. It is important in these cases to preserve in the logs the knowledge that two queries are logically related to one another.
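As an illustration, a client could assemble the QBE message shown above programmatically. The following sketch uses only the Python standard library; element and attribute names are taken from the examples in this section.

```python
# Sketch: build a QBE query-step message with relevance feedback.
import xml.etree.ElementTree as ET

def build_qbe_query(session_id, transaction_id, relevances, result_size=30,
                    algorithm_id="algorithm-default"):
    """relevances: list of (image_url, relevance) pairs, relevance in {1, -1}."""
    mrml = ET.Element("mrml", {"session-id": session_id,
                               "transaction-id": transaction_id})
    step = ET.SubElement(mrml, "query-step", {
        "session-id": session_id,
        "result-size": str(result_size),
        "algorithm-id": algorithm_id,
    })
    rel_list = ET.SubElement(step, "user-relevance-list")
    for url, relevance in relevances:
        ET.SubElement(rel_list, "user-relevance-element", {
            "image-location": url,
            "user-relevance": str(relevance),
        })
    return ET.tostring(mrml, encoding="unicode")

print(build_qbe_query("1", "44", [("http://viper.unige.ch/1.jpg", 1),
                                  ("http://viper.unige.ch/2.jpg", -1)]))
```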
5 Extending MRML
This section gives an elaborate example of how to extend MRML. We then address the relation of MRML to binary data. At the end of this section we propose a development model that uses a central documentation server to foster the common use of MRML extensions.
5.1 Query by Structured Annotation DS: an Example for Extending MRML
In the following we demonstrate the ease with which MRML can be intermixed with MPEG-7 content descriptions. We describe a demonstration of the Structured Annotation DS (in the following abbreviated as SADS) presented in M5738. Within this demonstration we want to enable the user to formulate queries in the SADS, and then send them to the appropriate query processor. In order to avoid conflicts with existing MRML we encapsulate the SADS in a special tag: mpeg-7-sads. We instructed the SADS query processor to expect SADS as a direct descendant of query-step. This leads to the following solution:
```xml
<mrml session-id="1" transaction-id="44">
<query-step session-id="1"
resultsize="30"
algorithm-id="sads-processor">
<!-- here follows an MRML extension -->
<mpeg-7-sads>
<theme>
<themeObject>
<head>polar bear</head>
<modifier>young</modifier>
</themeObject>
<setting>leafless bushes</setting>
</theme>
<proposition>
<Agent>
<head>polar bear</head>
<modifier>young</modifier>
</Agent>
<setting>leafless bushes</setting>
</proposition>
<proposition>
<Object>
<head>polar bear</head>
<modifier>young</modifier>
</Object>
<setting>leafless bushes</setting>
</proposition>
</mpeg-7-sads>
<!-- here could be some other MRML -->
</query-step>
</mrml>
```
We regard this as a good initial solution for our demo. However, in order to maximize the use of the new tag, we are planning to move it lower in the hierarchy, making it possible to attach relevance levels to the SADS and to form a weighted combination of multiple SADS. This also makes it possible to weight the SADS queries in relation to (possibly non-MPEG-7) media items that were given as examples:
```xml
<mrml session-id="1" transaction-id="44">
<query-step session-id="1"
resultsize="30"
algorithm-id="sads-processor">
<user-relevance-list>
<!--NOTE: Here we have a simple QBE element containing NON-MPEG-Content -->
<user-relevance-element
image-location="http://viper.unige.ch/1.jpg"
user-relevance="1"/>
<!--NOTE: Here we have the same SADS-tag as before, with a relevance attached.
However, we can use it in more flexible ways and weight it against
the user-relevance-element -->
<mpeg-7-sads user-relevance="0.5">
<theme>
<themeObject>
<head>polar bear</head>
<modifier>young</modifier>
</themeObject>
<setting>leafless bushes</setting>
</theme>
<proposition>
<Agent>
<head>polar bear</head>
<modifier>young</modifier>
</Agent>
<setting>leafless bushes</setting>
</proposition>
<proposition>
<Object>
<head>polar bear</head>
<modifier>young</modifier>
</Object>
<setting>leafless bushes</setting>
</proposition>
</mpeg-7-sads>
</user-relevance-list>
</query-step>
</mrml>
```
Please note that a query processor which knows only standard MRML will simply ignore the document subtree contained in the mpeg-7-sads tag. Obviously, this mechanism is very flexible and easily adaptable. Extensions keep existing frameworks intact.
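The following minimal sketch (not part of the published MRML tools) illustrates this graceful degradation rule: a processor that only knows standard MRML walks the message tree and silently skips any subtree rooted at an element it does not recognize, such as mpeg-7-sads.

```python
# Graceful degradation sketch: handle known MRML elements, ignore extensions.
import xml.etree.ElementTree as ET

KNOWN = {"mrml", "query-step", "user-relevance-list", "user-relevance-element"}

def handle(elem):
    if elem.tag == "user-relevance-element":
        print("feedback:", elem.get("image-location"), elem.get("user-relevance"))

def walk(elem):
    if elem.tag not in KNOWN:
        return                      # unknown extension: ignore the whole subtree
    handle(elem)
    for child in elem:
        walk(child)

message = """<mrml session-id="1" transaction-id="44">
  <query-step session-id="1" resultsize="30" algorithm-id="sads-processor">
    <user-relevance-list>
      <user-relevance-element image-location="http://viper.unige.ch/1.jpg" user-relevance="1"/>
      <mpeg-7-sads user-relevance="0.5"><theme><setting>leafless bushes</setting></theme></mpeg-7-sads>
    </user-relevance-list>
  </query-step>
</mrml>"""

walk(ET.fromstring(message))        # prints only the standard relevance element
```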
5.2 MRML and Binary Data
MRML's preferred mechanism for transferring binary data is to send the URL where the data can be found. Binary data is then retrieved using the URL. As it is a primary goal of MRML to enable the sharing of logging data, we suggest transferring big chunks of data as follows.
Binary data that stays constant over several sessions (i.e. images and other media items contained in the queried collection) should be transferred using their URL, as described above. This keeps communication log files relatively small, yet data is accessible for everyone.
Binary data that changes during the query process (e.g. a file containing an example image for a QBE query that is not accessible by the web) should be transferred using two attributes. One of the attributes should contain the base64-encoded binary data, the other one the corresponding MIME type. However,
in most cases, it is preferable to design proper extensions to MRML which provide the best accessibility and readability of the resulting logs.
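For illustration, the inline fallback could be realized as sketched below. Note that the attribute names used here are hypothetical, since MRML only prescribes that one attribute carries the base64 payload and another the MIME type, leaving the exact extension to the implementer.

```python
# Sketch: embed a query image without a public URL directly in the message.
import base64
import xml.etree.ElementTree as ET

def inline_image_element(image_bytes, mime_type):
    # "image-mime-type" and "image-data" are hypothetical attribute names.
    return ET.Element("user-relevance-element", {
        "user-relevance": "1",
        "image-mime-type": mime_type,
        "image-data": base64.b64encode(image_bytes).decode("ascii"),
    })

# Tiny fake payload so the sketch runs without an actual image file.
elem = inline_image_element(b"\xff\xd8\xff\xe0 truncated JPEG bytes", "image/jpeg")
print(ET.tostring(elem, encoding="unicode"))
```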
5.3 The MRML development model
As it has been stated many times throughout this article, MRML allows each developer to extend MRML to his needs. In particular, these extensions can coexist, and a notification of a central body is not necessary for making these extensions work. However, to maximize the usefulness of MRML the authors are presently setting up a database which contains documentation of extensions to MRML. It is also intended to provide a forum for groups which want to extend MRML into similar directions.
We propose to develop extensions to MRML in the following fashion. Search first the page http://www.mrml.net/extensions/ [23] for documentation of extensions which might already do what you want.
If there exist extensions for your needs,
- implement the existing extension
- double-check with the author of the existing extension that the documentation has been understood in the right way
- add your name and your affiliation to the list of people/groups who are using this extension. This list is kept on www.mrml.net [21].
If no extensions exist for your needs,
- implement the extension
- submit documentation for your extension along with your name and your affiliation to www.mrml.net.
The information contained on http://www.mrml.net/extensions/ will be useful both for analysing logs and for merging extensions, once an extension has proven more useful than others.
6 Further use of MRML
In this article, we have presented a stable, extensible and useful framework for use in CBIRS and other multimedia retrieval systems. In the following, we briefly describe tools that can easily be implemented using existing features of the MRML framework.
6.1 Meta query engines
We are currently conceiving a meta query engine which queries MRML compliant servers.
Meta query engines running under MRML will start a handshaking procedure, establishing for each of the attached servers the available collections and algorithms. The meta query can then assemble this information into a property sheet that can be presented to the user via a standard MRML interface.
After configuration the meta-query engine will pass arriving queries onto the attached servers, returning an assembled result. We plan to use methods similar to those described in [12].
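Purely as an illustration (the merging strategy of [12] is more elaborate), the assembly step could be as simple as interleaving and de-duplicating the ranked lists returned by the attached servers:

```python
# Sketch: merge ranked result lists from several MRML servers by interleaving.
from itertools import zip_longest

def merge_result_lists(result_lists, result_size=30):
    merged, seen = [], set()
    for rank_group in zip_longest(*result_lists):
        for url in rank_group:
            if url is not None and url not in seen:
                seen.add(url)
                merged.append(url)
            if len(merged) == result_size:
                return merged
    return merged

server_a = ["http://viper.unige.ch/1.jpg", "http://viper.unige.ch/3.jpg"]
server_b = ["http://viper.unige.ch/2.jpg", "http://viper.unige.ch/1.jpg"]
print(merge_result_lists([server_a, server_b]))
```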
6.2 Benchmarks
Only preliminary steps towards developing common benchmarks have been taken by the CBIR community - a comparison of evaluation techniques may be found in [6]. We are currently working on a more profound and flexible benchmarking system based on the results of this research.
7 Use of MRML for the MPEG-7 XM
The separation of query-shipping and query formulation/content-shipping that is enabled by MRML has increased our flexibility in trying new algorithms and query formulation methods. In other words, building an MRML-compliant system has provided us with a rapid prototyping environment for CBIRS query processors, as well as CBIRS query languages. The design patterns implied by MRML allowed us to use experimental and more stable query processors together in one application and compare them.
We believe that these properties would be of use for the MPEG-7 XM, which is by definition a software project in motion. Looking at the sources of the XM, we are very optimistic that adding MRML capabilities to the XM will cause an amount of work which is modest when compared to the benefits it could bring: our existing MRML code can be linked to a Perl interpreter. As a consequence, it is very simple to link this code to the XM in a preliminary way and to evaluate the benefits of linking XM with MRML handling code. A full, native C++ XM/MRML link could then be established gradually by soft migration.
Because of its flexibility and its extensible design, we also regard MRML as a useful tool for experimenting with parts of MPEG-7 which are at the limit of its scope: what is already an application, and what is still content description?
8 Conclusion
The development of MRML and the first MRML-compliant tools has established a common framework for the fast development of CBIR applications. To our knowledge, MRML is the first general communication protocol for CBIR actually implemented. The source code for the interface and the query engine is freely available. This should help developers of retrieval engines and developers of user interfaces to develop complete systems on the basis of existing components.
Tests have shown the stability of the protocol and our test components. Since MRML is free and extensible, the availability of more applications and tools supporting such a protocol will further facilitate the development of CBIR applications supporting a diversity of query paradigms.
More important, in our opinion, is the fact that the adoption of MRML will lead to the possibility of comparing different CBIR applications objectively. It will make it easy to develop common benchmarks for all MRML-compliant systems, similar to those which exist in the database and information retrieval communities.
Furthermore, the possibility of sharing MRML user logs will provide a useful tool for the sharing of user interaction data.
We believe that all these properties, combined with MRML’s ease of use will make MRML a useful tool for the further evolution of the MPEG-7 XM software.
Acknowledgements
This project is supported by the Swiss National Foundation for Scientific Research under grant number 2000-052426.97.
9 References
Fig. 2. Demonstration of property sheets in SnakeCharmer. The user has the choice to modify the default settings or not. If he decides to modify the default settings, widgets which enable him to do so pop up, which will be shown in figure 3.
Fig. 3. Demonstration of property sheets in SnakeCharmer. The user has the choice to modify the default settings or not. If he decides to modify the default settings, widgets which enable him to do so pop up. The image shows the state with all widgets shown, figure 2 shows the state, where the user has decided not to modify any default selections.
A UDDI Search Engine for SVG Federated Medical Imaging Web Services
Sabah Mohammed, Jinan Flaidhi and Marshal Hahn
Department of Computer Science, Lakehead University, 955 Oliver Road
Thunder Bay, Ontario P7B 5E1, Canada
Abstract: With more and more medical web services appearing on the web, a web service discovery mechanism becomes essential. UDDI is an online registry standard that facilitates the discovery of business partners and services. However, most medical imaging applications exist within their own protected domain and were never designed to participate and operate with other applications across the web. Private UDDI registries in federated organizations should nevertheless be able to share service descriptions as well as to access them if they are authorized. The new initiatives on Federated Web Services Identity Management can resolve a range of both technical and political barriers to enable wide-scale participation and interoperation of separate domains into a single, robust user experience. However, there is no widely accepted standard for federated web services, and most of the available vendor frameworks concentrate only on the security aspect of federation, leaving the searching and discovery of web services largely primitive. Federated web services security and web services searching are closely intertwined, mutually reliant on each other, and poised to finally solve a long-running problem in both IT and systems security. Traditional keyword search is insufficient for web services because the very small text fragments in web services are unsuitable for keyword search and the underlying structure and semantics of the web service are not exploited. Engineering solutions that address the security and accessibility concerns of web services is, however, a challenging task. This article introduces an extension to the traditional UDDI that enables sophisticated types of searching based on a lightweight federated security infrastructure for web services.
Keywords: Web service federation, web service security, svg image security, medical imaging, medical informatics
INTRODUCTION
Like many complex distributed systems, healthcare information systems involve a variety of services and participants. When giving and receiving medical care, for example, participants such as doctors, radiologists, technicians, administrative staff and patients frequently interact with information services such as medical records databases, radiology image stores and billing systems. In addition, users and services also communicate with external entities such as insurance companies, pharmacies and health clinics. While regular communication is essential between healthcare providers, these exchanges are largely inefficient. Currently, exchanges generally occur in paper form or electronically using mostly custom, incompatible legacy systems (e.g., PACS, RIS, DICOM). Because these disparate users and services lack a common communications framework, it is difficult for healthcare participants to obtain comprehensive medical information about patients when providing care. A patient may have multiple medical records stored at various locations (e.g., at a hospital, doctor’s clinic, pharmacy) and data such as lab results, drug prescriptions and disease histories are often not consolidated. Thus, it is likely that healthcare participants could provide higher quality medical care if they had access to such information and resources, especially during emergency situations. Medical work also has a range of other challenging properties that make it fundamentally different from a typical distributed office network: extreme mobility, ad hoc collaboration, interruptions, a high degree of communication, etc. – attributes that stand in strong contrast to normal office work. Moreover, healthcare services are turning to stronger authentication methods. Biometric methods (e.g., fingerprints, iris scans, signature and voice recognition) and non-biometric digital techniques (e.g., e-tokens, RFID, key fobs) are rapidly replacing passwords for authentication purposes. Thus, the security requirements of healthcare systems demand very dynamic and flexible policy enforcement.
As a remedy, the recent advent of XML and web services can be seen as an effective solution to these security issues. In particular, web services present a standardized, loosely coupled framework that can incorporate the complex, cross-boundary interactions of a healthcare system into a fully connected, distributed computer system. Utilizing a standardized computer language like XML allows a wide and diverse group of individuals or organizations to "talk" to each other, which greatly facilitates information gathering and online transactions. On the other hand, Web services are applications that can share information and services with other applications over the Internet using a common interface and messaging system. Such an integrated environment for exchanging information may revolutionize communication and information-sharing practices not only for healthcare systems, but also for a variety of other industries. This technology provides an easy way for entities to share data and services with other entities using a common framework and a standardized messaging protocol. Thus, a growing number of international associations, like the MedBiquitous Consortium (http://www.medbiq.org/) and the new HL7 initiative on Medical Informatics (the HL7 V3 Initiative, http://www.hl7.org/), are dedicating their efforts to accommodating this new trend of technology for constructing a new type of medical healthcare system. Such initiatives have provided an environment in which a growing number of web services and XML-based data and applications are available within hospitals and on the Web, which raises new and challenging research problems, particularly how to locate desired web services and how to access them securely.
Unfortunately, traditional keyword search is insufficient in the context of web services: the specific types of queries users require are not captured, the very small text fragments in web services are unsuitable for keyword search and the underlying structure and semantics of the web services are not exploited. Moreover, in creating mechanisms to collect and retrieve medical information, one must recognize that protecting patient privacy is a fundamental system requirement. Engineering solutions that address the security and accessibility concerns of web services, especially medical imaging services, is a challenging task. Many corporations and standards organizations currently undertaking this task have developed specifications and tools to address these concerns, but this effort is largely a work in progress.
MEDICAL IMAGING BASED ON SVG WEB SERVICES
Using XML to represent patient data records is a new trend in medical systems[3,4]. However, including binary images within the format of XML limits the ubiquity of XML and prevents these images or their contents from being searchable. Medical images are at the heart of the patient’s diagnosis, therapy treatment, surgical planning, disease screening and long-term follow-up for outcome assessment. Medical imaging is becoming increasingly important in patient records. In the past three decades, we have witnessed tremendous changes in medical imaging; new techniques include diagnostic ultrasound, X-ray computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), functional magnetic resonance imaging (fMRI), digital subtraction angiography (DSA), positron emission tomography (PET), magnetic source imaging (MSI) and so on. These digital imaging modalities currently constitute about 30 percent of medical imaging examinations and records in the United States[5]. These advancements in medical imaging require better means to acquire patient images from patient records[6]. In this direction, Scalable Vector Graphics (SVG) imaging promises to revolutionize the Web through the introduction of a standard based on vector graphics for imaging, animation and multimedia interactivity. The SVG standard allows complex graphical scenes to be represented by a collection of vector-based primitives, offering several advantages compared to classical raster images[6,7].
The broad support behind SVG comes from its many advantages. SVG has sophisticated graphic features, which is naturally important for a graphic format, but it also benefits from being an XML grammar. SVG has all the advantages XML brings such as internationalization (Unicode support), wide tool support, easy manipulation through standard APIs (e.g., the Document Object Model, DOM API, Batik API) and easy transformation (e.g., through XML style sheet Language Transformations, XSLT). In the graphical arena and especially compared to raster graphics formats (such as GIF, JPEG or PNG images), SVG has the advantage of being:
* Lightweight. For many types of graphics, an SVG graphic will be more compact than its raster equivalent
* Zoomable. SVG content can be viewed at different resolutions, e.g., enlarged or shrunk without losing quality.
* Searchable. Because SVG content is XML, it becomes possible to search the content of an SVG image for text elements, comments or any kind of meta-data (see the sketch after this list).
* Structured and Accessible. Graphic objects can be grouped and organized hierarchically.
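Because SVG is XML, searching image content reduces to ordinary XML processing. The following minimal sketch, assuming only the standard JAXP DOM API and an illustrative local file name, extracts the text, description and metadata elements of an SVG document and filters them by an example keyword; nothing in it is specific to the system described in this article.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class SvgTextSearch {
    public static void main(String[] args) throws Exception {
        // Parse the SVG file with a namespace-aware DOM parser.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document svg = factory.newDocumentBuilder().parse("mammogram.svg"); // illustrative file name

        // SVG is an XML grammar, so <text>, <desc> and <metadata> elements
        // can be enumerated and searched like any other XML content.
        String svgNs = "http://www.w3.org/2000/svg";
        for (String tag : new String[] {"text", "desc", "metadata"}) {
            NodeList nodes = svg.getElementsByTagNameNS(svgNs, tag);
            for (int i = 0; i < nodes.getLength(); i++) {
                String content = nodes.item(i).getTextContent().trim();
                if (content.toLowerCase().contains("tumor")) {   // example keyword
                    System.out.println(tag + ": " + content);
                }
            }
        }
    }
}
```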
It is natural for Web Services to accommodate SVG for graphical and image based services. In addition to being open and XML, SVG has a rich structure and preserves semantic because of its descriptive element and metadata. This richness provides an opportunity to Web Services to generate, modify or search rich graphical content[8]. Traditionally SVG has been used as a flexible imaging viewer only, which limited its potential for advanced imaging applications. Security issues have been the main challenge in SVG applications. A key challenge, therefore, is to enable the interoperability between SVG Web Services to take place seamlessly and securely[9].
BUILDING FEDERATED WEB SERVICES
To meet the challenge of current industry trends such as growth in increased mobility and the need for persistent connectivity, healthcare organizations are extending internal systems to external users providing connectivity to customers, partners, suppliers and mobile healthcare users. However, providing efficient and seamless connectivity requires building “trust-based” relationships that enable organizations to securely share a user’s identity information. Trust relationships allow identity and policy information to flow between healthcare organizations independent of platform, application, or security model. Trust relationships need to be formed quickly and efficiently to maximize productivity and eliminate the manual processes that often take place today. Web Service Federation describes the technology and business arrangements necessary for this interconnection[10]. The term federation derives from the Latin word for trust. In the world of distributed network services, the term refers to the need for trust agreements among decentralized security and policy domains. Federation lets access-management functions span diverse organizations, business units, sites, platforms, products and applications. Federation requires that an organization trust each trading partner to authenticate its own users’ identities. In a federated environment, a user can log on to his home domain and access resources transparently in external domains, such as those managed by customers or suppliers, subject to various policies defined by home and external administrators. Federated systems need to interoperate across organizational boundaries and connect processes utilizing different technologies, identity storage, security approaches and programming models. Within a federated system, identities and their associated credentials are still stored, owned and managed separately. Each individual member of the federation continues to manage its own identities, but is capable of securely sharing and accepting identities and credentials from other members’ sources. Within a federated system, an organization needs a standardized and secure way of expressing not only the services it makes available to trusted partners and customers, but also the policies by which it runs its business such as which other organizations and users it trusts, what types of credentials and requests it accepts and its privacy policies.
In this direction, web service federation requires standards, specifications and frameworks that describe the model for establishing both direct and brokered trust relationships (including third parties and intermediaries). The industry has yet to agree on a single standard. Although there have been serious attempts at a unified framework, the end of the tunnel is not yet in sight. Currently there are many standards and techniques for developing and managing federated web services. The most notable of all is ebXML, which provides an open, industry-wide standard for building support for collaborative Web services, including reliable messaging. This standard provides a suite of specifications, including:
- Reliable messaging: (ebMS) -- Provides guaranteed, once-and-once-only delivery, layered on SOAP messaging.
- Business process specifications (BPSS) -- Defines business activities, collaborations and transactions and describes their relationships. Also provides a machine-readable specification instance.
- Partner profile and agreements (CPP/A) -- Holds configuration information for partners’ runtime systems and stores quality-of-service (QOS) information.
- Registries and repositories (Reg/Rep) -- Provides a powerful classification and storage mechanism for artifacts, including BPSS and CPP/A.
More recent standards like WS-Federation have been developed by a group of major vendors (BEA, IBM, Microsoft, RSA Security and VeriSign [http://xml.coverpages.org/WS-Federation.pdf]). This standard provides a language (WS-Federation) that defines mechanisms "used to enable identity, account, attribute, authentication and authorization federation across different trust realms". Another group of vendors (Oracle, BEA, IBM, Microsoft) is also working aggressively on a standard for orchestrating Web-services-based end-to-end business processes: BPEL (Business Process Execution Language). BPEL is the XML standard that can create a trusted and federated environment for web services. Moreover, there are many other standards that can be used to enforce a federated web services environment:
- The Universal Business Language (UBL) ([http://docs.oasis-open.org/ubl/ed-UBL-1.0/])
- The Extensible Access Control Markup Language (XACML) ([http://java.sun.com/developer/technicalArticles/Security/xacml/xacml.html])
- The Business Transaction Protocol (BTP) ([http://www.developer.com/java/data/article.php/10932_3066301_2])
- The Liberty Alliance Project ([http://www.projectliberty.org/])
- XML Key Management Specifications ([http://www.w3.org/TR/xkms/])
Among the many evolving security standards for Web services mentioned above, there are only two major basic initiatives:
1. Security Assertion Markup Language (SAML): The SAML protocol relies on Single-Sign-On (SSO) services to deliver authentication through federated web services; ClearTrust 5 is an example implementation[22]. SAML defines a set of XML formats for representing identity and attribute information, as well as protocols for requests and responses for access control information. The key principle behind SAML is an assertion, a statement made by a trusted party about another. Assertions can be encoded in browser requests or included in Web services transactions, enabling logins for both person-to-machine and machine-to-machine communications.
2. WS-Security (WSS): This is a security standard that focuses on message integrity, confidentiality and authentication. WSS does not address SSO but covers message encryption in detail. WS-Security standardizes how security information is added to SOAP messages. One important class of such security information is what WS-Security calls Security Tokens -- a security token is "a collection of claims" about the sender (the sender typically proving its right to this claim through a digital signature). WS-Security begins with the assumption that, if one of the parties uses a particular type of security token (e.g. X.509 certificates, Kerberos tickets, SAML Assertions, XACML policies, etc.) within the WS-Security header, then the other party will be able to interpret and process this token. The basic architecture of WSS is shown in Fig. 1[12]. A SOAP client sends a SOAP message to a business application SOAP service (for the sake of this example a purchasing application), which, after appropriate business processing, sends the SOAP response back. Supporting this fundamental exchange are the components of the security infrastructure, a SOAP gateway and a security token service (STS), which work together to ensure that the SOAP service receives only messages with "appropriate" security tokens.

Fig. 1: Basic model of web service security
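As a concrete illustration of the token-in-header idea discussed above, the sketch below uses the standard SAAJ API to attach a wsse:Security header carrying a simple UsernameToken to a SOAP message before it is handed to a gateway. This is only a minimal, hand-rolled example; a WS-Security toolkit would normally build and sign such headers, and the body element, credentials and namespace prefixes are placeholders.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.*;

public class WssHeaderSketch {
    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public static SOAPMessage buildMessage() throws SOAPException {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPEnvelope env = msg.getSOAPPart().getEnvelope();

        // Add a wsse:Security header block carrying a UsernameToken.
        SOAPHeader header = env.getHeader();
        SOAPHeaderElement security =
            header.addHeaderElement(new QName(WSSE_NS, "Security", "wsse"));
        SOAPElement token = security.addChildElement("UsernameToken", "wsse", WSSE_NS);
        token.addChildElement("Username", "wsse", WSSE_NS).addTextNode("radiologist1"); // placeholder
        token.addChildElement("Password", "wsse", WSSE_NS).addTextNode("secret");       // placeholder

        // The body carries the actual service request; the element name is illustrative.
        msg.getSOAPBody().addChildElement(new QName("urn:example:imaging", "getStudy", "img"));
        msg.saveChanges();
        return msg;
    }
}
```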
Obviously, the SAML approach to WS security has been based on SSO: the centralization of access control information into one or more servers that require special plugins (e.g., "Web agents" for Web servers) to retrieve the information. Every application needs to be "SSO enabled" by programming to a proprietary API, different for each competing vendor. The coding task usually falls to the IT organization. Overall, this technology has not been as successful as originally hoped, with many SSO implementations either behind schedule or facing scalability challenges. Indeed, identity information and access control policies are some of the most valuable and frequently used data in any IT organization. Instead of coding to a proprietary agent (which in turn uses a proprietary protocol to communicate with a particular brand of identity management server), applications can make Web services (SOAP) requests to authenticate users or authorize transactions. In conclusion, WSS can be seen as more mature than SAML. Even though SAML and WSS overlap, they are not mutually exclusive and may be merged into a single standard in the future. However, to be successful with either SAML or WSS, one must build a layered system that supports current and future security implementations.
Thus, one interesting direction of our ongoing research is to extend SOAP to gain access to the actual network stream before it is deserialized into objects and vice versa. For instance, through these extensions encryption algorithms can be developed on top of the Web Service call. However, most SOAP extensions are transport-protocol dependent, especially for multimedia applications, such as RTP (http://www.ietf.org/rfc/rfc1889.txt), Multicast and UDP. To deal with this issue, brokering-based architectures can be used. In this direction we proposed an AXIS-Based Web Service Management Network (AWSMN) as a management architecture for federated service management[5]; since we believe it is safe to assume that Internet services will be implemented using web service technology (SOAP, XML, WSDL), AWSMN is based on such technology. The critical concept in AWSMN is that of Axis Security Handlers (ASHs). ASHs are intermediary components that can be explicitly defined to manage agreements between services. The ASH concept allows us to frame and solve many problems rather elegantly and effectively. AWSMN, then, is a network of cooperating intermediaries, each such intermediary implemented as a proxy sitting between a service and the outside world, together with a set of protocols to manage service relationships expressed through ASHs (Fig. 2).

Fig. 2: The AWSMN System
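To make the ASH idea concrete, the following sketch shows the general shape of an Axis 1.x handler that such an intermediary could be built on: the handler intercepts every SOAP message flowing through a service's request or response chain and is the natural place to apply encryption, signing or policy checks. The class name, the policy check and the fault text are illustrative only; the real ASH implementations are described in the cited work.

```java
import org.apache.axis.AxisFault;
import org.apache.axis.MessageContext;
import org.apache.axis.handlers.BasicHandler;

// Sketch of an Axis Security Handler (ASH): an intermediary placed in the
// Axis handler chain between a service and the outside world.
public class AxisSecurityHandlerSketch extends BasicHandler {

    @Override
    public void invoke(MessageContext msgContext) throws AxisFault {
        try {
            // Every inbound or outbound SOAP message passes through here,
            // so agreement-specific processing (XML Encryption, XML Signature,
            // token checks, ...) can be applied before the message continues.
            String soap = msgContext.getCurrentMessage().getSOAPPartAsString();
            if (!soap.contains("Security")) {                 // placeholder policy check
                throw new AxisFault("Message rejected: no security header present");
            }
        } catch (Exception e) {
            throw AxisFault.makeFault(e);
        }
    }
}
```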
The AWSMN is concerned with the following security-related tasks:
* XML Encryption/Decryption (used by various ASH handlers).
* XML Signature/Verification (also used by various ASH handlers).
* Generation and storage of RSA public/private key pairs. The public key is always wrapped in an X.509 certificate.
* Authenticates user requests for Web Service usage.
* Authorizes users for Web Service usage.
* Handles group management tasks such as group creation, member addition, WS addition and related tasks such as organization and registration/deletion.
* Assist in handling WS search.
* Processes messages.
Tasks 1-3 and 8 are performed locally using the Secure class. However, tasks 4-6 are performed remotely by the other classes of the AWSMN Security Subsystem (Fig. 3). Task 7 is a compound task that includes tasks 1-6 and works in collaboration with the UDDI Subsystem.

Fig. 3: The AWSMN security subsystem
Details of the AWSMN classes implementation can be found in [33,6].
UDDI SEARCH ENGINE
Basically, a UDDI client works by searching UDDI servers for all services that match the required service name by parsing the WSDL specifications. Each web service has an associated WSDL file describing its functionality and interface. A web service is typically published by registering its WSDL file in UDDI registries. Each web service consists of a set of operations. Through the WSDL, we have access to the following information on web services: name and text description, operation descriptions and input/output descriptions. To the outside world, UDDI provides two sets of interfaces: one for service registration and one for service discovery. For discovery, UDDI can be considered a White Pages registry that provides business contact information. However, UDDI search engines in this context do not help in categorizing web services based on accepted business classifications, nor do they comply with the federated security initiatives. For this purpose we are extending the UDDI client to accommodate category-based searching facilities as well as to comply with the AWSMN federated web service model. The advantage of this approach is that several candidate services can be considered and searched and, by using a unified classification pattern, a single interface is provided to all of the candidates. In this context, UDDI can be extended to represent a Yellow Pages registry that categorizes businesses and their services according to standard taxonomies. To make UDDI aware of a business classification scheme, we used the mammography cancer staging categories described by the American Joint Committee on Cancer (http://www.imaginis.com/breasthealth/staging.asp).
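A category-aware lookup of this kind can be expressed directly with the JAXR API that the client already relies on. The sketch below assumes a reachable UDDI inquiry URL and an already registered classification scheme named "BreastCancerStaging" (both placeholders, not the names used by the actual implementation) and finds all services classified under one staging value.

```java
import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.*;
import javax.xml.registry.infomodel.*;

public class CategorySearchSketch {
    public static void main(String[] args) throws JAXRException {
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                          "http://uddi.example.org/inquiry");       // placeholder registry URL
        ConnectionFactory cf = ConnectionFactory.newInstance();
        cf.setProperties(props);
        Connection conn = cf.createConnection();

        RegistryService rs = conn.getRegistryService();
        BusinessQueryManager bqm = rs.getBusinessQueryManager();
        BusinessLifeCycleManager blm = rs.getBusinessLifeCycleManager();

        // Look up the (assumed) staging classification scheme and build a
        // classification for one staging value, e.g. tumor size "T2".
        ClassificationScheme staging =
            bqm.findClassificationSchemeByName(null, "BreastCancerStaging");
        Classification t2 = blm.createClassification(staging, "Tumor size T2", "T2");

        // Find every registered service carrying that classification.
        BulkResponse br = bqm.findServices(null, null, null,
                                           Collections.singleton(t2), null);
        for (Object o : br.getCollection()) {
            Service svc = (Service) o;
            System.out.println("Matched service: " + svc.getName().getValue());
        }
        conn.close();
    }
}
```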
The categories are kept in an external XML file and can be changed to any other application. However, the categories use the tumor, nodes, metastasis (TNM) classification system. The stage of a breast cancer describes its size and the extent to which it has spread. The staging system ranges from stage 0 to stage IV according to tumor size, lymph nodes involved and distant metastasis. T indicates tumor size. The letter T is followed by a number from 0 to 4, which describes the size of the tumor and whether it has spread to the skin or chest wall under the breast. Higher T numbers indicate a larger tumor and/or more extensive spread to tissues surrounding the breast.
a. TX: The tumor cannot be assessed.
b. T0: No evidence of a tumor is present.
c. Tis: The cancer may be LCIS, DCIS, or Paget disease.
d. T1: The tumor is 2 cm or smaller in diameter.
e. T2: The tumor is 2-5 cm in diameter.
f. T3: The tumor is more than 5 cm in diameter.
g. T4: The tumor is any size and it has attached itself to the chest wall and spread to the pectoral (chest) lymph nodes.
The letter N indicates palpable nodes. The letter N is followed by a number from 0 to 3, which indicates whether the cancer has spread to lymph nodes near the breast and if so, whether the affected nodes are fixed to other structures under the arm.
a. NX: Lymph nodes cannot be assessed (e.g., lymph nodes were previously removed).
b. N0: Cancer has not spread to lymph nodes.
c. N1: Cancer has spread to the movable ipsilateral axillary lymph nodes (underarm lymph nodes on the same side as the breast cancer).
d. N2: Cancer has spread to ipsilateral lymph nodes (on the same side of the body as the breast cancer), fixed to one another or to other structures under the arm.
e. N3: Cancer has spread to the ipsilateral mammary lymph nodes or the ipsilateral supraclavicular lymph nodes (on the same side of the body as the breast cancer).
The letter M indicates metastasis. The letter M is followed by a 0 or 1, which indicates whether the cancer has metastasized (spread) to distant organs (e.g., lungs or bones) or to lymph nodes that are not next to the breast, such as those above the collarbone.
a. MX: Metastasis cannot be assessed.
b. M0: No distant metastasis to other organs is present.
c. M1: Distant metastasis to other organs has occurred.
Hence, the main purpose of the extended UDDI client is to search for SVG-based mammograms according to the breast cancer staging categories and via the Security Server[16]. The object which provides the main functionality of this UDDI client (the Search and Retrieve Subsystem) is a singleton of type uddiConnectivity. Fig. 4 illustrates the main classes involved in the UDDI client. The uddiMessage class inherits from class java.lang.Exception. Methods in the uddiConnectivity singleton object will sometimes create and throw objects of type uddiMessage to pass messages to the GUI subsystem, such as “No Services Found”. The methods of class uddiConnectivity throw other exceptions as well that are mostly JAXB-related. Information (name, which SVG service offers it, ...) regarding each SVG found by an SVG search is stored in an svgImageResult object and these objects are passed to the GUI subsystem. If the user indicates the desire to view an SVG listed in the search results list, this subsystem will retrieve the URL of the SVG Web Service offering that SVG and store it in an accessPoints object. Next, the accessPoints object will be passed to the SVG Image Retrieval Subsystem.

This image retrieval subsystem is responsible for the secure retrieval of SVG Images and is shown in Fig. 5. The axisRPC class is designed specifically to encapsulate remote procedure calls to an RPC Style SVG Web Service. To create an object of this class, the only parameter required is an accessPoints object (which contains the URL needed to contact a particular SVG Service). Using this object, the constructor will create the necessary Axis call objects. The Axis-related code of this class is similar to that of the securityRPC class of the Security Subsystem.

Fig. 5: The SVG image retrieval subsystem
The axisRPCPool class stores a number of axisRPC objects in a static vector object. Each axisRPC object in the pool is configured to connect to exactly one SVG Web Service, and no two of these objects connect to the same SVG Web Service. Each time the Client user attempts to retrieve an SVG from an SVG Web Service that he has not accessed before, a new axisRPC object configured to connect to that SVG Web Service will be added to the pool. When an axisRPC object is created, the public key certificate of the SVG Web Service it will connect to must be retrieved. An advantage of the axisRPCPool class is that the certificate of each SVG web service needs to be downloaded only once. Note that the pool must be rebuilt each time the Client is restarted (the public key certificates are not saved locally). The axisRPC getSVG(...) method returns an object of type svgImage. Each svgImage object contains an SVG image (stored in an org.w3c.dom.Document) as well as the name and udid of that SVG image. Objects of type svgImage are passed to the GUI Subsystem.
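The pooling idea — at most one remote-call object per SVG Web Service endpoint, so that each service's certificate is fetched only once — can be captured with a simple keyed cache. The sketch below is a simplified stand-in for the axisRPCPool class described above; the AxisRpc type and its certificate handling are placeholders for the Axis-based implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for axisRPCPool: at most one call object per
// SVG Web Service URL, created lazily and reused for later retrievals.
public class AxisRpcPoolSketch {

    // Placeholder for the Axis-based axisRPC class from the paper.
    public static class AxisRpc {
        final String endpointUrl;
        AxisRpc(String endpointUrl) {
            this.endpointUrl = endpointUrl;
            // In the real class the service's X.509 certificate would be
            // downloaded here, once per endpoint, and kept for encryption
            // and signature verification.
        }
    }

    private static final Map<String, AxisRpc> POOL = new HashMap<>();

    // Returns the pooled call object for an endpoint, creating it on first use.
    public static synchronized AxisRpc forEndpoint(String endpointUrl) {
        return POOL.computeIfAbsent(endpointUrl, AxisRpc::new);
    }
}
```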
The HandlerClientSecurityRequest and HandlerClientSecurityResponse instances behave similarly to the ASHs discussed in [15], with only one difference: the public key of the SVG Web Service being contacted is used for the encryption operation in the Request Handler and for the signature verification operation in the Response Handler.
The UDDI client or the SVG Search Engine client can then query the UDDI registry to discover the required Web services and can make use of them. Searches can be performed based on a number of criteria types such as the service name, Breast Cancer classification and the service groups. The system supports hierarchical classification schemes and classification-based searches will return all SVGs with a more specific classification than the one the user chose to search for (Fig. 6).
Fig. 6: UDDI SVG search client
a. Searching for breast cancer web service
b. Searching via the group type
Fig. 7: Searching by category and by group
Fig. 8: The SVG web services search results
After the results of a search are displayed, the user can attempt to view any of the SVGs listed. After the user chooses to view an SVG, the Client will retrieve the location of the SVG Web Service from the UDDI Server. Next, the Client will attempt retrieval of the SVG image. Whether or not this retrieval is successful depends on whether or not the user is a member of the group the SVG belongs to. The SVG Web Service will contact the Security Server to both authenticate (based on the user’s digital signature) and authorize users who request SVGs.
To prevent someone from setting up a fake Security Server in an attempt to gain access to SVGs that have been registered, all messages sent from the Security Server are digitally signed. Moreover, every message sent by each user is digitally signed by that user. Although digital signatures prove that a message is from a particular user and that this message has not been changed, they do not prevent others from seeing what that message is. Therefore, most messages sent within the system are encrypted.
**Searching by SVG web service name:** A search in which name criteria are specified will find all SVGs whose names meet those criteria. Some of the SQL-92 syntax, such as % (matches any zero or more characters) and _ (matches any one character), is supported (this support is actually built into the JAXR/UDDI API). For example, if you enter %SVG% in the “Search By Name” field, all SVGs whose names contain the string “SVG” will be found; if you enter just SVG, all SVGs whose names begin with the string “SVG” will be found. If the “Exact Match” search criterion is specified, only SVGs whose names match the contents of the “Search By Name” field exactly will be found; for example, entering “Blue” would find only SVGs named “Blue”. Suppose you want to find all SVGs whose names include both the strings “SVG” and “Service”. This can be done by entering the following string into the “Search By Name” field: %SVG%Service%. Now suppose you want to find all SVGs whose names include the string “SVG” or the string “Service”. This can be done using the | character.
In that case the following string would be entered into the “Search By Name” field: %SVG%|%Service%. If the % character is entered in the “Search By Name” field more than once, the search will always fail to produce results; the cause of this behaviour has not yet been determined and is left for future investigation.
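The wildcard behaviour described above maps directly onto the name-pattern argument of the JAXR findServices call. A minimal fragment, assuming a BusinessQueryManager obtained as in the earlier classification-search sketch, might look as follows; the patterns are the ones discussed in the text.

```java
import java.util.Arrays;
import javax.xml.registry.BulkResponse;
import javax.xml.registry.BusinessQueryManager;
import javax.xml.registry.JAXRException;
import javax.xml.registry.infomodel.Service;

public class NameSearchSketch {
    // "%SVG%Service%" finds names containing "SVG" followed by "Service";
    // "%SVG%|%Service%" finds names containing either string.
    static void findByName(BusinessQueryManager bqm, String pattern) throws JAXRException {
        BulkResponse br = bqm.findServices(null, null,
                                           Arrays.asList(pattern), null, null);
        for (Object o : br.getCollection()) {
            System.out.println(((Service) o).getName().getValue());
        }
    }
}
```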
**Searching by breast cancer classification:** To search for an SVG based upon “Breast Diagnosis”, “tumor size”, “palpable nodes”, or “metastasis”, double click the appropriate “Classification” table entry. This will cause the Client to display a window similar to that shown in Fig. 7a. After selecting the appropriate classification, click “OK” to confirm your selection.
You can search for SVGs based upon any combination of these classifications.
Note that classification-based searches will always return all SVGs with a more specific classification than the one the user chose to search for, as well as those with the chosen classification itself. For example, if you chose to search based upon the “Breast Cancer” classification shown in Fig. 7a, all SVGs with any of the classifications in the black box shown in Fig. 8 would be returned.
Once you have specified your search criteria, click the “Get SVGs” button. This will cause a list of SVGs to be retrieved from the UDDI server. If any SVG in the list is clicked, that SVG will display in the “SVG Viewer” tab, as shown in Fig. 9.
Fig. 11: The UDDI SVG client subsystems
Fig. 12: A sequence diagram illustrating how the various subsystems are called after the user press get SVG button
Searching by group: To search by group, double click the text field next to the “Search By Group” check box shown in Fig. 7b. You can retrieve a list of groups based on any combination of the following criteria: whether or not you “own”, “belong to”, or “don’t belong to” a group and whether or not the name of the group contains a certain string. If the “Name Containing” text field is left blank, it will be ignored. Once you have selected your search criteria, click the “Submit Query” button to retrieve a list of groups. If the group that you are looking for appears in the results list, select that group and then click “OK”. The name of this group should now appear in the text field next to the “Search By Group” check box.
CONCLUSION
The AWSMN provides a secure means of sharing medical SVG files between Clients which utilize Apache Axis. The architecture of the system is shown in Fig. 10. We can see that the system has three main components: a Security Server (Security Web Service), a UDDI Server (UDDI Web Service) and any number of Clients.
Exactly one user and one SVG Web Service are associated with each Client. However, the SVG Web Service allows the user to share any number of SVGs with other users who belong to particular groups. Each Client has a special folder in which the user can place all SVGs that should be accessible by the SVG Web Service and therefore by other users who belong to the appropriate groups. Each user can create, and therefore own, any number of groups. The owner of a group decides who is allowed to join that group and is also a member. Each group can have a subset of all users registered as members and will always have at least one member (the owner). Each user can and should register the SVGs offered by his
SVG Web Service with the Security Server. During this registration process, the user will specify which groups his SVGs can be accessed by (Currently, an SVG can be accessed by exactly one group) and therefore, the set of SVGs belonging to a group is a subset of the union of all SVGs owned by its members. Of course, before a user can register SVGs to a particular group, he must be a member of that group. The Security Server will register each group and the SVGs that belong to it with the UDDI Server.
When a user wishes to search for SVGs, they will contact the UDDI Server directly. Searches can be performed based on a number of criteria types such as name, Breast Cancer classification and group. The system supports hierarchical classification schemes and classification-based searches will return all SVGs with a more specific classification than the one the user chose to search for.
The UDDI SVG Client is composed of seven subsystems, as shown in Fig. 11. In Fig. 11, a line between two subsystems indicates that some kind of interaction takes place between them. The type of interaction taking place is not indicated. Possible interactions include a subsystem accessing an instance variable located in an object in another subsystem, a subsystem calling one of the methods of a singleton object contained in another subsystem, etc. As indicated in the diagram, many interactions take place between the seven subsystems. In fact, there are many interactions taking place between the objects and classes of each subsystem as well.
Moreover, Fig. 12 shows the interactions that occur when a user searches for SVGs. This diagram shows what happens when a user attempts to view an SVG. After the user enters the search criteria, he will click the “Get SVGs” button shown in Fig. 7. The GUI Subsystem will pass the search criteria to the uddiConnectivity findSVGs(...) method, which will use JAXR to create and send a SOAP message that specifies search criteria for a tModel classification-based search. This SOAP message will be sent to the UDDI Web Service. The UDDI Web Service will reply to uddiConnectivity with a SOAP message containing the results of the search. UddiConnectivity will store the results of the search in a number of svgImageResult objects and will send them to the GUI Subsystem. Finally, the GUI Subsystem will display the search results for the user. However, it is important to mention that the GUI subsystem stores the classifications in a JTable. With each client, the GUI loads the following four classification schemes: Breast Diagnosis, tumor size, palpable nodes and metastasis. Indeed, the GUI could easily be changed to check which classifications exist in the myClassificationSchemes.xml file. It could then simply create a tree object to store each classification it finds and associate each tree object with a column of the JTable. Therefore, assuming that the program would come preinstalled with appropriate classification schemes, only minimal changes are needed.
When the user indicates his desire to view an SVG, the GUI Subsystem will ask uddiConnectivity to retrieve the URL of the SVG Web Service offering the SVG from the UDDI Web Service. When it receives the UDDI Web Service’s response, uddiConnectivity will return an accessPoints object, which contains the URL, to the GUI Subsystem. The GUI Subsystem will retrieve a reference to the appropriate axisRPC object from the axisRPC pool found in the SVG Image Retrieval (SVGR) Subsystem. Using this object, it will tell the SVGR Subsystem to retrieve the SVG. The SVGR Subsystem will send an encrypted and signed SVG Request SOAP message to the appropriate SVG Web Service, which may be located within the same Client or another user’s Client.
When the SVG Web Service receives the user’s request, it will tell the Security Subsystem to ask the Security Web Service if the user is authorized to view the SVG. The Security Subsystem will react by sending the Security Web Service a signed and encrypted Authorized? SOAP Message. The Security Web Service will reply with a signed and encrypted Authorization SOAP Message, which will indicate to SVG Web Service whether or not the user is authorized to view the SVG. If the user is authorized, the SVG will be sent to the SVGR Subsystem of the requesting user’s Client within a signed and encrypted SOAP Message. The SVGR Subsystem will send an svgImage object containing the SVG to the GUI Subsystem, which will display the SVG for the user.
This research also aims to extend AWSMN to work for peer-to-peer environments based on JXTA through adding a bridge between AXIS and JXTA.
REFERENCES
4. Health Level Seven XML Patient Record Architecture [http://xml.coverpages.org/hl7PRA.html]
Abstract
This document defines a SIP Usage for REsource LOcation And Discovery (RELOAD). The SIP Usage provides the functionality of a SIP proxy or registrar in a fully-distributed system and includes a lookup service for Address of Records (AORs) stored in the overlay. It also defines Globally Routable User Agent URIs (GRUUs) that allow the registrations to map an AOR to a specific node reachable through the overlay. After such initial contact of a peer, the RELOAD AppAttach method is used to establish a direct connection between nodes through which SIP messages are exchanged.
Table of Contents
1. Introduction
2. Terminology
3. Registering AORs in the Overlay
   3.1. Overview
   3.2. Data Structure
   3.3. Access Control
   3.4. Overlay Configuration Document Extension
4. Looking up an AOR
   4.1. Finding a Route to an AOR
   4.2. Resolving an AOR
5. Forming a Direct Connection
   5.1. Setting Up a Connection
   5.2. Keeping a Connection Alive
6. Using GRUUs
7. SIP-REGISTRATION Kind Definition
8. Security Considerations
   8.1. RELOAD-Specific Issues
   8.2. SIP-Specific Issues
      8.2.1. Fork Explosion
1. Introduction
REsource LOcation And Discovery (RELOAD) [RFC6940] specifies a peer-to-peer (P2P) signaling protocol for the general use on the Internet. This document defines a SIP Usage of RELOAD that allows SIP [RFC3261] user agents (UAs) to establish peer-to-peer SIP (or SIPS) sessions without the requirement for permanent proxy or registration servers, e.g., a fully distributed telephony service. This service transparently supports SIP addressing including telephone numbers. In such a network, the RELOAD overlay itself performs the registration and rendezvous functions ordinarily associated with such servers.
The SIP Usage involves two basic functions.
Registration: SIP UAs can use the RELOAD data storage functionality to store a mapping from their address-of-record (AOR) to their Node-ID in the overlay, and to retrieve the Node-ID of other UAs.
Rendezvous: Once a SIP UA has identified the Node-ID for an AOR it wishes to call, it can use the RELOAD message routing system to set up a direct connection for exchanging SIP messages.
Mappings are stored in the SipRegistration Resource Record defined in this document. All operations required to perform a SIP registration or rendezvous are standard RELOAD protocol methods.
For example, Bob registers his AOR, "bob@dht.example.com", for his Node-ID "1234". When Alice wants to call Bob, she queries the overlay for "bob@dht.example.com" and receives Node-ID "1234" in
return. She then uses the overlay routing to establish a direct connection with Bob and can directly transmit a standard SIP INVITE. In detail, this works along the following steps.
1. Bob, operating Node-ID "1234", stores a mapping from his AOR to his Node-ID in the overlay by applying a Store request for "bob@dht.example.com -> 1234".
2. Alice, operating Node-ID "5678", decides to call Bob. She retrieves Node-ID "1234" by performing a Fetch request on "bob@dht.example.com".
3. Alice uses the overlay to route an AppAttach message to Bob’s peer (ID "1234"). Bob responds with his own AppAttach and they set up a direct connection, as shown in Figure 1. Note that mutual Interactive Connectivity Establishment (ICE) checks are invoked automatically from AppAttach message exchange.
Alice (5678)            Overlay (Peer1 ... PeerN)            Bob (1234)
     |---- AppAttach ---->|---- AppAttach ----> ... ---- AppAttach ---->|
     |<--- AppAttach -----|<--- AppAttach ----- ... <--- AppAttach -----|
     |<------------------------- ICE Checks -------------------------->|
     |---------------------------- INVITE ---------------------------->|
     |<----------------------------- OK ------------------------------|
     |----------------------------- ACK ----------------------------->|
     |<--------------------- ICE Checks for media -------------------->|
     |<----------------------------- RTP ----------------------------->|
Figure 1: Connection setup in P2P SIP using the RELOAD overlay
It is important to note that here the only role of RELOAD is to set up the direct SIP connection between Alice and Bob. As soon as the ICE checks complete and the connection is established, ordinary SIP or SIPS is used. In particular, the establishment of the media channel for a phone call happens via the usual SIP mechanisms, and RELOAD is not involved. Media never traverses the overlay. After
the successful exchange of SIP messages, call peers run ICE connectivity checks for media.
In addition to mappings from AORs to Node-IDs, the SIP Usage also allows mappings from AORs to other AORs. This enables an indirection useful for call forwarding. For instance, if Bob wants his phone calls temporarily forwarded to Charlie, he can store the mapping "bob@dht.example.com -> charlie@dht.example.com". When Alice wants to call Bob, she retrieves this mapping and can then fetch Charlie’s AOR to retrieve his Node-ID. These mechanisms are described in Section 3.
Alternatively, Globally Routable User Agent URIs (GRUUs) [RFC5627] can be used for directly accessing peers. They are handled via a separate mechanism, as described in Section 6.
The SIP Usage for RELOAD addresses a fully distributed deployment of session-based services among overlay peers. This RELOAD usage may be relevant in a variety of environments, including a highly regulated environment of a "single provider" that admits parties using AORs with domains from controlled namespace(s) only, or an open, multi-party infrastructure that liberally allows a registration and rendezvous for various or any domain namespace. It is noteworthy in this context that - in contrast to regular SIP - domain names play no role in routing to a proxy server. Once connectivity to an overlay is given, any name registration can be technically processed.
2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
We use the terminology and definitions from Concepts and Terminology for Peer to Peer SIP [I-D.ietf-p2psip-concepts] and the RELOAD Base Protocol [RFC6940] extensively in this document.
In addition, term definitions from SIP [RFC3261] apply to this memo. The term AOR is the SIP "Address of Record" used to identify a user in SIP. For example, alice@example.com could be the AOR for Alice. For the purposes of this specification, an AOR is considered not to include the scheme (e.g. sip:) as the AOR needs to match the rfc822Name in the X509v3 certificates [RFC5280]. It is worth noting that SIP and SIPS are distinguished in P2PSIP by the Application-ID.
3. Registering AORs in the Overlay
3.1. Overview
In ordinary SIP, a UA registers the user’s AOR and its network location with a registrar. In RELOAD, this registrar function is provided by the overlay as a whole. To register its location, a RELOAD peer stores a SipRegistration Resource Record under its own AOR using the SIP-REGISTRATION Kind, which is formally defined in Section 7. Note that the registration lifetime known from the regular SIP REGISTER method is inherited from the lifetime attribute of the basic RELOAD StoredData structure (see Section 7 in [RFC6940]).
A RELOAD overlay MAY restrict the storage of AORs. Namespaces (i.e., the right hand side of the AOR) that are supported for registration and lookup can be configured for each RELOAD deployment as described in Section 3.4.
As a simple example, consider Alice with AOR "alice@dht.example.org" at Node-ID "1234". She might store the mapping "alice@dht.example.org -> 1234" telling anyone who wants to call her to contact node "1234".
RELOAD peers can store two kinds of SIP mappings,
- from an AOR to a destination list (a single Node-ID is just a trivial destination list), or
- from an AOR to another AOR.
The meaning of the first kind of mapping is "in order to contact me, form a connection with this peer." The meaning of the second kind of mapping is "in order to contact me, dereference this AOR". The latter allows for forwarding. For instance, if Alice wants her calls to be forwarded to her secretary, Sam, she might insert the following mapping "alice@dht.example.org -> sam@dht.example.org".
3.2. Data Structure
This section defines the SipRegistration Resource Record as follows:
enum { sip_registration_uri(1), sip_registration_route(2),
       (255) } SipRegistrationType;

select (SipRegistration.type) {
    case sip_registration_uri:
        opaque uri<0..2^16-1>;

    case sip_registration_route:
        opaque contact_prefs<0..2^16-1>;
        Destination destination_list<0..2^16-1>;

    /* This type can be extended */
} SipRegistrationData;

struct {
    SipRegistrationType type;
    uint16 length;
    SipRegistrationData data;
} SipRegistration;
The contents of the SipRegistration Resource Record are:
type
    the type of the registration

length
    the length of the rest of the PDU

data
    the registration data

    If the registration is of type "sip_registration_uri", then the contents are an opaque string containing the AOR.

    If the registration is of type "sip_registration_route", then the contents are an opaque string containing the registrant’s contact preferences and a destination list for the peer.
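As an illustration of how a sip_registration_uri entry would be serialised under the usual presentation-language rules (a one-byte enum value, a two-byte length, and a two-byte length prefix on the opaque string), a minimal sketch follows. This is an interpretation of the structure above for illustration only, not normative encoding code.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SipRegistrationEncoder {
    static final int SIP_REGISTRATION_URI = 1;

    // Encodes a SipRegistration of type sip_registration_uri:
    //   type (1 byte) | length (2 bytes) | uri length (2 bytes) | uri bytes
    static byte[] encodeUriRegistration(String aor) throws IOException {
        byte[] uri = aor.getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(SIP_REGISTRATION_URI);   // SipRegistrationType
        out.writeShort(2 + uri.length);        // length of the data that follows
        out.writeShort(uri.length);            // opaque uri<0..2^16-1> length prefix
        out.write(uri);                        // uri bytes
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = encodeUriRegistration("bob@dht.example.com");
        System.out.println("Encoded SipRegistration is " + wire.length + " bytes");
    }
}
```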
The callee expresses its capabilities within the contact preferences as specified in [RFC3840]. It encodes a media feature set comprised of its capabilities as a contact predicate, i.e., a string of feature parameters that appear as part of the Contact header field. Feature parameters are derived from the media feature set syntax of [RFC2533] (see also [RFC2738]) as described in [RFC3840].
This encoding covers all SIP User Agent capabilities, as defined in [RFC3840] and registered in the SIP feature tag registration tree. In particular, a callee can indicate that it prefers contact via a particular SIP scheme - SIP or SIPS - by using one of the following contact_prefs attribute:
(sip.schemes=SIP)
(sip.schemes=SIPS)
RELOAD explicitly supports multiple registrations for a single AOR. The registrations are stored in a Dictionary with Node-IDs as the dictionary keys. Consider, for instance, the case where Alice has two peers:
- her desk phone (1234)
- her cell phone (5678)
Alice might store the following in the overlay at resource "alice@dht.example.com".
- A SipRegistration of type "sip_registration_route" with dictionary key "1234" and value "1234".
- A SipRegistration of type "sip_registration_route" with dictionary key "5678" and value "5678".
Note that this structure explicitly allows one Node-ID to forward to another Node-ID. For instance, Alice could set calls to her desk phone to ring at her cell phone by storing a SipRegistration of type "sip_registration_route" with dictionary key "1234" and value "5678".
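Viewed abstractly, the stored state for Alice's AOR is a small dictionary keyed by Node-ID. A sketch of that view, using plain Java maps and the example values from the text, might be:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RegistrationDictionarySketch {
    public static void main(String[] args) {
        // Dictionary stored under Resource Name "alice@dht.example.com":
        // dictionary key = Node-ID of the registering device,
        // value = the destination a caller should attach to.
        Map<String, String> aliceRegistrations = new LinkedHashMap<>();
        aliceRegistrations.put("1234", "1234"); // desk phone registers itself
        aliceRegistrations.put("5678", "5678"); // cell phone registers itself

        // Forwarding desk-phone calls to the cell phone only changes the value:
        aliceRegistrations.put("1234", "5678");

        aliceRegistrations.forEach((nodeId, dest) ->
                System.out.println("key " + nodeId + " -> destination " + dest));
    }
}
```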
3.3. Access Control
In order to prevent hijacking or other misuse, registrations are subject to access control rules. Two kinds of restrictions apply:
A Store is permitted only for AORs with domain names that fall into the namespaces supported by the RELOAD overlay instance.
Storing requests are performed according to the USER-NODE-MATCH access control policy of RELOAD.
Before issuing a Store request to the overlay, any peer SHOULD verify that the AOR of the request is a valid Resource Name with respect to its domain name and the namespaces defined in the overlay configuration document (see Section 3.4).
Before a Store is permitted, the storing peer MUST check that:
- The AOR of the request is a valid Resource Name with respect to the namespaces defined in the overlay configuration document.
- The certificate contains a username that is a SIP AOR which hashes to the Resource-ID it is being stored at.
- The certificate contains a Node-ID that is the same as the dictionary key it is being stored at.
If any of these checks fail, the request MUST be rejected with an Error_forbidden error.
Note that these rules permit Alice to forward calls to Bob without his permission. However, they do not permit Alice to forward Bob’s calls to her. See Section 8.2.2 for additional descriptions.
3.4. Overlay Configuration Document Extension
The use of a SIP-enabled overlay MAY be restricted to users with AORs from specific domains. When deploying an overlay service, providers can decide about these use case scenarios by defining a set of namespaces for admissible domain names. This section extends the overlay configuration document by defining new elements for patterns that describe a corresponding domain name syntax.
A RELOAD overlay can be configured to accept store requests for any AOR, or to apply domain name restrictions. To apply restrictions, the overlay configuration document needs to contain a <domain-restrictions> element. The <domain-restrictions> element serves as a container for zero to multiple <pattern> sub-elements. A <pattern> element MAY be present if the "enable" attribute of its parent element is set to true. Each <pattern> element defines a pattern for constructing admissible resource names. It is of type xsd:string and interpreted as a regular expression according to "POSIX Extended Regular Expression" (see the specifications in [IEEE-Posix]).
Encoding of the domain name complies to the restricted ASCII character set without character escaping as defined in Section 19.1 of [RFC3261].
Inclusion of a <domain-restrictions> element in an overlay configuration document is OPTIONAL. If the element is not included, the default behavior is to accept any AOR. If the element is included and the "enable" attribute is not set or set to false, the overlay MUST only accept AORs that match the domain name of the overlay. If the element is included and the "enable" attribute is set to true, the overlay MUST only accept AORs that match patterns specified in the <domain-restrictions> element.
Example of Domain Patterns:
```
dht\.example\.com
.*\.my\.example
```
In this example, any AOR will be accepted that is either of the form <user>@dht.example.com, or ends with the domain "my.example".
The Relax NG Grammar for the AOR Domain Restriction reads:
```
# AOR DOMAIN RESTRICTION URN SUB-NAMESPACE
# AOR DOMAIN RESTRICTION ELEMENT
Kind-parameter &= element sip:domain-restriction {
attribute enable { xsd:boolean }
# PATTERN ELEMENT
element sip:pattern { xsd:string }*
}
```
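As a non-normative illustration, the two patterns from the example above could be carried in an overlay configuration document roughly as follows; the element and attribute names are the <domain-restrictions>, <pattern>, and "enable" names defined in this section, while the surrounding document structure is assumed from [RFC6940]:
```
<!-- Hypothetical configuration fragment: domain restrictions enabled,
     admitting user@dht.example.com and any AOR ending in my.example. -->
<domain-restrictions enable="true">
  <pattern>dht\.example\.com</pattern>
  <pattern>.*\.my\.example</pattern>
</domain-restrictions>
```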
### 4. Looking up an AOR
#### 4.1. Finding a Route to an AOR
A RELOAD user who is a member of an overlay and wishes to call another user with a given AOR SHALL proceed in the following way.
- **AOR is GRUU?** If the AOR is a GRUU for this overlay, the callee can be contacted directly as described in Section 6.
- **AOR domain is hosted in overlay?** If the domain part of the AOR matches a domain pattern configured in the overlay, the user can continue to resolve the AOR in this overlay. The user MAY choose to query the DNS service records to search for additional support of this domain name.
- **AOR domain not supported by overlay?** If the domain part of the AOR is not supported in the current overlay, the user might query the DNS (or other discovery services at hand) to search for an alternative overlay that services the AOR under request. Alternatively, standard SIP procedures for contacting the callee might be used.
- **AOR inaccessible?** If all of the above contact attempts fail, the call fails.
The procedures described above likewise apply when nodes are simultaneously connected to several overlays.
#### 4.2. Resolving an AOR
A RELOAD user that has discovered a route to an AOR in the current overlay SHALL execute the following steps.
1. Perform a Fetch for Kind SIP-REGISTRATION at the Resource-ID corresponding to the AOR. This Fetch SHOULD NOT indicate any dictionary keys, so that it will fetch all the stored values.
2. If any of the results of the Fetch are non-GRUU AORs, then repeat step 1 for that AOR.
3. Once only GRUUs and destination lists remain, the peer removes duplicate destination lists and GRUUs from the list and initiates SIP or SIPS connections to the appropriate peers as described in the following sections. If there are also external AORs, the peer follows the appropriate procedure for contacting them as well.
### 5. Forming a Direct Connection
#### 5.1. Setting Up a Connection
Once the peer has translated the AOR into a set of destination lists, it then uses the overlay to route AppAttach messages to each of those peers. The "application" field MUST be either 5060 to indicate SIP or 5061 for using SIPS. If certificate-based authentication is in use, the responding peer MUST present a certificate with a Node-ID matching the terminal entry in the destination list. Otherwise, the
connection MUST NOT be used and MUST be closed. Note that it is possible that the peers already have a RELOAD connection mutually established. This MUST NOT be used for SIP messages unless it is a SIP connection. A previously established SIP connection MAY be used for a new call.
Once the AppAttach succeeds, the peer sends plain or (D)TLS encrypted SIP messages over the connection as in normal SIP. A caller MAY choose to contact the callee using SIP or SIPS, but SHOULD follow a preference indicated by the callee in its contact_prefs attribute (see Section 3.2). A callee MAY choose to listen on both SIP and SIPS ports and accept calls from either SIP scheme, or select a single one. However, a callee that decides to accept only SIPS calls SHOULD indicate its choice by setting the corresponding attribute in its contact_prefs. It is noteworthy that according to [RFC6940] all overlay links are built on (D)TLS secured transport. While hop-wise encrypted paths do not prevent the use of plain SIP, SIPS requires protection of all links, which may include client links (if present), as well as endpoint certificates.
SIP messages carry the SIP URIs of actual overlay endpoints (e.g., "sip:alice@dht.example.com") in the Via and Contact headers, while the communication continues via the RELOAD connection. However, a UA can redirect its communication path by setting an alternate Contact header field like in ordinary SIP.
#### 5.2. Keeping a Connection Alive
In many cases, RELOAD connections will traverse NATs and Firewalls that maintain states established from ICE [RFC5245] negotiations. It is the responsibility of the Peers to provide sufficiently frequent traffic to keep NAT and Firewall states present and the connection alive. Keepalives are a mandatory component of ICE (see Section 10 of [RFC5245]) and no further operations are required. Applications that want to assure maintenance of sessions individually need to follow regular SIP means. Accordingly, a SIP Peer MAY apply keep-alive techniques in agreement with its transport binding as defined in Section 3.5 of [RFC5626].
### 6. Using GRUUs
Globally Routable User Agent URIs (GRUUs) [RFC5627] have been designed to allow direct routing to a specific UA instance without the need for dereferencing by a domain-specific SIP proxy function. The concept is transferred to RELOAD overlays as follows. GRUUs in RELOAD are constructed by embedding a base64-encoded destination list in the "gr" URI parameter of the GRUU. The base64 encoding is done
with the alphabet specified in table 1 of [RFC4648] with the exception that ~ is used in place of =.
Example of a RELOAD GRUU:
alice@example.com;gr=MDEyMzQ1Njc4OTAxMjM0NTY3ODk~
GRUUs do not require storing data in the Overlay Instance. Rather, when a peer needs to route a message to a GRUU in the same P2P overlay, it simply uses the destination list and connects to that peer. Because a GRUU contains a destination list, it can have the same contents as a destination list stored elsewhere in the resource dictionary.
Anonymous GRUUs [RFC5767] are constructed analogously, but require either that the enrollment server issues a different Node-ID for each anonymous GRUU required, or that a destination list be used that includes a peer that compresses the destination list to stop the Node-ID from being revealed.
### 7. SIP-REGISTRATION Kind Definition
This section defines the SIP-REGISTRATION Kind.
- **Name:** SIP-REGISTRATION
- **Kind IDs:** The Resource Name for the SIP-REGISTRATION Kind-ID is the AOR of the user as specified in Section 2. The data stored is a SipRegistration, which can contain either another URI or a destination list to the peer which is acting for the user.
- **Data Model:** The data model for the SIP-REGISTRATION Kind-ID is dictionary. The dictionary key is the Node-ID of the storing peer. This allows each peer (presumably corresponding to a single device) to store a single route mapping.
- **Access Control:** USER-NODE-MATCH. Note that this matches the SIP AOR against the rfc822Name in the X509v3 certificate. The rfc822Name does not include the scheme so that the "sip:" prefix needs to be removed from the SIP AOR before matching. Escaped characters ('%' encoding) in the SIP AOR also need to be decoded prior to matching (see [RFC3986]).
Data stored under the SIP-REGISTRATION Kind is of type SipRegistration. This comes in two varieties:
- **sip_registration_uri**
a URI which the user can be reached at.
- **sip_registration_route**
a destination list which can be used to reach the user’s peer.
### 8. Security Considerations
#### 8.1. RELOAD-Specific Issues
This Usage for RELOAD does not define new protocol elements or operations. Hence, no new threats arise from message exchanges in RELOAD.
This document introduces an AOR domain restriction function that must be enforced by the storing peer. A misconfigured or malicious peer could issue illegitimate storing requests that lead to frequent rejections. However, domain name control relies on lightweight pattern matching and can be processed prior to validating certificates. Hence, no extra burden is introduced for RELOAD peers beyond loads already present in the base protocol.
#### 8.2. SIP-Specific Issues
##### 8.2.1. Fork Explosion
Because SIP includes a forking capability (the ability to retarget to multiple recipients), fork bombs (i.e., attacks using SIP forking to amplify the effect on the intended victims) are a potential DoS concern. However, in the SIP usage of RELOAD, fork bombs are a much lower concern than in a conventional SIP Proxy infrastructure, because the calling party is involved in each retargeting event. It can therefore directly measure the number of forks and throttle at some reasonable number.
##### 8.2.2. Malicious Retargeting
Another potential DoS attack is for the owner of an attractive AOR to retarget all calls to some victim. This attack is common to SIP and difficult to ameliorate without requiring the target of a SIP registration to authorize all stores. The overhead of that requirement would be excessive and in addition there are good use cases for retargeting to a peer without its explicit cooperation.
##### 8.2.3. Misuse of AORs
A RELOAD overlay and enrollment service that liberally accept registrations for AORs with domain names unrelated to the overlay instance, and do so without further authorisation, will eventually store presence state for misused AORs. An attacker could hijack names, register a bogus presence, and attract calls intended for a victim that resides within or outside the Overlay Instance.
A hijacking of AORs can be mitigated by restricting the name spaces admissible in the Overlay Instance, or by additional verification actions of the enrollment service. To prevent an (exclusive) routing to a bogus registration, a caller can in addition query the DNS (or other discovery services at hand) to search for an alternative presence of the callee in another overlay or a normal SIP infrastructure.
##### 8.2.4. Privacy Issues
All RELOAD SIP registration data is visible to all nodes in the overlay. Location privacy can be gained from using anonymous GRUUs. Methods of providing anonymity or deploying pseudonyms exist, but are beyond the scope of this document.
### 9. IANA Considerations
#### 9.1. Data Kind-ID
IANA shall register the following code point in the "RELOAD Data Kind-ID" Registry (cf., [RFC6940]) to represent the SIP-REGISTRATION Kind, as described in Section 7. [NOTE TO IANA/RFC-EDITOR: Please replace RFC-AAAA with the RFC number for this specification in the following list.]
| Kind             | Kind-ID | RFC      |
|------------------|---------|----------|
| SIP-REGISTRATION |         | RFC-AAAA |
#### 9.2. XML Name Space Registration
This document registers the following URI for the config XML namespace in the IETF XML registry defined in [RFC3688]
Registrant Contact: The IESG
XML: N/A, the requested URI is an XML namespace
### 10. Acknowledgments
This document was generated in parts from initial drafts and discussions in the early specification phase of the P2PSIP base protocol. Significant contributions (in alphabetical order) were from David A. Bryan, James Deverick, Marcin Matuszewski, Jonathan Rosenberg, and Marcia Zangrilli, which is gratefully acknowledged.
Additional thanks go to all those who helped with ideas, discussions, and reviews, in particular (in alphabetical order) Roland Bless, Michael Chen, Alissa Cooper, Marc Petit-Huguenin, Brian Rosen, Meral Shirazipour, and Matthias Waehlisch.
### 11. References
#### 11.1. Normative References
#### 11.2. Informative References
[I-D.ietf-p2psip-concepts]
[I-D.ietf-p2psip-share]
### Appendix A. Third Party Registration
In traditional SIP, the mechanism of a third party registration (e.g., an assistant acting for a boss, or changing users registering a role-based AOR) is defined in Section 10.2 of [RFC3261]. This is a REGISTER that uses the URI of the third party in its From header and cannot be translated directly into a P2PSIP registration, because only the owner of the certificate can store a SIP-REGISTRATION in a RELOAD overlay.
A way to implement third party registration is by using the extended access control mechanism USER-CHAIN-ACL defined in [I-D.ietf-p2psip-share]. Creating a new Kind "SIP-3P-REGISTRATION" that is ruled by USER-CHAIN-ACL allows the owner of the certificate to delegate the right for registration to individual third parties. In this way, original SIP functionality can be regained without weakening the security control of RELOAD.
### Appendix B. Change Log
#### B.1. Changes since draft-ietf-p2psip-sip-09
o Added subsection on keepalive
o Updated references
#### B.2. Changes since draft-ietf-p2psip-sip-08
o Added the handling of SIPS
o Specified use of Posix regular expressions in configuration document
o Added IANA registration for namespace
o Editorial polishing
o Updated and extended references
#### B.3. Changes since draft-ietf-p2psip-sip-07
o Cleared open issues
o Clarified use cases after WG discussion
o Added configuration document extensions for configurable domain names
o Specified format of contact_prefs
o Clarified routing to AORs
o Extended security section
o Added Appendix on Third Party Registration
o Added IANA code points
o Editorial polishing
o Updated and extended references
#### B.4. Changes since draft-ietf-p2psip-sip-06
o Added Open Issue
### Authors’ Addresses
Cullen Jennings
Cisco
170 West Tasman Drive
MS: SJC-21/2
San Jose, CA 95134
USA
Phone: +1 408 421-9990
Email: fluffy@cisco.com
Bruce B. Lowekamp
Skype
Palo Alto, CA
USA
Email: bbl@lowekamp.net
Eric Rescorla
RTFM, Inc.
2064 Edgewood Drive
Palo Alto, CA 94303
USA
Phone: +1 650 678 2350
Email: ekr@rtfm.com
Salman A. Baset
Columbia University
1214 Amsterdam Avenue
New York, NY
USA
Email: salman@cs.columbia.edu
Henning Schulzrinne
Columbia University
1214 Amsterdam Avenue
New York, NY
USA
Email: hgs@cs.columbia.edu
HAL Id: hal-01873526
https://inria.hal.science/hal-01873526
Submitted on 13 Sep 2018
Reducing Global Schedulers’ Complexity Through Runtime System Decoupling
Alexandre de Limas Santana, Vinicius de Freitas, Márcio Castro, Laércio Lima Pilla, Jean-François Méhaut
Abstract
Global schedulers are components used in parallel solutions, especially in dynamic applications, to optimize resource usage. Nonetheless, their development is a cumbersome process due to necessary adaptations to cope with the programming interfaces and abstractions of runtime systems. This paper proposes a model to dissociate schedulers from runtime systems to lower software complexity. Our model is based on the scheduler breakdown into modular and reusable concepts that better express the scheduler requirements. Through the use of meta-programming and design patterns, we were able to achieve fully reusable workload-aware scheduling strategies with up to 63% fewer lines of code and negligible run time overhead.
1 Introduction
The efforts of the high performance computing community to provide advances in parallel components, programming models and architectural design have led to solutions able to reach unprecedented computational landmarks. Unavoidably, future parallel components will be required to seamlessly benefit from improvements in multiple scientific fronts, preferably with low re-implementation efforts. Of special interest, within the context of dynamic applications, are global schedulers. They are specialized resource management components required to guarantee an adequate allocation of resources in a parallel solution. For that, they must be aware of parallelism intricacies in order to distribute the application workload among available processing elements (PEs).
Scheduling strategies may consider different information, like topology data [Hoefler et al. 2014], power consumption [Langer et al. 2015], or communication affinity [Jeannot et al. 2014, Cruz et al. 2015] to achieve their goals. As applications and scheduling strategies became more complex, runtime systems (RTS) such as Charm++ [Kale et al. 2007], OpenMP [Chapman et al. 2008] and OpenACC [Wienke et al. 2012], have been applied as containers and frameworks to simplify the development of applications’ parallel behavior and their relationship with global schedulers. As a result, these systems provide reusability to their components and provide a beneficial disconnection of application and scheduler code.
A RTS depends on software hooks to assemble components into a parallel solution, and common approaches are strict APIs and code annotations. As a consequence, RTS components are required to be adapted to the system's workflow and parallelism abstractions (e.g., threads, tasks, chares). However, scheduler components are composed of algorithms that each have their own functional requirements but do not depend on parallelism or data abstractions. As such, by enforcing such traits through strict software hooks, global schedulers' software becomes bloated with adaptations, becoming more complex and less resilient to system modifications.
As novel parallel platforms are proposed, larger portions of RTSs are dedicated to exploiting their particularities to maximize applications' performance. The exploitation of individual traits in parallel solutions leads to an increase in software complexity and component specialization, ultimately limiting their reusability [Dongarra et al. 2005]. Classic techniques such as aspect-oriented programming [Kiczales et al. 1997] and component-based software engineering [Heineman and Council 2001] have been used to compose very large systems¹ based on reusable components. However, due to possible resource competition among parallel segments of code, these techniques cannot be directly applied [Grossman et al. 2017]. We believe that the lack of studies on how to properly compose global schedulers with other components and RTSs will eventually result in bloated systems that are too complex to manage and challenging to port to future parallel programming models and tools.
To counteract this problem, we propose to exploit the sequential execution flow within RTSs to extract the scheduler component into a self-contained module. Isolated from its context, we are able to create a system-independent global scheduler model based on reusable and specialized concepts. As a result, this model can be used to implement scheduling policies that are less complex due to their isolation from specific technologies, RTSs and external scheduling-unrelated libraries. To achieve these results without high overheads, our proposal is based solely on modern language meta-programming facets and the Adapter Design Pattern [Vlissides et al. 1994] to link smaller segments of code into a global scheduler.
We evaluated our proposed model by comparing two re-implemented versions of scheduling policies from Charm++ and OpenMP against the original versions native to these systems. Our global scheduler implementations are independent of the RTS, which requires them to be packaged in external containers. Therefore, a global scheduler library called _Meta-programming-Oriented Global Scheduler Library_ (MOGSLib) was developed to portray a collection of reusable scheduling concepts that can be assembled to form system-specific global schedulers. The main focus of our experimental analysis is the comparison between identical scheduling strategies to evaluate discrepancies in performance, complexity and reusability. The proposed model was able to achieve the original behavior of the strategies in regards to application execution time, strategy decision time, and schedule quality, while also lowering the number of lines of code (LoC) needed to express schedulers.
The remainder of this paper is structured as follows: Section 2 presents our global scheduler model. Section 3 describes our experiments. Section 4 discusses our results and analysis. Section 5 presents related work. Finally, Section 6 concludes this paper.

¹Systems featuring millions of source lines of code.
2 System-Independent Scheduler Model
The implementation of a global scheduling algorithm depends on a multitude of factors and design choices aside from the scheduling policy, such as: (i) data structure selection, (ii) third-party library usage, (iii) memory placement (e.g., Data- or Object-Oriented Design) and (iv) target RTS. These decisions are important as they provide optimizations for a scheduler in regards to its target environment. However, each design decision further specializes the global scheduler implementation and limits its reusability as a whole.
Regardless of the different designs an implementation can portray, each runtime system and library also offers its own set of capabilities for schedulers (e.g., the load balancing database in Charm++ [Kale et al. 2007]). The divergent interfaces among tools result in different implementations even when accessing a common functionality in distinct systems. As a consequence, modifications in scheduling policies are required when experimenting with different design choices (RTS, data structures, libraries, etc.).
The contemporary relationship between runtime systems and global schedulers is depicted in Figure 1. This figure expresses the dependencies (directed arrows) from software abstractions (inner boxes) to components within their context (outer boxes). As such, the problematic relationships are characterized by dependencies that span out of the component's source context. Those relationships require unrelated code to be injected into a component, further binding its implementation and increasing its complexity as its code grows.
Figure 1: Simplification of parallel components’ dependencies.
As a practical example of the aforementioned problem, we propose a scenario where a developer would implement a workload-aware scheduling policy. In this scenario, the scheduler requires the application data regarding its tasks' workload. An implementation of this policy in the OpenMP loop-scheduling interface would rely on user-provided data, as OpenMP has neither a method in its scheduling API to query the application workload, nor a method to provide it through the application API. On the other hand, Charm++ presents data structures on its scheduling API that contain these and other data dynamically collected by the system. Regardless of the RTS, the scheduling policy must query its required data from some source. As frameworks for developing global schedulers on these systems must be as flexible as possible, the same scheduling API and data structures are exposed to all policies to obtain their own set of data. This design forces scheduling policies to contain scheduling-unrelated code responsible for manipulating RTS structures to fulfill their functional requirements. Nonetheless, we envision that the exposure of a global scheduler's requirements through scheduling concepts is a solution that not only simplifies the development of schedulers but also isolates the policy code from external functionalities.
### 2.1 Scheduling Concepts
Scheduling concepts are code segments which provide scheduling-unrelated functionalities that may be specialized for different RTSs or contexts. Different specializations of a concept must express its functionality through functions with identical names and syntax. However, each specialization must be sensitive to its target environment in order to call the correct procedures needed to fulfill its functionality in the target RTS, platform or library.
The adapter design pattern is a software modeling technique that fits the aforementioned characteristics of scheduling concepts. As an example, a Unified Modeling Language (UML) class diagram is displayed in Figure 2 depicting classes representing both an abstract concept and the specialized concepts for the functionality of querying the application's workload. Both Static Workload and Dynamic Workload represent specialized concepts for accessing the application workload with different semantics: the former gathers static data, while the latter obtains dynamic workload data through RTS data structures. Finally, both classes are implementations of the Application Workload interface, which defines a layer of functions for accessing the specialization's methods.
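A minimal C++ sketch of this adapter-pattern layout is shown below; the class names mirror Figure 2, while the Load alias and method signatures are illustrative assumptions rather than MOGSLib code:
```
#include <utility>
#include <vector>

using Load = double;

// Abstract concept: how a scheduler obtains the application workload.
class ApplicationWorkload {
public:
  virtual ~ApplicationWorkload() = default;
  virtual std::vector<Load> workloads() = 0;
};

// Specialization backed by statically provided (e.g., user-informed) data.
class StaticWorkload : public ApplicationWorkload {
  std::vector<Load> loads_;
public:
  explicit StaticWorkload(std::vector<Load> loads) : loads_(std::move(loads)) {}
  std::vector<Load> workloads() override { return loads_; }
};

// Specialization that would query dynamically collected RTS statistics.
class DynamicWorkload : public ApplicationWorkload {
public:
  std::vector<Load> workloads() override {
    // Placeholder for an RTS-specific query (e.g., a load-balancing database).
    return {};
  }
};
```
A policy written against ApplicationWorkload is unaffected when the data source changes, at the cost of virtual dispatch; this is precisely the overhead that the template-based approach described next avoids.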

Although there is the possibility of implementing scheduling concepts solely through the adapter pattern, its usage incurs overheads as it relies on runtime type checks and virtual functions. We propose that scheduling policies can be better declared as partially defined template structures that depend on auxiliary data-types to implement functions that are sensitive to a given context. As template structures, scheduling concepts must be attached to a data-type that contains all the methods the concept requires during compilation. This way, both the scheduling concept and the specialized structure that provides its functionality are loosely linked until compilation, when data-types must be resolved. After the compilation, a direct association links both structures to construct a concrete scheduling concept that can provide its functionality without virtual calls nor dynamic type checks, avoiding their overhead.
To exemplify the proposed approach, we present a snippet in Figure 3, which showcases a concept that exposes the workload of an application. In the snippet, lines 1-6 portray the declaration of a concept that depends on a Concrete data-type (line 1) to properly provide its functionality through the workloads method (line 5). In lines 8-20, two classes that contain all the required methods to be a Concrete type for the WorkloadConcept are presented. The first class (lines 10-19) represents a class that packages the semantics to obtain the application's workload from the Charm++ RTS. The WorkloadCharm is capable of querying the load balancing database contained in Charm++ (lines 13-15) and obtaining the workload data from its parallelism abstractions, represented by structures named chares. On the other hand, the second class (lines 21-30) portrays an auxiliary data structure for the OpenMP RTS that registers the application's workload data informed by the user (lines 17-19). With those definitions, a scheduling policy can make use of two complete WorkloadConcept instantiations, one for the Charm++ system and another for OpenMP.
depending on the Concrete template parameter. The advantage of this approach is that the concept is entirely responsible for gathering, storing, and manipulating the data structures for exposing its functionality. Ultimately, this design allows for a less complex scheduling strategy that requires no changes if the semantics of acquiring the application workload is changed.
```
1 template<typename Concrete>
2 class WorkloadConcept {
3 public:
4 Concrete data;
5 Load * workloads() { return data.workloads(); }
6 };
```
Figure 3: Meta-programmed scheduling concept.
```
1 template<typename ... Concepts>
2 class Scheduler {
3 public:
4 TaskMap work(Tuple<Concepts> concepts);
5 };
6
7 template<typename Loads, typename PEs>
8 class Greedy :
9 public Scheduler<Loads, PEs> {
10 TaskMap work(Tuple<Loads, PEs> concepts);
11 };
12
```
Figure 4: Global scheduler model abstraction.
Our approach of using modular and smaller scheduling concepts that compose a larger component displays advantages beyond alleviating the software complexity. Through the addition of dummy classes (like the one in the example) containing testing workloads, it is possible to validate a scheduling policy independently from applications or RTSs. That way, we can find flaws in the implementation code at early stages of prototyping more precisely.
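For instance, a hedged sketch of that testing idea, restating the WorkloadConcept template from Figure 3 and pairing it with a hypothetical TestWorkload dummy type, could look as follows:
```
#include <cassert>

using Load = double;

// Restated from Figure 3: the concept forwards to a Concrete provider.
template<typename Concrete>
class WorkloadConcept {
public:
  Concrete data;
  Load * workloads() { return data.workloads(); }
};

// Dummy provider used only for testing: a fixed, known workload.
struct TestWorkload {
  Load loads[4] = {4.0, 3.0, 2.0, 1.0};
  Load * workloads() { return loads; }
};

int main() {
  // The concept (and any policy built on it) runs without any RTS attached.
  WorkloadConcept<TestWorkload> workload;
  assert(workload.workloads()[0] == 4.0);
  return 0;
}
```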
### 2.2 Global Scheduler Model
Similar to the process of declaring a scheduling concept, a global scheduler can be declared as a template structure that depends on one or multiple scheduling concepts. A C++ example of this approach is depicted in Figure 4. Lines 1-6 serve as a declaration of the scheduler template model. The first line states that the Scheduler template will require an arbitrary number of parameters. The collection of parameters forms the Concepts type which is used in line 4 to construct the work() function signature. Moreover, as seen in lines 7-12, every scheduler policy has a specialized work() function signature that depends on its requirements rather than being defined by an external API.
### 2.3 The Role of MOGSLib
As concepts are defined as incomplete template structures, there must be concrete classes capable of providing the necessary functionalities for the
concepts. These structures must be sensitive to the parallel solution’s contexts (the target RTS, chosen libraries and execution environment) but they should remain decoupled from those. Our proposal is to encapsulate this software stack into a library that exposes a configuration interface that can be easily composed into different contexts. Our implementation of such library is the Meta-programming-Oriented Global Scheduler Library (MOGSLib), an extensible and open-source library developed in C++14.
A global scheduler in MOGSLib is represented by a tuple \((P, F, S)\) where \(P\) is the scheduling policy, \(F\) is a set of concrete scheduling concepts and \(S\) is a target context. Through this representation, it is possible for a given scheduling policy \(P_i\) to generate different global schedulers by utilizing different concrete concepts or being targeted to a different context. As reusability is encouraged, previously developed functionalities can be used to compose new global schedulers, reducing the effort to develop them (coding, testing, etc.) and providing better reproducibility in scientific experiments.
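In code, this tuple amounts to a compile-time binding of a policy to concrete concepts; the following self-contained sketch uses hypothetical names (RoundRobin, CharmLikeWorkload, FixedPEs) purely for illustration and is not part of MOGSLib:
```
#include <cstddef>
#include <vector>

using Load = double;
using TaskMap = std::vector<int>;  // task index -> PE index

// F: concrete concepts, here trivial stand-ins implying a target context S.
struct CharmLikeWorkload { std::vector<Load> loads() const { return {3.0, 1.0, 2.0}; } };
struct FixedPEs          { int count() const { return 2; } };

// P: a scheduling policy templated on the concepts it requires.
template<typename Workload, typename PEs>
struct RoundRobin {
  TaskMap work(const Workload &w, const PEs &p) const {
    TaskMap map;
    int pe = 0;
    for (std::size_t i = 0; i < w.loads().size(); ++i)
      map.push_back(pe++ % p.count());
    return map;
  }
};

// The (P, F, S) tuple collapses into a single concrete scheduler type.
using ExampleScheduler = RoundRobin<CharmLikeWorkload, FixedPEs>;
```
Swapping CharmLikeWorkload for an OpenMP-oriented concept would yield a different global scheduler from the same policy P.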
A careful explanation of how the library operates is out of the scope of this work, and interested readers are encouraged to check its public repository. Nonetheless, an overview of MOGSLib components and their interactions with external technologies is provided in Figure 5, where the components are denoted by labeled boxes and their dependencies by directed arrows. Objectively, MOGSLib is attached to the RTS and uses pre-compilation scripts to prompt the user for compilation and template parameters, ultimately generating a global scheduler that can interoperate with external RTSs and libraries (e.g., Load

3 Experiments
In order to obtain precise information about overhead, we chose to implement centralized and greedy scheduling policies due to their predictable quasi-linear execution time. This class of schedulers is capable of making fast and precise decisions in smaller scenarios while displaying little variations in policy execution time when receiving the same input data. As such, greedy schedulers are ideal to observe small overhead variations between different global scheduler models.
In this work, we selected two greedy policies implemented within runtime systems to re-implement in our model and compare against their native versions. Those policies are: (i) Charm++’s native greedy scheduler (GreedyLB), and (ii) a workload-aware loop scheduler implemented in libGOMP (BinLPT) [Penna et al. 2016]. The GreedyLB strategy iteratively pulls tasks from a task load max-heap and assigns them to the top element of a PE load min-heap until there are no more unassigned tasks. The loop scheduler BinLPT groups adjacent iterations of a loop in up to $k$ task packs (defined by the user) and iteratively assigns the heaviest group to the least overloaded PE. Although different, these strategies share the same scheduling concept requirements, which also serves as an example of code reuse. The required scheduling concepts are: (i) application workload data retrieval and (ii) PE workload data retrieval.

---

2 Available at: [https://github.com/ECLScheduling/1b-framework](https://github.com/ECLScheduling/1b-framework)
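To make the greedy assignment described above concrete, the following compact sketch uses standard heaps; it is an illustrative restatement of the GreedyLB-style loop, not the Charm++ or MOGSLib source:
```
#include <functional>
#include <queue>
#include <utility>
#include <vector>

using Load = double;

// Greedy list scheduling: repeatedly give the heaviest remaining task
// to the currently least-loaded processing element (PE).
std::vector<int> greedy_assign(const std::vector<Load> &tasks, int num_pes) {
  std::vector<int> mapping(tasks.size(), 0);

  // Max-heap of (task load, task id).
  std::priority_queue<std::pair<Load, int>> task_heap;
  for (int t = 0; t < static_cast<int>(tasks.size()); ++t)
    task_heap.push({tasks[t], t});

  // Min-heap of (accumulated load, PE id).
  using PeEntry = std::pair<Load, int>;
  std::priority_queue<PeEntry, std::vector<PeEntry>, std::greater<PeEntry>> pe_heap;
  for (int p = 0; p < num_pes; ++p)
    pe_heap.push({0.0, p});

  while (!task_heap.empty()) {
    std::pair<Load, int> task = task_heap.top(); task_heap.pop();
    PeEntry pe = pe_heap.top(); pe_heap.pop();
    mapping[task.second] = pe.second;        // assign task to least-loaded PE
    pe_heap.push({pe.first + task.first, pe.second});
  }
  return mapping;
}
```
BinLPT differs mainly in that it first packs adjacent loop iterations into at most k groups and then applies the same heaviest-to-least-loaded rule to the groups.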
### 3.1 Evaluated Metrics
Given the intent of analyzing a development model rather than a novel scheduling policy, our metrics have the intent of spotting differences between implementations and are enumerated as follows: (i) strategy decision time, (ii) application makespan, (iii) global scheduler lines of code (LoC) and (iv) number of reusable LoC. The time related metrics have the objective of measuring the overhead incurred by our model both in application makespan and in strategy decision time. The LoC metric serves as an indicator of the code complexity as fewer lines point to less complex segments of code [Nguyen et al. 2007].
### 3.2 Software and Hardware
In order to compare our model against native implementations, our evaluation contemplates the Charm++ v6.7 and OpenMP v4.0 runtime systems. The MOGSLib library, Charm++ runtime and benchmarks were compiled with g++ v5.4.0 with the following compilation flags: -O3 -std=c++14. Finally, the libGOMP library was compiled with its own makefile found in its aforementioned repository with the gcc compiler without additional flags.
To test the greedy strategy in Charm++, we chose the synthetic benchmark contained within the default Charm++ package, LB Test, an iterative application that issues busy wait operations to simulate the workload. The benchmark was executed with different configurations in order to discover a parameter set that displays enough load imbalance to benefit from a global scheduler. To create this scenario, the following LB Test configurations were applied: (i) **Iterations**: 150, (ii) **Load balancing calls**: every 40 iterations, (iii) **Minimal task load**: 10 microseconds, (iv) **Maximum task load**: 3000 microseconds. To analyze the schedulers’ scalability under different numbers of tasks, we ran this experiment with 300, 600, 900 and 1,200 tasks.
The tests using the OpenMP runtime system were executed over a modified version of libGOMP, the library responsible for providing the OpenMP directives implementations for open-source compilers. The modified version of libGOMP contains the required hooks for both MOGSLib and BinLPT and can be found in GitHub.
We test the BinLPT scheduler with the SimSched synthetic benchmark. This application simulates CPU-intensive kernels, utilizing statistical distributions to generate random classes of workload that are later assigned to loop iterations. The parameters for the SimSched benchmark used in this paper were selected in conformity with the BinLPT paper. Their objective is to create a scenario that better fits the use case of this global scheduler, and they are configured as follows: (i) **Distribution**: exponential, (ii) **Number of workload classes**: 12, (iii) **Kernel complexity**: quadratic. The necessary modifications to support BinLPT in the OpenMP system, the SimSched benchmark details, and the parameters to test workload-aware scheduling in it are explained in [Penna et al. 2016].
Our experiments were executed on the Genepi\(^5\) cluster within the Grid’5000 distributed environment. Furthermore, our Charm++ tests were executed over 4 nodes whereas the OpenMP tests used only one.
4 Results
The results of our experiments in regards to total application execution time are provided in Figure 6(a) for the Charm++ system and Figure 6(b) for OpenMP. Each bar in those figures represents the arithmetic mean of 50 application runs. In order to further analyze the application execution time, we executed two-tailed Student's \(t\)-tests to check if both scheduler versions (native and MOGSLib) were originated from a distribution with the same parameters. The significance level was set to 5\% and \(p\)-values are displayed in Table 1.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Charm++ environment} & \textbf{OpenMP environment} \\
\hline
\textbf{Task Count} & \\
\hline
300 & 0.96 \\
600 & 0.55 \\
900 & 0.18 \\
1200 & 0.43 \\
\hline
\end{tabular}
\end{table}
As the experiments generated \(p\)-values that exceed the significance level of 0.05, we cannot reject the null hypothesis that both distributions are equal. This conclusion implies that both the native and the MOGSLib schedulers are able to perform equally on the different tested applications, runtime systems and application sizes.
### 4.1 Schedule Decision Time Analysis
In order to analyze both implementations in detail, we measured the time taken to decide the task mapping in each of the aforementioned scenarios, shown in Figures 7(a) and 7(b). The bars depicted in these figures represent the arithmetic mean of the time each strategy took to decide a schedule. Therefore, in the Charm++ experiments, each bar represents 300 data points (3 schedules computed for each of the 50 runs). Moreover, in the OpenMP experiments, each bar is composed of 50 data points, as the scheduler is called before the loop and there is only one loop per application kernel.
\(^5\)Genepi complete system specification at: https://www.grid5000.fr/mediawiki/index.php/Grenoble: Hardware#genepi
The scheduler decision time data depicted in Figures 7(a) and 7(b) presented standard deviations smaller than 1%. Furthermore, our model portrayed decision times that were 45% and 18% faster on the Charm++ and OpenMP systems, respectively. However, for these tests, the impact of the scheduler decision time is negligible due to its scale (in microseconds) in comparison to the application makespan (in seconds). This overhead would be more important in scenarios with more tasks or a higher rescheduling frequency. With such a small time scale, differences between implementations were bound to happen, and their origin is related to different parameters between schedulers. In Charm++, generic data structures were used in our model in contrast to the ones used by the Charm++ scheduler. Moreover, the only difference between the schedulers in OpenMP was the compiler used to generate the MOGSLib global scheduler. While libGOMP is compiled through gcc, MOGSLib used g++ to compile and link its scheduler into OpenMP.
### 4.2 Complexity Analysis
To better analyze our model in contrast to native implementations, we break our approach in different components that form the global scheduler. Each component’s lines of code count is displayed in Table 2, with the last column designated to portray where the component can be reused within other parallel solutions.
<table>
<thead>
<tr>
<th>Component</th>
<th>BinLPT</th>
<th>GreedyLB</th>
<th>Reusable on</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scheduling Policy</td>
<td>37</td>
<td>30</td>
<td>Runtime Systems</td>
</tr>
<tr>
<td>Runtime System Adapter</td>
<td>60</td>
<td>22</td>
<td>Scheduling Policies</td>
</tr>
<tr>
<td>Concepts</td>
<td>30</td>
<td>40</td>
<td>Scheduling Policies</td>
</tr>
</tbody>
</table>
The native versions of BinLPT and GreedyLB are composed, respectively, of 84 and 81 LoC. Our versions of those same schedulers are composed of 127 and 97 LoC, respectively, when accounting for the sum of components that assemble the scheduler. However, every component can be reused in at least one scenario, as stated in the last column of Table 2. In the scenario where a new scheduler is proposed and the concepts and RTS adapter have been previously developed, the only required implementation is the scheduling policy. This scenario is not uncommon, as the concepts can be reused and novel policies are encouraged to use existing system adapters and scheduling concepts. As such, when analyzing solely the size of the scheduling policy code, our model attained up to a 63% size reduction in comparison to the native versions.
The segmentation into concepts is advantageous as each concept portrays a single role within the scheduler, in contrast to current implementations found in RTSs. Despite resulting in a larger sum of LoC, this approach enables the composition of functionalities implemented by developers with different expertise. That way, developers can rely on reusing concrete concepts rather than reimplementation, leaving the burden of assembling the functionalities to be taken care of by libraries such as MOGSLib.
5 Related Work
Schedulers are a relevant topic in real-time systems due to application diversity which, ultimately, demands specialized policies. Classically, schedulers are kernel components and, as such, developed policies are tied to a specific patch and OS. Furthermore, to enable higher customization and enhance the range of scheduling policies, Asberg [Asberg et al. 2012] and Mollison [Mollison and Anderson 2013] moved the
policy implementations from kernel space to user space. Through the use of an abstraction layer inside the kernel space, their work showcased policies developed over higher level abstractions with acceptable overhead in hard real-time systems. Ultimately, both proposals used different techniques to decouple the schedulers from the kernel primitives. However, in concordance with our proposal, they also proposed the extraction of the scheduler component into its own module.
With respect to providing modularity to scheduler components, HPC runtime systems like Charm++, OpenMP and OpenACC provide a simple way of decoupling application and scheduler code. Either through annotations, abstractions or language modifications, these systems allow resource management hooks in the application's life cycle, thus enabling reuse and portability of scheduling policies among applications within the same system. Nonetheless, scheduler implementations are still limited to a specific abstraction set, contain system-specific code, and their algorithms are often scattered through different segments of the RTS. Ultimately, our approach intends to benefit from these RTSs while alleviating the scheduling implementation problems associated with their usage.
Grossman et al. [Grossman et al. 2017] proposed that, through a better description of a component's connections, the composability of parallel libraries can be achieved through modern language facets such as lambda functions and asynchronous calls. Their work is oriented towards the development of a novel runtime system that showcases component connections using lambda functions. Our work likewise relies on modern language traits and a better description of components' connections. However, we apply meta-programming and ultimately target system independence rather than proposing a novel system architecture.
Bigot [Bigot et al. 2012] defined parallel solutions as an assembly of components that can be interconnected and swapped to provide performance portability. The work is based on performance portability through modularity and applies driver components to resolve the intricacies of specific systems and technologies. Despite the similarities, our work diverges in the technique applied and context within the parallel solution. Ultimately, their approach presents a component-based model for applications that utilizes a small runtime to link components together, whereas our work presents an attachable scheduler model that can be linked to runtime systems through a library in compilation time.
6 Conclusions
In order to ease the development of global schedulers and enable simpler scheduling policy implementations, this paper contributes to the topic by exposing a global scheduler model capable of describing its requirements through meta-programmed scheduler concepts. We discuss the impacts of reusability, modularity and software complexity on parallel components while we present a novel library, MOGSLib, as a technical contribution for global scheduler developers. The evaluation of the proposed model is made through synthetic benchmarks executed on Charm++ and OpenMP systems analyzing two distinct workload-aware scheduling policies, GreedyLB and BinLPT.
Our results (presented in Section 4) showed that the model incurs negligible variations in schedule quality and application makespan. Additionally, in the best-case scenario, our approach can reduce the number of LoC needed to develop a new global scheduler by up to 63% when reusing previously implemented scheduling concepts. The possibility of component reuse is beneficial, as it enables code reuse without additional development effort and yields less complex software segments. Even in the worst-case scenario, where concepts must be implemented from scratch, this approach allows for a better prototyping phase that can adhere to test-oriented development, since each module is responsible for a single role in the system.
In regards to future work, we intend to further study the proposed model, experimenting with its adoption in different scheduling policies. Of special interest are strategies that take into account information about the platform topology, task affinity and memory hierarchy. As more functionalities are required for different policies, we aim to enhance MOGSLib with more concrete scheduling concepts and system adapters, providing developers with more tools and options to compose simple and reusable global schedulers.
ACKNOWLEDGMENT
This work was partially supported by the Brazilian Federal Agency for the Support and Evaluation of Graduate Education (CAPES) and by the Brazilian Council of Technological and Scientific Development (CNPq), project grant 401266/2016-8.
Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by INRIA and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
References
THE STRUCTURAL ANALYSIS OF PROGRAMMING LANGUAGES
B. J. MacLennan
UNCLASSIFIED NPS-81-009
THE STRUCTURAL ANALYSIS
OF PROGRAMMING LANGUAGES
B. J. MacLennan
September 1981
Approved for public release; distribution unlimited
Prepared for:
Chief of Naval Research
Arlington, VA 22217
The work reported herein was supported by the Foundation Research Program of the Naval Postgraduate School with funds provided by the Chief of Naval Research.
Reproduction of all or part of this report is authorized.
This report was prepared by:
BRUCE J. MACLENNAN
Assistant Professor of
Computer Science
Reviewed by:
GORDON H. BRADLEY, Chairman
Department of Computer Science
Released by:
WILLIAM M. TOLLES
Dean of Research
**Title**: The Structural Analysis of Programming Languages
**Author**: Bruce J. MacLennan
**Performing Organization Name and Address**: Naval Postgraduate School, Monterey, CA 93940
**Type of Report & Period Covered**: Technical Report
**Report Date**: September 1981
**DISTRIBUTION STATEMENT**: Approved for public release; distribution unlimited.
**Key Words**: Structural Analysis, Programming Language Metrics, Software Metrics, Software Measurement, Metrics, Programming Language Design Methods.
**Abstract**: A language's structures are some of its most important characteristics. These include the data structures, those mechanisms that the language provides for organizing elementary data values. They also include the control structures, which organize the control flow. Less obviously, they include the name structures, which partition and organize the name space.
Languages can be compared relative to their structures in the data, control, and name domains. This report describes a syntax-independent method of representing the structures of a language which facilitates visual complexity comparisons and is amenable to measurement. The data, control, and name structures of a number of languages are analyzed, including Pascal, LISP, Algol-60, Algol-68, the lambda calculus, FORTRAN, and Basic.
1. Introduction
It is common to find articles in the programming language literature riddled with unsupported claims. Words and phrases, such as 'better', 'simpler', 'more structured' and 'less error prone', are used with abandon. If we were selling aspirin and made such unsupported claims, we would probably be sued. We clearly need more precise ways of measuring our languages.
A language's structures are some of its most important characteristics. These include the data structures: those mechanisms that the language provides for organizing elementary data values. They also include the control structures, which organize the control flow. Less obviously, they include the name structures, which partition and organize the name space.
Languages can be compared relative to their structures in the data, control and name domains (and others, such as the syntactic domain). To make this comparison precise, we need a precise method of describing the structural properties of a language. Further, this method should be syntax independent; it should "look through" the syntax of a language to its underlying structure. In the next section we discuss a means by which programming language structures can be described.
2. Describing Structure
The number of different structures that a programmer can use is essentially unlimited. For instance, there are an infinite number of ways he can organize his data or control flow. Since programming languages are finite, there must be some finite means of generating this infinite number of structures.
The means, of course, is to have some number of primitive structures and some number of constructor functions which take existing structures and compose them into new structures. For instance, Pascal data types are built by applying the data type constructors (array, record, set, etc.) to the primitive data types (real, integer, char, etc.). This results in hierarchical structures. Similarly, control flows may be organized by applying the control flow constructors ('sequence', 'if,' and 'while') to the control flow primitives (those constructs that do not alter the control flow).
The hierarchical application of constructors to primitives is the most common method of building structures. Thus, we can use this as a starting point for our analysis of structures. For instance, as a first approximation, we can compare the complexity of structures of two programming languages by comparing the number of primitives and constructors in each. For example, we can see from Table 1 that Pascal has 5 primitive data types and 7 data type constructors.
TABLE 1. Data Structures
Pascal
5 primitives: real, integer, Boolean, char, text.
7 constructors: subrange, enumeration, set, array, file, pointer, record.
Algol-60
3 primitives: real, integer, Boolean.
1 constructor: array.
Lisp 1.5
1 primitive: atom.
1 constructor: list.
Algol-68
11 primitives: int, real, bool, char, format, compl, bits, bytes, string, sema, file.
6 constructors: long, ref, array, struct, union, proc.
Since Algol-60 has 3 primitives and 1 constructor, it is probably simpler than Pascal. Conversely, since Algol-68 has 11 primitives and 6 constructors it is likely to be more complex. However, the number of primitives and constructors is not the entire story.
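As an aside (ours, not part of the original report), this first-approximation metric is trivial to mechanize; the following Python sketch tallies the counts from Table 1 and orders the languages by them.

```python
# First-approximation structural complexity: primitive and constructor counts
# taken from Table 1; the combined score is only an illustration of the idea.
languages = {
    "Pascal":   {"primitives": 5,  "constructors": 7},
    "Algol-60": {"primitives": 3,  "constructors": 1},
    "Lisp 1.5": {"primitives": 1,  "constructors": 1},
    "Algol-68": {"primitives": 11, "constructors": 6},
}

ranked = sorted(languages.items(),
                key=lambda item: item[1]["primitives"] + item[1]["constructors"])
for name, counts in ranked:
    total = counts["primitives"] + counts["constructors"]
    print(f"{name:10s} primitives={counts['primitives']:2d} "
          f"constructors={counts['constructors']:2d} total={total:2d}")
```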
A significant aspect of the structuring mechanisms provided by a language is the complexity of the inter-relationships among the primitives and constructors. For instance, if the output of every constructor is a legitimate input to every constructor, and every primitive is a legitimate input to every constructor, then the system will be more regular than if this is not the case. This is often called 'orthogonality'. It is also part of what is involved when we call a language 'structured'. In the next section we will develop means for analyzing these relationships.
3. Data Structures
3.1 Semantic Grammars
We will begin with data structures to illustrate our technique for analyzing structure. Our goal is to analyze the interrelationships among the primitives and constructors of a system of data structures. How are we to go about this? We can begin by looking at syntax because, in most languages, there is a close relation between the syntax and the structures it embodies (i.e., form follows function). In particular, there will usually be exactly one syntactic construct for each data primitive. Consider Pascal. We can see from Table 1 that the primitives are denoted by the predefined type identifiers, 'integer', 'Boolean', 'real', 'char' and 'text'. There are constructors for enumerations, subranges, sets, arrays, records, files and pointers. We know that these are constructors because each can generate a potentially unlimited number of structures (types). Since the Pascal grammar tells us what syntactic entities can go together this will be a big help in deciding what semantic entities can go together.
Consider the array type. We can write its syntax as
array-type ::= array [ index-type ,... ] of type
The index-type must be a type isomorphic to a subrange of the integers. Syntactically, this can take the form:
index-type: scalar-type | subrange-type | type-identifier
scalar-type: ( identifier ,... )
subrange-type: constant .. constant
What we are interested in, however, is the semantics of the array constructor. Since we know that the index type must be isomorphic to a subrange of the integers, we know that the type-identifier must either name a scalar-type or a subrange-type or one of the predefined finite discrete-types, Boolean and char. Also, a subrange must be constructed from a discrete constant (i.e., an integer, or an element of a scalar or finite discrete type). We can write this as a "semantics-oriented grammar":
array-type: array [ index-type ,... ] of type
index-type: scalar-type | subrange-type | discrete-type
scalar-type: ( identifier ,... )
subrange-type: constant .. constant
discrete-type: Boolean | char
One further simplification can be made here. Recall that in Pascal
array [i, j] of t
is just an abbreviation for
array [i] of array [j] of t
Thus, without loss of generality, the definition of array-type can be written
array-type: array [index-type] of type
We have not altered the syntax; we have just eliminated some syntactic sugar. The semantics of most of the rest of Pascal's constructors closely follows their syntax.
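As a small illustration of the sugar-elimination step just described (ours, not MacLennan's), the following Python sketch rewrites a multi-index array type into the equivalent nested single-index form, with types represented as plain tuples.

```python
# A type is either a primitive name (a string) or a tuple
# ("array", [index_type, ...], element_type) in the sugared form.
def desugar(t):
    """Rewrite array [i, j, ...] of e into array [i] of array [j] of ... e."""
    if isinstance(t, tuple) and t[0] == "array":
        _, index_types, element = t
        result = desugar(element)
        for index in reversed(index_types):   # fold from the rightmost index outwards
            result = ("array", index, result)
        return result
    return t

# array [char, Boolean] of real  ==>  array [char] of array [Boolean] of real
print(desugar(("array", ["char", "Boolean"], "real")))
# -> ('array', 'char', ('array', 'Boolean', 'real'))
```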
If we are to be able to compare structures in different languages, we must obviously ignore any syntactic differences that exist between them. This we can do by writing the grammar in a neutral, functional form. For instance, for arrays:
array-type: array (index-type, type)
index-type: scalar-type | subrange-type | discrete-type
scalar-type: scalar (identifier+)
subrange-type: subrange (constant, constant)
discrete-type: Boolean | char
3.2 Interpretation
Now, let us make some observations about these rules. Consider a typical string generated by this grammar:
array(char, array(Boolean, real))
This string describes a particular Pascal data type. Now suppose
\[ \text{BOOLEAN} = \{ \text{true, false} \} \]
is the set of all Boolean values and
\[ \text{REAL} \]
is the set of all real values. Then, the set of all arrays with Boolean indices and real elements is just the set of functions mapping BOOLEAN into REAL:
\[ \{ \text{BOOLEAN} \rightarrow \text{REAL} \} \]
Therefore, we can see that the string shown above describes the set of data values:
\[ \{ \text{CHAR} \rightarrow \{ \text{BOOLEAN} \rightarrow \text{REAL} \} \} \]
This suggests that we can define an interpretation function, \( I \), that associates a set of data values with each string generated by the grammar. This can be defined recursively:
\[
\begin{align*}
I[\text{array}(t, t')] &= \{ I[t] \rightarrow I[t'] \} \\
I[\text{scalar}(i_1, \ldots, i_n)] &= \{ i_1, \ldots, i_n \} \\
I[\text{subrange}(C, C')] &= \{ x \mid C \leq x \land x \leq C' \} \\
I[\text{Boolean}] &= \text{BOOLEAN} \\
I[\text{char}] &= \text{CHAR} \\
I[\text{real}] &= \text{REAL}
\end{align*}
\]
To make this interpretation more obvious, we will write \( \text{subrange}(C, C') \) as \( C..C' \), and \( \text{scalar}(i_1, \ldots, i_n) \) as \( \{ i_1, \ldots, i_n \} \). Figure 1 shows the complete Pascal type system using these conventions.
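For readers who prefer running code, here is a small Python rendering of the interpretation function I (our illustration, not part of the report); REAL is replaced by a tiny finite stand-in so that the function spaces stay enumerable.

```python
from itertools import product

# Finite stand-ins for the primitive value sets; REAL is truncated for illustration.
BOOLEAN = {True, False}
CHAR    = {"a", "b"}
REAL    = {0.0, 1.0}

def fun_space(dom, cod):
    """The set {dom -> cod} of all total functions, each one a frozenset of pairs."""
    dom = sorted(dom, key=repr)
    cod = sorted(cod, key=repr)
    return {frozenset(zip(dom, images)) for images in product(cod, repeat=len(dom))}

def I(t):
    """Interpretation of a type term written in the neutral functional form."""
    if isinstance(t, tuple) and t[0] == "array":      # I[array(t, t')] = {I[t] -> I[t']}
        return fun_space(I(t[1]), I(t[2]))
    if isinstance(t, tuple) and t[0] == "scalar":     # I[scalar(i1, ..., in)] = {i1, ..., in}
        return set(t[1:])
    if isinstance(t, tuple) and t[0] == "subrange":   # I[subrange(c, c')] = {x | c <= x <= c'}
        return set(range(t[1], t[2] + 1))
    return {"Boolean": BOOLEAN, "char": CHAR, "real": REAL}[t]

# I[array(Boolean, real)] has |REAL| ** |BOOLEAN| = 4 elements with these stand-ins.
print(len(I(("array", "Boolean", "real"))))
```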
Defining the interpretation for record-type and pointer-type is quite complicated without the notations of a relational
The interpretations of the set and file types are easy to define:
\[ I[\text{set}(t)] = P(I[t]) \]
\[ I[\text{file}(t)] = I[t]^* \]
where \( P \) is the power-set function.
It should be noted that the above equations imply structural equivalence of Pascal types, as opposed to name equivalence. The Revised Report on Pascal [4] does not define the form of type equivalence used. It is simple to alter the above definitions to accommodate name equivalence; we just represent each type by a pair where the first element of the pair is the type's identifier and the second element of the pair is the type in the structural sense. Thus we have,
\[ \text{type: identifier } \times \text{ unnamed-type} \]
\[ \text{unnamed-type: simple-type } | \text{ structured-type } | \text{ pointer-type} \]
It should be pointed out that there are limitations to the descriptive power of this notation. For instance, it does not express the fact that the identifiers in scalar-types must be distinct, or that type identifiers must be distinct, etc. To include all this information would clutter the notation to the point of unusability.
4. Structure Diagrams
We have said that the complexity of a collection of structures is reflected by the complexity of the semantic grammar. It is still a little difficult to see this complexity in the traditional BNF form. For this purpose we have found a diagrammatic form enlightening. This is really a dependency graph (showing which nonterminals depend on which others) coupled with special symbols for various operations, viz.
A* (zero or more A's), A+ (one or more A's), A × B (a pair of an A and a B), A | B | C (a choice among A, B, and C), and [A | B],
where \([A \mid B]\) means either \(A\) or \(B\) or nothing.
In our semantic grammars (as in syntactic grammars) common structural patterns are factored out and given names. This reflects the fact that these structural patterns only have to be learned once. In the structure diagrams this factoring is represented by an edge that forks and goes to each of the uses of that structure. For example, since 'index-type' is used both as a part of 'discrete-type' and as a part of array and set types, the edge from index type goes to the subgraphs defining each of these structures. We have adopted the convention of only using binary forks; since edges represent dependencies, this simplifies complexity estimation by edge counting.
Structures from other systems are represented by T-shaped terminations. Given this explanation, the reader is encouraged to compare the diagram of Pascal's data structures in Figure 2 with the semantic grammar in Figure 1. The data structures of LISP, Algol-60, and Algol-68 are diagrammed in Figures 3 - 5.
Figure 2. The Pascal Type System
Figure 3. The LISP Type System
Figure 4. The Algol-60 Type System
Figure 5. The Algol-68 Type System
5. Name Structures
Next, we will demonstrate the application of these techniques to the name structures, another subsystem of programming languages. The name structures of programming languages are often described by terms such as "block-structured", "monolithic", "disjoint", etc. To get a better grasp on these structuring techniques we must ask, "What is being structured?" To put it more precisely, "What relation or relations are being controlled by the structuring mechanisms in question?"
For name structures this relation is visibility, that is, the relation that holds between a binding and a use of an identifier when that use can refer to that binding. Thus, the primitives from which name structures are assembled are bindings and uses of identifiers, and the constructors used to assemble these structures are mechanisms such as block structure.
How can we abstract the name structures from a programming language? Again, we can use syntax as a guide. In Figure 6 we show the fragments of Algol-60 syntax relevant to visibility. Irrelevant parts of the syntax have been elided. Each string generated by this grammar (ignoring reordering of declarations, etc.) defines a unique name structure, i.e., a structural arrangement of visibility relations. In Figure 7 we have formulated a semantics-oriented grammar for these relations.
<identifier> ::= ...
<block> ::= <block head> ; <compound tail>
<block head> ::= begin <declaration> | <block head> ; <declaration>
<compound tail> ::= <statement> end | <statement> ; <compound tail>
<program> ::= <block> | <compound statement>
<procedure declaration> ::= [<type>] procedure <proc. heading> <proc. body>
<proc. heading> ::= <proc. identifier> <formal par. part> ;
<formal par. part> ::= ( <identifier> ,... )
<declaration> ::= <proc. decl.> | <other decl.>
Figure 6. A Fragment of Algol-60
program: executable
block: scope (declaration+, executable)
declaration: simple-decl | proc-decl
proc-decl: identifier × scope (simple-decl*, executable)
simple-decl: identifier
executable: {identifier | block}*
Figure 7. The Algol-60 Name System
Notice that, from the visibility standpoint, a procedure declaration is the same as a block; they both bind local identifiers and delimit a scope. Figure 8 shows the Algol-60 name system in diagrammatic form. The following figures (9-11) show the name systems of the lambda calculus, FORTRAN and Pascal.
Figure 8. The Algol-60 Name System
Figure 9. The Lambda-Calculus Name System
Figure 10. The Pascal Name System
Figure 11. The FORTRAN Name System
In the latter case (Pascal), note that we have analyzed the record declaration as a scope defining (or name grouping) constructor. Figure 12 compares the complexities (as measured by edge-count) of these name systems along with the complexities of their type systems.
Figure 12. Comparison of the Complexities of the Name and Type Systems
6. **Control Structures**
Control structures are analyzed in the same way as the other structures. These are reflected in the equations and structure diagrams shown in Figures 13-16.
Figure 13. Pascal Control Structures
Figure 14. LISP Control Structures
Figure 15. FORTRAN Control Structures
Consider Pascal; the relevant parts of the grammar are shown in Figure 17. These diagrams are somewhat deceptive because they do not reflect the extraordinary complexity introduced into the control structures by the goto statement. An analogous complexity is caused in data structures by the pointer construct. These are both examples of non-local references, whose proper treatment remains an open question.
simple-statement: assign-stat | proc-stat | goto-stat | empty
assign-stat: expr
function-desig: call (fid, exprlist)
exprlist: expr*
expr: function-desig*
proc-stat: call (fid, {expr | fid}*)
goto-stat: goto (label)
statement: [label] x unlab-stat
unlab-stat: simple-statement | struc-stat
struc-stat: comp-stat | cond-stat | rep-stat | with-stat
comp-stat: statement+
cond-stat: if-stat | case-stat
if-stat: if (expr, stat, [stat])
case-stat: case (expr, case-list-element+)
case-list-element: const+ x statement
rep-stat: while-stat | repeat-stat | for-stat
while-stat: while (expr, stat)
repeat-stat: repeat (stat+, expr)
for-stat: for (id, forlist, stat)
forlist: expr x [down] x expr
with-stat: with (expr+, stat)
Figure 17. Pascal Control Structure Grammar.
7. Conclusions
The techniques we have described provide a simple, visual method of comparing the structuring methods provided by programming languages. Languages can often be ranked as to their structural complexity by comparing the complexity of their structural grammars or structure diagrams. In addition, the diagrams allow the language designer to appraise the regularity or irregularity of a structural subsystem and to identify areas where they can be simplified.
Of course, it is very desirable to be able to quantify these ideas, and there are many approaches to this quantification. One of the simplest, which was used in this paper, was to count the number of edges in the graph, since this reflects the dependencies within the system. In the cases we have investigated, this metric agrees with our informal evaluation.
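To make the edge-count metric concrete, the sketch below (ours) stores a structure diagram as a plain dependency graph and counts edges; the sample data paraphrases the Algol-60 name-structure grammar of Figure 7, so the resulting number depends on this particular encoding rather than on the report's hand-drawn diagrams.

```python
# A structure diagram reduced to a dependency graph: each nonterminal maps to
# the constructs it depends on, one edge per dependency occurrence.
algol60_name_system = {
    "program":     ["executable"],
    "block":       ["declaration", "executable"],
    "declaration": ["simple-decl", "proc-decl"],
    "proc-decl":   ["identifier", "simple-decl", "executable"],
    "simple-decl": ["identifier"],
    "executable":  ["identifier", "block"],
}

def edge_count(graph):
    """Complexity estimate: the total number of edges in the dependency graph."""
    return sum(len(deps) for deps in graph.values())

print("Algol-60 name system edges:", edge_count(algol60_name_system))  # 11 with this encoding
```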
There are, of course, other graph theoretic measures that can be applied, for instance, variants of McCabe's Cyclomatic Number [3], although which is the best remains an open question. It is also possible to apply the measures of Halstead's "Software Science" [1] to either the structural grammar or the structure diagrams. This has also been tried, but this work is still in progress [2].
Although the proper measure to be applied remains an open problem, the representation of structures in a measurable form, such as the structure diagrams, is a first step towards the development of these notions. Future research will attempt to refine the analysis of structures and their representation as graphs, and will attempt to develop appropriate measures of their complexity.
References
4. Wirth, N. The Programming Language Pascal (Revised Report).
INITIAL DISTRIBUTION LIST
Defense Technical Information Center
Cameron Station
Alexandria, VA 22314
Dudley Knox Library
Code 0142
Naval Postgraduate School
Monterey, CA 93940
Office of Research Administration
Code 012A
Naval Postgraduate School
Monterey, CA 93940
Chairman, Code 52Bz
Department of Computer Science
Naval Postgraduate School
Monterey, CA 93940
Professor Bruce J. MacLennan, Code 52M1
Department of Computer Science
Naval Postgraduate School
Monterey, CA 93940
Chief of Naval Research
800 N. Quincy Street
Arlington, VA 22217
A TOOLBOX APPROACH TO NETWORKING SAS® EXPERTS
OR
RE-USING INSTEAD OF RE-INVENTING THE WHEEL
Don Henderson, ORI, Inc.
1. ABSTRACT
Toolboxes are often used by data processing groups in the development of computer systems. The concept of a toolbox is straightforward; generalized tools (e.g., software modules) are developed and packaged so that they may be easily used by an entire team or staff of people. PROC PRINT is an excellent example. SAS programmers do not have to write PUT and FILE statements each time they want to display a report of data; SAS Institute has developed a tool (PROC PRINT) and put it in a toolbox (the procedure library) available to all SAS users. Macro tools presented at this and previous SAS conferences provide other examples.
Unfortunately, toolboxes have traditionally been used only for generic routines which can be taken "off the shelf" and used as is (e.g., the SAS procedure library and user developed libraries of procedures and Macros). Toolboxes have a much broader application, however, especially in the educational and consulting environment. By generalizing toolboxes to include "sample applications" which can be used in a variety of application areas, it is possible to greatly enhance and extend the ability and productivity of SAS programmers.
2. INTRODUCTION
A toolbox is a collection of routines or software modules that an applications programmer or systems designer can use or consult in the development of new programs or systems. Programs in a toolbox can range on a continuum from the SAS procedure libraries, through user developed generalized Macro routines, to sample or prototype code which toolbox users can use to guide them in solving their own special problems.
Toolboxes are widely used by data processing organizations, both at the organizational and application area level in the form of a library of routines or subroutines which are available for general use. This is true regardless of the language of choice. In the more traditional languages such as PL/I or COBOL, such toolboxes contain subroutines. The SAS System itself has taken advantage of this concept: it is a programming language (the DATA Step) with a built-in collection of tools (procedure library) that are automatically available to the user. In fact, other Institute products are nothing more than a second level of sample toolboxes which toolbox users can use to guide them in solving their own special problems.
There are numerous reasons why toolboxes are advantageous. Many of them are the same reasons why the SAS System itself is so popular and widely used. Among the more prominent reasons are:
- they minimize the amount of application specific code that needs to be written;
- if a common function needs to be performed in a variety of places, the use of one toolbox routine increases the reliability of the application;
- the availability of easy to use tools can lead to the development of more robust, reliable and effective systems;
- they increase the productivity and, through time, the expertise of toolbox users; and
- their use increases the probability that a project can be completed in a timely manner.
Two levels of sample toolboxes will be discussed — Macro libraries, and libraries of sample or prototype code — followed by a discussion of how to build and maintain toolboxes.
3. SAMPLE TOOLBOXES
3.1 A LIBRARY OF MACRO TOOLS
The use of Macro libraries and tools has been discussed in many previous papers (see references 1-7). Many organizations maintain their own Macro libraries because through their use, programmers and analysts are more productive and build and maintain better systems. In general, such libraries (or toolboxes) should contain packaged and easy to use code which solve common problems to the organization. These problems may be standard problems (e.g., checking for an empty transaction file) or may solve some problem or produce some specific report (e.g., produce report ONE on a specified user data set).
As mentioned above, a general Macro library should contain Macros which solve common data processing problems such as:
- conditionally producing test output based on the setting of a Macro variable so that "test code" can always be available to a production application;
- checking for the existence of a data set;
- checking to see if a data set (transaction file) has observations;
- creating a format table from information stored in a SAS data set;
- checking for the validity of user input:
- valid SAS names,
- variables are of correct type (numeric vs. character),
- variables exist in a specified data set,
- user selected option is valid.
A second level of Macro toolboxes corresponds to application-specific tools. These come into play if SAS based systems are designed as discussed in reference 5. Systems can be designed so that they are implemented through a "driver" Macro which calls or executes a series of "subroutine" Macros (see Figure 1). Each "subroutine" Macro can then be used from the toolbox. For example, suppose that system ABC produces report ONE by having the driver program call %REPTONE on a specified data set. Report ONE can be generated either from within the system or externally. It is only necessary to call Macro %REPTONE with the required input parameters, e.g., %REPTONE(...,DATA=SPECIAL).
```sas
%MACRO ABC(parameter list);
   %GLOBAL ...;
   %LOCAL ...;
   %EXTRACT (...);
   %RESHAPE (... ,OUT=FINAL);
   %REPTONE (... ,DATA=FINAL);
%MEND ABC;
```
**FIGURE 1. Sample Driver Macro**
3.2 A LIBRARY OF PROTOTYPE CODE
A library of prototype code can contain a wide variety of programs. It is hard to judge in advance whether the inclusion of a typical piece of code will be productive or not. Most often, code that is unique in some way, either because it solves a difficult problem or because it is a novel solution to a common problem, is the most productive type of code to include. It is important to recognize that such toolboxes are not bound by application area. In order to illustrate these points, this section will discuss, by example, the utility of toolboxes of prototype code.
The first example presents a problem that a research scientist is having with data he has collected in a blind trial. His data set (CHEM.TRIAL) contains observations for many subjects and he needs to compute descriptive statistics by treatment on a specified subset of the patients. He has two problems with the data set. First, it has no treatment identifier. Second, he only wants to select subjects who started the study after May 1986. Since the data set does not contain the necessary information for him to complete his task and because he cannot add the information to the permanent data set, he decides to create a second dataset (OTHER.DESCRIP) which has the subject identifier (SUBJ_ID) and the appropriate identification and selection fields. He develops the code and places it in the prototype code toolbox (see Figure 2).
```sas
DATA STATS;
MERGE CHEM.TRIAL OTHER.DESCRIP;
BY SUBJ_ID;
DROP START;
IF '01MAY86'D <= START;
RUN;
PROC MEANS;
BY TRTMNT;
TITLE 'Descriptive Statistics by Treatment';
RUN;
```
**FIGURE 2. Sample Toolbox Code**
Upon reviewing the prototype code library, the marketing analyst discovers that the solution developed by the research scientist can be used to solve her problem. First, she creates a parameter file data set REPT.GROUPING (see Figure 3) which specifies for each DEMOGRP the additional special groups to which it belongs.
| DEMOGRP | N_GROUPS | GRP1 | GRP2 | GRP3 |
|---------|----------|------|------|------|
| 1       | 3        | S1   | S2   | S3   |
| 2       | 1        | S3   |      |      |
| 3       | 1        | S3   |      |      |
| 4       | 2        | S1   | S3   |      |
| 5       | 2        | S2   | S3   |      |
| 7       | 1        | S1   |      |      |
| 8       | 1        | S2   |      |      |
| 9       | 1        | S1   |      |      |
**FIGURE 3. Parameter File Data Set**
Next, she includes the code shown in Figure 4 just before the call to %SALESRPT.
```sas
DATA REPORT;
   MERGE ORIGDATA REPT.GROUPING;
   BY DEMOGRP;
   ARRAY GRPS (3) GRP1 GRP2 GRP3;
   DROP N_GROUPS GRP1 GRP2 GRP3 I;
   OUTPUT;                /* the original group */
   /* Output the record again, associating it with each */
   /* new group to which the input group belongs        */
   DO I=1 TO N_GROUPS;
      DEMOGRP = GRPS(I);
      OUTPUT;             /* each new group */
   END;
RUN;
%SALESRPT(DATA=REPORT,    /* now contains the new groups */
          ...as before...)
```
**FIGURE 4. Toolbox Code To Use the Parameter Data Set**
She generalized the solution that the research scientist came up with. His problem was to assign the group to a record. Her solution was to "duplicate" the records for specific groups, creating a new record for each of the new group identifiers.
The last example presents a problem from the accounting department. The programmer responsible for Profit and Loss (P&L) report generation has just received another change to the order and calculation formulas for the P&L account lines. He has become tired of digging into the body of the code to change the sort order and calculation for each account line, so he decides to try to come up with a procedure that is not as labor intensive. Upon reviewing the prototype code library, he discovers the solution developed by the marketing analyst and decides that such a table driven approach will work for him as well. Part of the P&L with its new formulas is shown in Figure 5.
**FIGURE 5. Selected P&L Account Line Formulas**
Gross Sales: Accounts 5801, 5802 and 5805
Cost: Account 1203 plus 1/2 of 1204
Net Sales: Gross Sales - Costs
Taxes: .25 × Net Sales
Profit: Net Sales - Taxes
A partial listing of his data set that translates P&L lines to their sort order and provides labels is stored in PANDLLINES (Figure 6).
| LINE | SORTORDR | LABEL       |
|------|----------|-------------|
| 14   | 17       | Gross Sales |
| 17   | 10       | Costs       |
| 22   | 19       | Net Sales   |
| 28   | 20       | Taxes       |
| 34   | 21       | Profit      |
**FIGURE 6. P&L Lines.**
A partial listing of the parameter file data set of formulas, PANDLFORMULAS is given in Figure 7.
| ACCOUNT | N_LINES | TO1 | TO2 | TO3 | TO4 | FR1 | FR2 | FR3   | FR4   |
|---------|---------|-----|-----|-----|-----|-----|-----|-------|-------|
| 1203    | 4       | 17  | 22  | 28  | 34  | 1   | -1  | -.25  | -.75  |
| 1204    | 4       | 17  | 22  | 28  | 34  | .5  | -.5 | -.125 | -.375 |
| 5801    | 4       | 14  | 22  | 28  | 34  | 1   | 1   | .25   | .75   |
| 5802    | 4       | 14  | 22  | 28  | 34  | 1   | 1   | .25   | .75   |
| 5805    | 4       | 14  | 22  | 28  | 34  | 1   | 1   | .25   | .75   |
**FIGURE 7. P&L Formulas.**
Looking at the variables TO1-TO4 and FR1-FR4, one can see that Line 14 (Gross Sales) is calculated as 5801+5802+5805; Line 22 (Net Sales = Gross Sales - Cost) is calculated as 5801+5802+5805-1203-1204/2. The calculations are now totally parameter driven.
The code for his solution to the problem is shown in Figure 8.
```sas
/* Step 1: call the toolbox routine that creates the SORTORDR and LABEL */
/* formats, using the PANDLLINES data set as input.                     */

DATA PANDL (KEEP=LINE ORDER COST);
   MERGE INPUT (KEEP=ACCOUNT AMOUNT) PANDLFORMULAS;
   BY ACCOUNT;
   ARRAY LINES (*) TO1 TO2 TO3 TO4 ... ;
   ARRAY FACTORS (*) FR1 FR2 FR3 FR4 ... ;
   /* Associate each input account with the appropriate account line, */
   /* including the application of any factors                         */
   DO I=1 TO N_LINES;
      LINE  = LINES(I);
      COST  = AMOUNT * FACTORS(I);
      ORDER = PUT(LINE, SORTORDR.);
      OUTPUT;
   END;
RUN;

/* Step 2: call the routine/Macro that produces the P&L report, making */
/* sure to use the LABEL format created above. Note that this routine  */
/* will aggregate the data to the account line level.                  */
```
**FIGURE 8. Parameter Driven P&L Generation**
Once again, someone has used the solution to a simpler problem to solve a more complex one. For this solution, the P&L programmer built upon the market research analyst's solution by including factors in his parameter file. He now has an application which can absorb changes merely by changing the parameter files. It is no longer necessary to change and retest the main body of the application code.
Each one of the three toolbox members discussed above contributed to a solution of a different problem in a different application area. None of the above pieces of code lent itself to being addressed in a generalized Macro, yet their inclusion in a toolbox performed a comparable function to a Macro. In each case, since the developer of the code included it in a toolbox library, another programmer found a solution to his/her own problem. It is unlikely that the P&L programmer would have come up with the solution to his problem so quickly if he had not had the toolbox code to use as a model.
Quite often, after several generations or iterations, a programmer will be able to develop generalized Macro code. A paper presented at this year's SUGI (see reference 8) presents a collection of Macros that solve table lookup problems through hashing techniques. This started from the development, use and enhancement of "prototype sample code."
**4. BUILDING AND MAINTAINING TOOLBOXES**
In order for toolboxes to be effective they must be supported and encouraged by all of the levels in the organization, from the junior programmers to the most senior managers. This commitment is crucial because without it the toolboxes will not get either the use or the exposure that they require. During the beginning phases of toolboxing it is very hard to get an organization to spend any effort on organizing a toolbox environment. In many cases, toolboxes will start from the bottom up. A single analyst or programmer will start a toolbox for his/her own use; it will then grow to be used by an entire project or system; and finally the organization as a whole will support it. At first, it appears that toolboxes will divert resources from the problem at hand. (One easy way to start a toolbox is to consult papers presented at SUGI, e.g., references 1-14). However, once an individual, organization or group commits itself to toolboxing, its merits become obvious and support for its maintenance and enhancement grows rapidly. This enthusiasm is good; but it must be controlled. The following should be addressed by any organization that plans on utilizing the toolbox concept:
- Coding Conventions
- Documentation Standards
- Code Walkthroughs
- Code Organization
- Maintenance and Control
Each of these is discussed in the following sections.
**4.1 CODING CONVENTIONS**
In order to be effective, toolbox routines, whether they are generalized Macros or sample code, must be written clearly and effectively. Many organizations already have coding conventions. These should be rigorously enforced for toolbox code. Such code should be thoroughly commented to explain the purpose and function behind each step. The easier the code is to read and understand, the further its use will spread. Comments should give emphasis to a generic statement of the problem being solved.
For generalized Macro tools, coding conventions must include how the Macros and parameters are named and used. Some important conventions would include:
- Consistent parameter specification
- Use of the typical "SAS defaults"
- Meaningful Macro and parameter names
As is the case with all of the factors discussed in this section, it is important to have a "committee" which oversees and controls the toolboxes. This committee should review and screen all entries to the toolbox to ensure that the appropriate coding conventions have been followed.
4.2 DOCUMENTATION STANDARDS
It is well known that software without good documentation can not be used to its fullest. This is especially true of toolboxes. Each member (i.e., each Macro or piece of prototype code) must be thoroughly and completely documented. The documentation requirements are different for Macro toolboxes as opposed to prototype code toolboxes.
For Macro toolboxes, the documentation for each member should include:
- a clear and concise statement of the function of the Macro;
- complete descriptions of all inputs and outputs to and from the Macro;
- detailed explanations of each parameter, including what each parameter value (or class of values) does;
- sample executions of the Macro, including annotated output of both the SAS Log and Print pages;
- a complete list of any "side effects" of calling the Macro.
For toolboxes of prototype code the documentation should be heavily weighted towards a complete description, as generic as possible, of the problem. Almost as important is a detailed discussion of the techniques/code used to solve the problem along with the reason why the particular technique was chosen. Why a particular technique was used can be the most important information of all. Consider the prototype code developed by the marketing analyst: she merged in the new group requirements rather than using IF statements to create them, because doing so made the solution data driven and thus easy to change.
4.3 CODE WALKTHROUGHS
Frequent walkthroughs should be held to discuss toolbox code and to maximize its exposure. The implementation of such walkthroughs is very different for generalized Macro tools as opposed to prototype code.
For Macro tools, walkthroughs should begin with a statement of the problem and should be geared towards collecting input on all of the required features. Quite often development will precede this; however, it is still useful to have a walkthrough before the code is completed (and installed). In addition to this walkthrough, two other types of walkthroughs should be held. First would be one attended by fellow programmers whose goal is to make sure that the code is clean and efficient. Second would be one attended by potential users that addresses how the Macro is used and its potential "side effects".
For prototype code, walkthroughs should have more of an emphasis on the problem and a general discussion of how it was solved. In this way, the organization can maximize the exposure of the techniques. In many cases, the original author may not realize the full impact or utility of what they have developed. It is only through the input of others attending the walkthrough that prototype code can be fully appreciated. Such walkthroughs may identify a future Macro tool that needs to be developed.
The committee responsible for toolboxes should make sure that these walkthroughs are scheduled on a regular basis and are convenient for staff to attend.
4.4 CODE ORGANIZATION
In order for any toolbox to be effective, the code must be centralized, available and accessible. It is recommended that the code be stored on-line on a specific userid/path/directory (location). This location should be devoted to toolboxes; it should not be shared with any other functions.
For Macro toolboxes, the member name should be the same as the Macro name. This has two advantages. It makes it easy to locate specific Macros and it also makes the use of the SAS AUTOCALL facility possible.
For prototype code toolboxes, an important criterion is that the member names, in addition to being unique, either be meaningful or, if that is not possible, that a cross reference dictionary that maps each name into a brief description be supplied.
Such a cross reference is a good idea for both types of toolboxes. It should be located in a "member" of the toolbox library, perhaps called "AAHELP" (so that it comes out first alphabetically in, for example, PROC SOURCE).
4.5 MAINTENANCE AND CONTROL
As is the case with all code, toolbox code must be maintained. Since toolbox code will typically get wider exposure than application specific code, maintenance is particularly important. The importance of controlling the toolbox environment is also an issue of critical importance. In order for the code to be effectively used in developing applications, the general user must be assured that the toolbox contains code that works as described; the most up to date copy of the code; all of the applicable code (i.e., there are not an undefined number of toolboxes which the user has to first find and then review). Only if all of these conditions are met, can the use of the toolbox gain wide acceptance and use.
The committee that controls the toolbox environment should make sure that only designated staff have write access to the toolboxes. No unauthorized staff should be allowed to add something to or change a member in the toolbox. Unless the proper procedures are followed, the integrity of the toolbox is in jeopardy. It is important to make sure that only tools that meet the above conditions find their way into the toolbox library, with a designated member of the committee responsible for reviewing test runs to validate the accuracy and reliability of the code. It is also important for the committee to make sure that all useful tools make their way into the library. This requires that committee members come from a broad range of the organization and "beat the bushes" to make sure they are aware of what is going on in terms of SAS applications. If an organization permits a proliferation of toolboxes, most of the benefits are lost.
For Macro toolbox code, maintenance should begin when the code is installed in the toolbox. Before its installation the committee should validate that a complete test bed has been developed, run and its output saved for comparison with future test runs. All changes, corrections and enhancements to the Macro should be coordinated through the committee which will make sure that the test bed is run again. If any new test cases are necessary, the staff member designated by the committee should add them to the test bed. The test bed should also be run whenever a new release of the SAS system is installed to make sure it does not introduce any problems.
For prototype code, there are two primary maintenance activities. First, the committee should validate that the code still works with new releases of the SAS System. Second, obsolete code should be deleted. Code can become obsolete when there are new and better ways of solving the same problem (i.e., new SAS products/features, Macros or newer prototype code).
5. CONCLUSION
All organizations can increase both their productivity and expertise by making sure that they widely distribute the techniques and methodology they have developed. This is true for both generalized and specific solutions to problems. You can never tell how someone else can use an idea or technique that you have developed.
Properly implemented, toolboxes facilitate communication, the spread of ideas, and prevention of duplication of effort.
The author can be contacted at:
Don Henderson
ORI, Inc.
Suite 1000
601 Indiana Avenue, N.W.
Washington, D.C. 20004
(202)337-2666
SAS is a registered trademark of SAS Institute, Inc.
6. REFERENCES
A Policy-Based Architecture for Container Migration in Software Defined Infrastructures
Xu Tao*
Politecnico di Torino, Italy
xu.tao@studenti.polito.it
Flavio Esposito
Saint Louis University, USA
flavio.esposito@slu.edu
Alessio Sacco
Politecnico di Torino, Italy
alessio_sacco@polito.it
Guido Marchetto
Politecnico di Torino, Italy
guido.marchetto@polito.it
Abstract—Software-Defined Networking (SDN) is a paradigm that enables easier network programmability based on the separation between the network control plane and the data plane. Network Function Virtualization (NFV) is another recent technology that has enabled the design, deployment, and management of softwarized network services. The vast majority of SDN and NFV based architectures, whether they use Virtual Machines (VMs) or Lightweight Virtual Machines (LVMs), are designed to program forwarding, probably the most fundamental among all network mechanisms.
In this paper, instead, we demonstrate that there are other (equally important) networking mechanisms that need programmability. In particular, we designed, implemented, and extensively tested an architecture that enables policy-programmability of (live) migration of LVMs. Migration is used for maintenance, load balancing, or as a security mechanism in what is called Moving Target Defense (a virtual host migrates to hide from an attacker).
Our architecture is based on Docker and it is implemented within a Software-Defined Infrastructure. The migration mechanism can be configured easily by means of a configuration file, yielding a novel policy-based architecture. We evaluated the performance of our system in several scenarios, over a local Mininet-based testbed. We analyzed the tradeoff between several Load Balancing policies as well as several Moving Target Defense solutions inspired by network coding.
Index Terms—software defined networking, container migration, moving target defense.
I. INTRODUCTION
The recent surge in popularity of Cloud Computing and the Internet of Things (IoT) has resulted in a large number of widely deployed IoT networks. As new technologies appear, today's networks become harder and more complex to manage and monitor, and new networking solutions have emerged in response. For instance, Software Defined Networking (SDN) is a recent network paradigm that addresses this complexity: by decoupling the control layer from the data layer, it makes it possible to use powerful, centralized control to meet the requirements of the underlying data planes. Network Functions Virtualization (NFV), in turn, is a new method to design, deploy, and manage networking services. Virtual Machines (VMs) are widely used to implement NFV, but Lightweight Virtual Machines (LVMs), such as Docker containers, are a more efficient alternative. The Docker technology decouples applications from the infrastructure and developers from IT operations, enabling a model for better collaboration and innovation.
Why is a policy-based programmable migration mechanism needed? Built on these new network solutions, migration is widely used in cloud infrastructures and data centers. Migration is the movement of a virtual machine from one physical host to another; it happens without the awareness of end users. It supports network maintenance, load balancing, and failure recovery, helping to provide an always-available system. Beyond that, it can also be used as a security strategy, namely moving target defense.
Existing migration solutions mostly focus on VMs [1] and Virtual Routers (VRs) [2]. Moreover, they are usually ad-hoc, targeting a specific migration policy, for instance, load balancing [3] or energy optimization [4]. Container migration has received far less attention. A container is a lightweight virtual machine: rather than virtualizing the hardware, it virtualizes at the operating-system level, and compared with a full virtual machine it is much lighter.
When migration speed is a primary requirement, container migration can therefore be a good solution. Virtualization is one way to enable network programmability, and software defined networking is a good example: above the control plane it is flexible and easy to develop applications such as routing, access control, and so on. However, this programmability covers only the forwarding mechanism. In addition, network protocols are usually designed in an ad-hoc fashion. Different versions of TCP or of routing protocols exist; some are suitable for bandwidth-sensitive applications, some for delay-sensitive applications, some aim at security, and some at better performance. There is no one-size-fits-all solution, which is why a policy-based programmable migration mechanism is needed.
Our contribution. First, we designed a policy-programmable container migration architecture based on Docker; the policy-based architecture allows us to change policies with a simple configuration file, so programming the migration mechanism is easy. Second, we tested security and load balancing policies within our SDN-based prototype over Mininet. Third, we designed and evaluated novel Moving Target Defense (MTD) solutions inspired by network coding.
The policy-based migration system can perform software-defined measurement based on the network traffic statistics obtained through the SDN controller. We developed algorithms to make the migration decision and applied them to two use cases. The first is Load Balancing, for which we provide three policies: bandwidth-based, shortest path, and random. The second is Moving Target Defense, for which we provide three policies inspired by network coding: Shamir, Digital Fountain, and a Pseudo-Random function.
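To make the three Load Balancing policies more concrete, here is a minimal sketch of how they could be expressed as interchangeable functions; the host names and statistics fields are our own illustration, not taken from the paper.

```python
import random

# Each policy maps (candidate hosts, per-host statistics) to a migration destination.
def bandwidth_based(hosts, stats):
    """Pick the host that currently reports the most free bandwidth."""
    return max(hosts, key=lambda h: stats[h]["free_bandwidth_mbps"])

def shortest_path(hosts, stats):
    """Pick the host reachable with the fewest hops from the migration source."""
    return min(hosts, key=lambda h: stats[h]["hops_from_source"])

def random_choice(hosts, stats):
    """Pick any candidate uniformly at random."""
    return random.choice(list(hosts))

POLICIES = {"bandwidth": bandwidth_based,
            "shortest_path": shortest_path,
            "random": random_choice}

stats = {"h2": {"free_bandwidth_mbps": 40, "hops_from_source": 3},
         "h3": {"free_bandwidth_mbps": 25, "hops_from_source": 1}}
print(POLICIES["bandwidth"](stats.keys(), stats))       # -> h2
print(POLICIES["shortest_path"](stats.keys(), stats))   # -> h3
```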
The paper is organized as follows. In Section II, we discuss related migration solutions. Section III describes our migration system architecture. In Section IV we present two use cases: load balancing and moving target defense. Section V illustrates the experimental validation results we obtained. Finally, the work is concluded in Section VI.
II. RELATED WORK
Several network migration solutions exist nowadays, and considerable work has been done on live VM migration [5]–[7]. In addition, several papers compare and analyze the factors that can affect migration performance. In [8]–[10], the authors examined the major issues of virtual machine live migration using metrics such as downtime and total migration time, classifying the techniques and comparing the different solutions. Containers (lightweight virtual machines, LVMs), however, have emerged as a more recent virtualization technique: rather than virtualizing the hardware infrastructure, they virtualize at the operating-system level. Recently, attempts to use containers instead of VMs have been proposed [11]; they focus on reducing migration time, with no concern for the network traffic situation. In our work, we concentrate on container migration because, compared to a VM, a container is lighter and its migration can be faster. Our policy-based system adapts migration to different application needs by just changing a configuration file. This is, to the best of our knowledge, the first attempt to build an architecture that enables a programmable migration mechanism.
Moving Target Defense (MTD) is a new security paradigm. Instead of defending an unchanging infrastructure by detecting, preventing, monitoring, tracking, and remedying threats, moving target defense makes the attack surface dynamic. Many attempts have been proposed to achieve security through MTD. For instance, U-TRI adopts a randomly changing identifier to replace the original static data link layer address [12], defending traffic privacy by obfuscating identifiers in the network and transport layers. A different approach is used in WebMTD, which randomizes certain attributes of web elements to differentiate the application code from injected code and disallow its execution [13]. A more general solution is Mutated Policies [14], an attribute-based defense strategy for access control that carefully selects the attributes that uniquely identify the entities involved and then randomly mutates the original access policies over time by adding additional policy rules constructed from the newly-identified attributes.
In our migration system, we move the container from one host to another, so that the IP address of the hosted machine keeps changing. We then improve existing algorithms with information about the network, integrating the polynomial secret-sharing concept with a digital fountain mechanism.
measurement system enables simplicity and flexibility in collecting network traffic statistics. (3) The Migration Manager monitors the process and makes migration decisions. In a configuration file, we specify a set of threshold parameters and the policy name. In our prototype we implemented two sets of policies for two use cases, Load Balancing and Moving Target Defense; users can, however, easily implement their own policies. This component includes a Migration Brain, which executes the policy specified in the configuration file. (4) The Migration Daemon is the process running on the hosts that handles the migration process. We use the Docker API to create, start, and stop containers and to take memory and storage snapshots of the current container status. A schema of our prototype implementation is shown in Figure 2.
B. Migration Model and Protocol
The Migration Manager makes the migration decision and communicates the destination host to the source host. When the source receives the command and the migration destination IP address, it starts the migration process. We defined a Migration Protocol to execute such a migration. First, the Migration Manager makes the migration decision and communicates it to the Source Host with a "MIGRATE" command. At this point, the Migration Source Host takes the snapshots and stores the image files of the currently running container (docker checkpoint). After that, it transfers the container image files to the Destination Host; during this transfer, the source host does not stop providing the service. The communication between Source and Destination Host starts with a "RESTART" command sent by the source, followed by the information about the container image files. Once the Destination has received all the required details, it restarts the container. After the service starts, the Destination sends a "SUCCESS" command to the migration source host. Then the TCP connections between all the parties involved are closed, the source host stops the container providing the service, and the routing is redirected to the migration destination host.
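To make the message flow concrete, the following is a minimal sketch of the protocol in Python. The command names (MIGRATE, RESTART, SUCCESS) come from the description above, but the in-process Host objects, the checkpoint/restart stand-ins, and the container name are illustrative assumptions rather than the actual Docker-based implementation.

```python
# Minimal sketch of the migration protocol; Docker checkpoint/restore and the
# TCP transport are replaced by in-process objects for illustration.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    containers: dict = field(default_factory=dict)   # container name -> snapshot

    def checkpoint(self, container):
        # Stand-in for `docker checkpoint`: snapshot of memory and storage.
        return {"container": container, "state": self.containers[container]}

    def restart(self, image):
        # Stand-in for restoring the container from its image files.
        self.containers[image["container"]] = image["state"]
        return "SUCCESS"

def migrate(command, source, destination, container):
    """Replay of the MIGRATE / RESTART / SUCCESS exchange."""
    assert command == "MIGRATE"                 # 1. Manager -> Source
    image = source.checkpoint(container)        # 2. Source snapshots the running container
    ack = destination.restart(image)            # 3. Source -> Destination: RESTART + image files
    if ack == "SUCCESS":                        # 4. Destination -> Source: SUCCESS
        del source.containers[container]        # 5. Source stops serving; routing is redirected
    return ack

source, destination = Host("H2", {"iperf-server": "mem+fs snapshot"}), Host("H4")
print(migrate("MIGRATE", source, destination, "iperf-server"), destination.containers)
```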
Our programmable migration framework makes it possible to choose the destination host according to different criteria, so that an administrator can select different policies for different use cases.
IV. Migration Policy Tradeoff and Use Cases
In this section, we explain our migration system on two use cases: Load Balancing and Moving Target Defense. The policies used in each use case will be listed and compared.
A. Use Case 1: Load Balancing
This application triggers migration by monitoring the network traffic. The destination host is selected according to different criteria; we focused on three policies to select the destination (a small selection sketch follows the list):
**Random:** destination host is selected at random.
**Bandwidth-based:** destination host is the host with the maximum available outgoing bandwidth. We define this value as the minimum link capacity of the links in the path.
**Shortest Path:** leveraging Floodlight controller we are able to get the network topology and compute the shortest path for each couple of nodes.
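A minimal sketch of the three destination-selection policies over a small switch topology. The adjacency list, link capacities, and host attachment points are illustrative assumptions; the prototype obtains this information from the Floodlight controller instead.

```python
import random
from collections import deque

# Illustrative topology: per-link capacity in Mbps and the switch each host sits behind.
capacity = {("s1", "s2"): 10, ("s2", "s3"): 5, ("s1", "s4"): 20, ("s4", "s5"): 20}
graph = {}
for (a, b) in capacity:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)
host_switch = {"H3": "s3", "H4": "s5", "H5": "s2"}

def shortest_path(src, dst):
    """Hop-count shortest path between two switches (BFS)."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in graph[u]:
            if v not in seen:
                seen.add(v); prev[v] = u; queue.append(v)
    path, u = [dst], dst
    while u != src:
        u = prev[u]; path.append(u)
    return list(reversed(path))

def bottleneck(path):
    """Available outgoing bandwidth = minimum link capacity along the path."""
    return min(capacity.get((a, b), capacity.get((b, a))) for a, b in zip(path, path[1:]))

def choose_destination(policy, source_switch, free_hosts):
    if policy == "random":
        return random.choice(free_hosts)
    if policy == "shortest_path":
        return min(free_hosts, key=lambda h: len(shortest_path(source_switch, host_switch[h])))
    if policy == "bandwidth":
        return max(free_hosts, key=lambda h: bottleneck(shortest_path(source_switch, host_switch[h])))

print(choose_destination("bandwidth", "s1", ["H3", "H4", "H5"]))   # -> H4 (20 Mbps bottleneck)
```

Only the policy name would change in the configuration file; the rest of the migration pipeline stays the same.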
B. Use Case 2: Moving Target Defense
Moving Target Defense is a paradigm whose idea is to make the attack surface more dynamic. During the setup phase, a (private) key (x, y) and a lookup table are distributed to each host. The lookup table is encrypted with a master secret to protect the migration destination host. This table is a hash table associating each index with a destination host IP address. At the migration stage, our system provides the source host with a random number \( R \) to combine with the key as the input of a hash function:
\[
hash(x) = Hash(R + X * Y)\% (N + 1), \tag{1}
\]
where \( N \) is the number of hosts, \( R \) is the random number, \((X, Y)\) is a key represented as a point, and \( \% \) is the modulo operation. The value obtained from the hash function is the index into the lookup table. Three policies are used to share the secret:
**Shamir:** This policy is inspired by Shamir’s method [15]: a secret is divided into \( K \) parts, and each participant has its own unique part. To get the secret key, a host needs to authenticate with some or all other hosts. The migration source host has to ask \( K \) disjoint hosts for \( K \) different keys to reconstruct the key and decrypt the lookup table. \( K \) is specified in the configuration file.
**Digital Fountain:** The migration source host needs to ask \( K \) hosts, not necessarily disjoint, for \( K \) keys. In our implementation we pick these \( K \) hosts probabilistically, using the following formula:
\[
P(i, k) = \frac{1}{\sum_{j=1}^{K} \frac{\text{latency}(i, k)}{\text{latency}(i, j)}}, \tag{2}
\]
where \( i \) is the source host, \( k \) is the random host, and \( P(i, k) \) is the probability that host \( k \) is selected for asking the key. The host which has a smaller latency has a higher probability of being selected. This means that closer hosts may be contacted multiple times for the key.
**Random**: The destination host is selected by using a pseudo random function. We use this policy as a benchmark.
At the beginning, the Migration Manager distributes to each host a differently encrypted lookup table with the information required by the algorithms, and it also generates a key \((x, y)\) for each host; hence each host holds a part of the information needed to decrypt the lookup table. The Migration Manager then sends a random number to the source host, which applies hash\((x)\); the result is the index \( i \) of the migration destination host. According to the policy specified in the configuration file, different strategies are used for decrypting the table. With Digital Fountain the same host can be contacted many times, since hosts with a shorter path have a higher probability of being chosen. With Shamir, the source host asks \( k \) disjoint hosts for their key pairs in order to decrypt the lookup table. After getting the \( k \) keys, the migration source host applies the algorithm (Digital Fountain or Shamir) to obtain the master secret \( S \). The source host then decrypts the lookup table using \( S \), gets the migration destination host IP, and starts the migration process.
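A minimal sketch of the index computation of Eq. (1) and the latency-weighted host selection of Eq. (2). The SHA-256 hash, the host latencies, and the lookup-table layout are illustrative assumptions; the actual Shamir reconstruction and table encryption are omitted.

```python
import hashlib
import random

N = 5                                                   # number of hosts
key = (3, 7)                                            # this host's key point (X, Y)
lookup = {i: f"10.0.0.{i + 1}" for i in range(N + 1)}   # index -> destination host IP

def destination_index(R, X, Y, n):
    """Eq. (1): hash(x) = Hash(R + X * Y) % (N + 1)."""
    digest = hashlib.sha256(str(R + X * Y).encode()).hexdigest()
    return int(digest, 16) % (n + 1)

def fountain_pick(source, hosts, latency, k):
    """Eq. (2): pick k (not necessarily distinct) key holders, favouring low latency."""
    weights = [1.0 / latency[(source, h)] for h in hosts]   # P(i,k) proportional to 1/latency(i,k)
    return random.choices(hosts, weights=weights, k=k)

R = 42                                                  # random number from the Migration Manager
latency = {("H2", "H3"): 2.0, ("H2", "H4"): 8.0, ("H2", "H5"): 15.0}
index = destination_index(R, *key, N)
holders = fountain_pick("H2", ["H3", "H4", "H5"], latency, k=3)
print("destination:", lookup[index], "| key holders asked:", holders)
```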
V. EXPERIMENTAL VALIDATION
In this section, we test our system in a Mininet testbed, evaluating the two use cases and all the policies. The results are obtained on an Ubuntu machine with an Intel i7-6500U @ 2.50 GHz, 8 GB of RAM, 64-bit.
A. Use Case 1: Three policies evaluation for Load Balancing
**Scenario 1: Link capacity is heterogeneous.** The topology used as the testbed is shown in Figure 3, where the link capacity varies among the links. H1 executes a Docker container running the iperf client, while H2, the source host, executes a Docker container running the iperf server. The migration decision differs according to the chosen policy.
1) **Bandwidth-based policy**: H4 is selected as the destination host with a minimum bandwidth on its path of 10 Mbps.
2) **Shortest path policy**: H3 is selected as the destination host because only 2 switches lie in between.
Fig. 3: Network topology with heterogeneous link capacity
Fig. 4: The graphs display migration source and destination hosts bandwidth consumption collected for bandwidth-based policy.
<table>
<thead>
<tr>
<th></th>
<th>Bandwidth (s)</th>
<th>Shortest Path (s)</th>
<th>Random (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Heterogeneous</td>
<td>9.1 ± 0.15</td>
<td>40.1 ± 0.15</td>
<td>23.6 ± 7.12</td>
</tr>
</tbody>
</table>
TABLE I: Migration time of 3 policies in Load Balancing task for Scenario 1 on Mininet.
3) **Random Policy**: the destination host is randomly selected among the free hosts set: (H3, H4, H5).
Figure 4 shows the bandwidth consumption during the migration process. The bandwidth consumption value is the sum of the sent and received bandwidth for the migration source and destination hosts. In the first period (up to 125 s) the migration process has not started yet, so on the source host (red line) the traffic is due only to the Docker container running the iperf server. After 125 s the traffic on the switch is detected as too high and the migration process starts. During this period, the source host generates not only the iperf traffic but also the traffic for the container migration. As a consequence, the destination host (blue line) starts to receive the migration files, so its bandwidth consumption starts increasing. After the migration process is done (150 s), the source host (red line) no longer runs the iperf server, so there is no more traffic, while the destination host (blue line) starts to run the iperf server.
In order to evaluate the time necessary for the migration process, we ran the procedure 20 times for the three policies, as shown in Table I. The reported time is the sum of the time to make the decision and the time to perform the migration.
Table I shows that the bandwidth policy provides the best trade-off between network load balancing and migration time. The bandwidth policy takes advantage of the larger link bandwidth, so the migration time is much smaller than with the shortest path and random policies. The confidence interval for bandwidth and shortest path is very small because the migration decision is deterministic for both policies: H4 for the bandwidth policy and H3 for shortest path.
TABLE II: Migration time of 3 policies in Load Balancing task for Scenario 2 on Mininet.
<table>
<thead>
<tr>
<th></th>
<th>Bandwidth (s)</th>
<th>Shortest Path (s)</th>
<th>Random (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Homogeneous</td>
<td>18.6 ± 0.91</td>
<td>17.1 ± 0.16</td>
<td>18.7 ± 0.91</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th></th>
<th>Digital Fountain (s)</th>
<th>Shamir (s)</th>
<th>Random (s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Homogeneous</td>
<td>36.6 ± 5.20</td>
<td>40.1 ± 6.60</td>
<td>27.2 ± 3.70</td>
</tr>
</tbody>
</table>
TABLE III: Migration time of 3 policies for Moving Target Defense.
On the other hand, for Random the migration destination is not deterministic, so each run may choose a different destination.
**Scenario 2: Link capacity is homogeneous.** In addition to the topology with heterogeneous capacity, we tested the same topology with all link capacities homogeneous and set to 5 Mbps. In this context the decisions of the three policies are as follows:
1) **Bandwidth-based policy:** the destination host is randomly selected among the free host set: (H3, H4, H5).
2) **Shortest path policy:** H3 is selected as the destination host because only 2 switches lie in between.
3) **Random Policy** the destination host is randomly selected among the free host set: (H3, H4, H5).
In this case, the bandwidth policy has multiple equally good choices, so the migration destination may vary from run to run. Table II highlights that in this case shortest path performs better than bandwidth and random.
**B. Use Case 2: Three policies evaluation for Moving Target Defense**
In addition to the Load Balancing use case, we evaluated the cost of securing the system through Moving Target Defense. We tested the migration time for the three policies mentioned above. Looking at Table III, the Random policy is the fastest, while Shamir has the highest migration time. This happens because with the Shamir policy the source host asks \( k \) disjoint hosts for \( k \) different keys, so distant hosts can be selected. With the Digital Fountain policy the source host asks \( k \) non-disjoint hosts for \( k \) keys and is likely to ask low-latency hosts multiple times, leading to a smaller migration time.
In essence, the Random policy is the fastest, but it does not apply any security mechanism, while Digital Fountain provides the best speed-security trade-off.
**VI. CONCLUSION AND FUTURE PLAN**
In this paper we presented a policy-programmable container migration architecture based on Docker within an SDN prototype. It allows changing the strategy and the algorithm with a simple configuration file. Moreover, we tested two use cases, i.e., Load Balancing and Moving Target Defense, and we applied three different policies to each. The results show that different algorithms provide the best performance in different scenarios. Hence, our policy-programmable LVM migration system guarantees the appropriate flexibility: it can adapt to different application needs by just modifying the configuration.
As future work, we want to improve the system in several aspects. For the software defined measurement, we could integrate the SDN controller with big data and machine learning algorithms so that the migration destination host can be predicted, improving the network management service. In addition, we could scale the testbed further and explore the policy trade-offs in different topologies, such as tree, linear, star, and fully connected.
**ACKNOWLEDGMENTS**
This work has been partially supported by NSF CNS-1647084 and CNS-1836906.
**REFERENCES**
A Lower Bound for the Intersection of Regular Forests
by Dennis M. Volpano
October 1993
Approved for public release; distribution is unlimited.
Prepared for:
Naval Postgraduate School
Monterey, California 93943
Rear Admiral T. A. Mercer
Superintendent
HARRISON SHULL
Provost
This report was prepared with research funded by the Naval Research Laboratory under the Reimbursable Funding.
Reproduction of all or part of this report is authorized.
This report was prepared by:
Dennis M. Volpano
Professor of Computer Science
Reviewed by:
Yutaka Kanayama
Associate Chairman for Technical Research
Released by:
PAUL MARTO
Dean of Research
**Title:** A Lower Bound for the Intersection of Regular Forests
**Personal Author:** Dennis M. Volpano
**Type of Report and Period Covered:** Final, 10/92 to 9/93
**Abstract:**
Regular ΣX-forests continue to play an important role in programming languages, specifically in the design of type systems. They arise naturally as terms of constructor-based, recursive data types in logic and functional languages. Deciding whether the intersection of a sequence of regular ΣX-forests is nonempty is an important problem in type inference. We show that this problem is PSPACE-hard and as a corollary that the problem of constructing a regular ΣX-grammar representing their intersection is PSPACE-hard.
A Lower Bound for the Intersection of Regular Forests
Dennis M. Volpano
Department of Computer Science
Naval Postgraduate School
Monterey, California, USA
email: volpano@cs.nps.navy.mil
October 5, 1993
Abstract
Regular $\Sigma X$-forests continue to play an important role in programming languages, specifically in the design of type systems [MiR85, AM91, Vol93]. They arise naturally as terms of constructor-based, recursive data types in logic and functional languages. Deciding whether the intersection of a sequence of regular $\Sigma X$-forests is nonempty is an important problem in type inference. We show that this problem is PSPACE-hard and as a corollary that the problem of constructing a regular $\Sigma X$-grammar representing their intersection is PSPACE-hard.
1 Introduction
Regular $\Sigma X$-forests are playing an increasingly important role in language design and in particular in the design of type systems. Type inference then usually relies upon various operations over regular forests, one of which is $RF\text{-INT}$, deciding the emptiness of their intersection.
Definition 1.1 The problem $RF\text{-INT}$ is given a sequence of regular $\Sigma X$-grammars $G_1, \ldots, G_m$, decide whether $\bigcap_{k=1}^m T(G_k)$ is nonempty.
Regular forests have been used to characterize the types of logic and functional programs [Mis84, MiR85, HeJ90, AM91] as well as overloading introduced through classes in Haskell [Kae88, Vol93]. For example, Heintze and Jaffar propose what amounts to regular $\Sigma X$-grammars as inferred "types" or approximations of the semantics of logic programs. Corresponding to a logic program, say
\[
\begin{align*}
p(a). \\
p(f(X)) & \leftarrow p(X). \\
r(b). \\
r(f(Y)) & \leftarrow r(Y). \\
q(Z) & \leftarrow p(Z), r(Z).
\end{align*}
\]
is a set of equations
\[
\begin{align*}
X &= a \cup f(X) \\
Y &= b \cup f(Y) \\
Z &= X \cap Y
\end{align*}
\]
whose simultaneous least fixed point is an approximate meaning of the program. The inferred approximation or "type" is given by
\[
\begin{align*}
X &= a \cup f(X) \\
Y &= b \cup f(Y) \\
Z &= \emptyset
\end{align*}
\]
Solving for variable $Z$ requires deciding whether the intersection of the two regular forests described by the first two equations is nonempty.
One can also view the logic program above as describing a set of valid overloadings in Haskell for $p$ and $r$ as operators, where $p$ has instances at types $a$ and $f$, and $r$ at $b$ and $f$:
\[
\begin{align*}
&\text{class } P\ \alpha \text{ where } p :: \alpha \\
&\text{instance } P\ a \text{ where } p = \ldots \\
&\text{instance } P\ X \Rightarrow P\ (f\ X) \text{ where } p = \ldots \\
&\text{class } R\ \alpha \text{ where } r :: \alpha \\
&\text{instance } R\ b \text{ where } r = \ldots \\
&\text{instance } R\ Y \Rightarrow R\ (f\ Y) \text{ where } r = \ldots
\end{align*}
\]
Instance declarations for an overloaded operator in Haskell describe a regular forest. So, for example, deciding whether the term $p = r$ is typable requires deciding whether the regular forest arising from $p$'s instance declarations intersects with the forest described by the instances for $r$.
2 Forests and Regular $\Sigma X$-grammars
Given an alphabet $A$, an $A$-valued tree $t$ is specified by its set of nodes (the “domain” $\text{dom}(t)$) and a valuation of the nodes in $A$. Formally, a $k$-ary, $A$-valued tree is a map $t : \text{dom}(t) \rightarrow A$ where $\text{dom}(t) \subseteq \{0, \ldots, k - 1\}^*$ is a nonempty set, closed under prefixes. The frontier of $t$ is the set
\{ w \in \text{dom}(t) \mid \neg\exists i.\ wi \in \text{dom}(t) \}.
It is assumed that $A$ is partitioned into a ranked alphabet $\Sigma$ and a frontier alphabet $X$. A ranked alphabet, or signature, is a finite nonempty operator domain. For any $\Sigma$ and $X$, we denote the set of all finite $\Sigma X$-trees by $F_{\Sigma}(X)$. A forest, or tree language, $T \subseteq F_{\Sigma}(X)$ is called regular if and only if, for some finite set $C$ disjoint from $\Sigma$ and $X$, $T$ can be obtained from finite subsets of $F_{\Sigma}(X \cup C)$ by applications of union, concatenation "$\cdot_c$", and closure "$*_c$", where $c \in C$ [Tho90].
A regular forest can alternatively be defined as a tree language generated by a regular $\Sigma X$-grammar [GeS84].
**Definition 2.1** A regular $\Sigma X$-grammar $G$ consists of
- a finite nonempty set $N$ of nonterminal symbols,
- a finite set $P$ of productions of the form $A \rightarrow r$ where $A \in N$ and $r \in F_{\Sigma}(N \cup X)$, and
- an initial symbol $S \in N$.
**Definition 2.2** If $G = (N, \Sigma, X, P, S)$ is a regular $\Sigma X$-grammar then the $\Sigma X$-forest generated by $G$ is
$T(G) = \{ t \in F_{\Sigma}(X) \mid S \Rightarrow^*_G t \}$
Regular $\Sigma X$-grammars are a class of context-free grammars that define the same family of forests as those recognized by nondeterministic root-to-frontier (NDR) $\Sigma X$-automata. A root-to-frontier automaton can be viewed
as an attribute evaluator for a tree whose attributes are states prescribed
by an attribute grammar with inherited attributes only. Formally, a NDR
$\Sigma X$-automaton $A$ is a tuple $(A, A', \alpha)$ such that
1. $A$ is a finite NDR $\Sigma$-algebra $(A, \Sigma)$,
2. $A' \subseteq A$ is a set of initial states, and
3. $\alpha : X \rightarrow \wp A$ is a final assignment.
In a NDR $\Sigma$-algebra $(A, \Sigma)$, $A$ is a nonempty set of states and every
$\sigma \in \Sigma_m$ with $m \geq 1$ is realized as a mapping $\sigma^A : A \rightarrow \wp(A^m)$. For $\sigma \in \Sigma_0$, $\sigma^A$ is a subset of $A$.
For example, a NDR $\Sigma X$-automaton $A = (A, A', \alpha)$ recognizing set
\[
\{\sigma(x, y), \sigma(y, x)\}
\]
can be defined as follows. Let $\Sigma = \Sigma_2 = \{\sigma\}$, $X = \{x, y\}$, and the set of
initial states $A' = \{S\}$. Define $A = (\{\hat{x}, \hat{y}, S\}, \Sigma)$ such that
\[
\sigma^A(S) = \{(\hat{x}, \hat{y}), (\hat{y}, \hat{x})\}
\]
and finally define the final assignment $\alpha$ as
\[
\begin{align*}
x\alpha &= \{\hat{y}\} \\
y\alpha &= \{\hat{x}\}
\end{align*}
\]
It is interesting to note that there is no deterministic root-to-frontier $\Sigma X$-automaton that accepts the set above. Suppose automaton $A$ accepts $\sigma(x, y)$ and $\sigma(y, x)$ and that $\sigma^A(a) = (a_1, a_2)$ for some states $a$, $a_1$, and $a_2$ of $A$. If $\alpha$ is $A$'s final assignment function, then
\[
x\alpha = a_1, \quad y\alpha = a_2, \quad y\alpha = a_1, \quad x\alpha = a_2.
\]
Since $A$ is deterministic, $a_1 = a_2$. So we have $\sigma^A(a) = (a_1, a_1)$ where $x\alpha = y\alpha = a_1$. Therefore on $\sigma(x, x)$ and $\sigma(y, y)$, $A$ enters the leaves in state $a_1$ such that $a_1 \in x\alpha$ and $a_1 \in y\alpha$. Thus $A$ accepts $\sigma(x, x)$ and $\sigma(y, y)$ as well.
Given that regular $\Sigma X$-grammars define exactly the forests recognized by NDR $\Sigma X$-automata, one could formulate $RF\text{-}INT$ in terms of the latter representation of regular forests. But we choose regular $\Sigma X$-grammars instead since they are better suited for manipulation.
Regular forests are effectively closed under intersection.
Theorem 2.1 If $G_1$ and $G_2$ are regular $\Sigma X$-grammars, for a given $\Sigma$ and $X$, then $T(G_1) \cap T(G_2)$ is a forest generated by a regular $\Sigma X$-grammar.
Proof. Suppose $G_1 = (N_1, \Sigma, X, P_1, S_1)$ and $G_2 = (N_2, \Sigma, X, P_2, S_2)$ are regular $\Sigma X$-grammars. Let $\Sigma X$-grammar $G = (N_1 \times N_2, \Sigma, X, P, [S_1, S_2])$ where
\[
[A, B] \rightarrow a([Y_1, Z_1], \ldots, [Y_n, Z_n]) \in P, \quad \text{for } n \geq 0
\]
if and only if
\[
A \rightarrow a(Y_1, \ldots, Y_n) \in P_1,
B \rightarrow a(Z_1, \ldots, Z_n) \in P_2,
\]
and $a \in \Sigma$; and $[A, B] \rightarrow a \in P$ if and only if $A \rightarrow a \in P_1$, $B \rightarrow a \in P_2$, and $a \in X$. Then $T(G) = T(G_1) \cap T(G_2)$. \hfill $\square$
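A small sketch of this product construction for regular $\Sigma X$-grammars whose right-hand sides consist of one symbol applied to nonterminals. The encoding of productions as Python dictionaries and the emptiness remark in the final comment are illustrative assumptions, not part of the paper.

```python
from itertools import product

def intersect(g1, g2, start1, start2):
    """Product grammar with T(G) = T(G1) ∩ T(G2).

    A production A -> a(Y1,...,Yn) is encoded as ("a", ("Y1", ..., "Yn")) in g[A];
    nullary and frontier productions simply have an empty child tuple.
    """
    prods, todo, seen = {}, [(start1, start2)], set()
    while todo:
        A, B = todo.pop()
        if (A, B) in seen:
            continue
        seen.add((A, B))
        rules = []
        for (a, ys), (b, zs) in product(g1[A], g2[B]):
            if a == b and len(ys) == len(zs):
                kids = tuple(zip(ys, zs))       # pair up nonterminals [Yk, Zk]
                rules.append((a, kids))
                todo.extend(kids)
        prods[(A, B)] = rules
    return prods, (start1, start2)

# X = a ∪ f(X) and Y = b ∪ f(Y) from the earlier example: their intersection is empty.
g1 = {"X": [("a", ()), ("f", ("X",))]}
g2 = {"Y": [("b", ()), ("f", ("Y",))]}
prods, start = intersect(g1, g2, "X", "Y")
print(prods)   # ('X','Y') only rewrites to f([X,Y]); no terminating rule, so T(G) is empty
```

For a sequence of $m$ grammars the same idea enumerates reachable $m$-tuples of nonterminals, which is the source of the exponential cost in $m$ discussed below.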
The theorem implies that the family of regular forests is properly contained within the context-free languages since the latter is not closed under intersection.
We now state and prove the main result.
Theorem 2.2 RF-INT is PSPACE-hard.
Proof. The proof uses a result of [Koz77]. For every deterministic Turing machine $M$ of polynomial space complexity, we give a log-space transducer that on input $x$, outputs a sequence of regular $\Sigma X$-grammars whose intersection is nonempty iff $M$ accepts $x$.
Let $M$ be a single tape DTM of polynomial space complexity $p(n) \geq n$ and assume that $M$ always makes an odd number of moves (at least three), has a unique accepting state $q_{acc}$, and erases its tape before accepting, positioning its tape head at the left end of the tape. Let $x = a_1 \ldots a_n$ be a string over $M$'s input alphabet and suppose $M$ has states $Q$ and tape symbols $\Gamma$ such that $Q$, $\Gamma$, and the set $\{\text{nil}, \#, \#\#\#\}$ are pairwise disjoint. If
\[
\Delta = \Gamma \cup \{[qX] \mid q \in Q \& X \in \Gamma\}
\]
then the ranked alphabet is $\Sigma = \Sigma_0 \cup \Sigma_1 \cup \Sigma_2 \cup \Sigma_3$ where $\Sigma_0 = \{\text{nil}\}$, $\Sigma_1 = \Delta$, $\Sigma_2 = \{\#\#\#\}$ and $\Sigma_3 = \{\#\}$. Suppose $\text{ID}_\Delta$ derives the regular forest
\[
Z_1 (Z_2 ( \cdots Z_{p(n)} (\text{nil}) \cdots ))
\]
for all $Z_k \in \Delta$, $1 \leq k \leq p(n)$, and $\text{ID}_i^{X_1X_2X_3}$ derives the regular forest
$$Z_1 (\cdots Z_{i-1} (X_1 (X_2 (X_3 (Z_i (\cdots Z_{p(n)-3} (\text{nil}) \cdots ))))) \cdots )$$
for all $X_1, X_2, X_3, Z_k \in \Delta$, $1 \leq k \leq p(n) - 3$.
A computation of $M$ consists of a sequence of instantaneous descriptions $\text{ID}_0 \vdash \text{ID}_1 \vdash \cdots \vdash \text{ID}_{2m+1}$, each containing the contents of $M$'s tape padded with blanks ($\text{B}$'s) to length $p(n)$. If according to a move of $M$, symbols $Y_1Y_2Y_3$ in positions $i, i+1, \text{and } i+2$ respectively of an $\text{ID}$ can follow from symbols $X_1X_2X_3$ in the same positions of another $\text{ID}$, we write
$$\text{ID}_i^{X_1X_2X_3} \vdash_M \text{ID}_i^{Y_1Y_2Y_3}$$
We give two regular $\Sigma X$-grammars $F_i^{\text{odd}}$ and $F_i^{\text{even}}$ such that $F_i^{\text{odd}}$ ensures that even $\text{ID}$'s follow from odd ones, and $F_i^{\text{even}}$ that odd ones follow from even ones. Let $F_i^{\text{odd}}$ be a regular $\Sigma X$-grammar with empty frontier alphabet, start symbol $S$ and productions
$$S \rightarrow \#(\text{ID}_\Delta,\ \text{ID}_i^{Z_1Z_2Z_3},\ F_i^{Z_1Z_2Z_3})$$
for all $Z_k \in \Delta$, $1 \leq k \leq 3$,
$$F_i^{X_1X_2X_3} \rightarrow \#(\text{ID}_i^{Y_1Y_2Y_3},\ \text{ID}_i^{Z_1Z_2Z_3},\ F_i^{Z_1Z_2Z_3})$$
for all $X_k, Y_k, Z_k \in \Delta$, $1 \leq k \leq 3$, such that $\text{ID}_i^{X_1X_2X_3} \vdash_M \text{ID}_i^{Y_1Y_2Y_3}$, and
$$F_i^{X_1X_2X_3} \rightarrow \#\#\#(\text{ID}_i^{Y_1Y_2Y_3},\ \text{ID}_\Delta)$$
for all $X_k, Y_k \in \Delta$, $1 \leq k \leq 3$, such that $\text{ID}_i^{X_1X_2X_3} \vdash_M \text{ID}_i^{Y_1Y_2Y_3}$.
Let $F_i^{\text{even}}$ be a regular $\Sigma X$-grammar with empty frontier alphabet, start symbol $S$ and productions
$$S \rightarrow \#(\text{ID}_i^{X_1X_2X_3},\ \text{ID}_i^{Y_1Y_2Y_3},\ S)$$
$$S \rightarrow \#\#\#(\text{ID}_i^{X_1X_2X_3},\ \text{ID}_i^{Y_1Y_2Y_3})$$
for all $X_k, Y_k \in \Delta$, $1 \leq k \leq 3$, such that $\text{ID}_i^{X_1X_2X_3} \vdash_M \text{ID}_i^{Y_1Y_2Y_3}$.
Finally, suppose $\text{initID}$ derives the unary tree
$$[q_0 a_1](a_2(\cdots a_n(B_{n+1}(\cdots B_{p(n)}(\text{nil}) \cdots ))\cdots ))$$
where $B_k$ is a blank and $q_0$ is the start state of $M$, and finalID derives
$$[q_{acc}B](B_2(\cdots B_{p(n)}(\text{nil})\cdots ))$$
Then let $F_{end}$ be a regular grammar with start symbol $S$ and productions
$$S \rightarrow \#(\text{initID},\ \text{ID}_\Delta,\ F_{acc})$$
$$F_{acc} \rightarrow \#(\text{ID}_\Delta,\ \text{ID}_\Delta,\ F_{acc})$$
$$F_{acc} \rightarrow \#\#\#(\text{ID}_\Delta,\ \text{finalID})$$
Then we have
$$u \in \bigcap_{i=1}^{p(n)-2} T(F_i^{\text{odd}})$$
iff $u = \#(ID_0, ID_1, \#(\cdots \#(ID_{2m-2}, ID_{2m-1}, \#\#\#(ID_{2m}, ID_{2m+1}))\cdots ))$ and $ID_{2k}$ follows from $ID_{2k-1}$ according to the transition rules of $M$ for $1 \leq k \leq m$. Likewise,
$$u \in \bigcap_{i=1}^{p(n)-2} T(F_i^{\text{even}})$$
iff $u = \#(ID_0, ID_1, \#(\cdots \#(ID_{2m-2}, ID_{2m-1}, \#\#\#(ID_{2m}, ID_{2m+1}))\cdots ))$ and $ID_{2k+1}$ follows from $ID_{2k}$ according to the rules of $M$ for $0 \leq k \leq m$. Then
$$T(F_{end}) \cap \bigcap_{i=1}^{p(n)-2} \left( T(F_i^{\text{odd}}) \cap T(F_i^{\text{even}}) \right)$$
is nonempty iff $M$ accepts $x$. $\square$
As is the case for emptiness of intersection of a sequence of DFA's, the
source for the hardness of $RF-\text{INT}$ lies not in deciding emptiness but rather
in computing the intersection of regular forests.
**Corollary 2.3** Given regular $\Sigma X$-grammars $G_1, \ldots, G_m$, constructing a regular $\Sigma X$-grammar $G$ such that $T(G) = \bigcap_{i=1}^{m} T(G_i)$ is PSPACE-hard.
**Proof.** The emptiness of $T(G)$ for a regular $\Sigma X$-grammar $G$ is decidable in time $O(|G|^2)$ in the usual way. From the proof of Theorem 2.2, every problem in PSPACE is therefore P-time Turing reducible to the problem of constructing the intersection of a sequence of regular $\Sigma X$-grammars. $\square$
A simple algorithm for constructing $G$ is based on the usual construction of forming the cartesian product of reachable states as is suggested in the proof of Theorem 2.1 [AiM91]. It has worst-case time complexity exponential in $m$. Unfortunately this naive construction is likely the best we can do. It should be pointed out that for a fixed $m$, constructing $G$ from $G_1, \ldots, G_m$ can be done in polynomial time.
Deciding whether some number of DFA's accept a common string can be done in nondeterministic linear space, but this does not appear to be true for RF-INT, which can be decided in deterministic exponential time. This suggests that a tighter lower bound exists for RF-INT.
References
Distribution List
Defense Technical Information Center
Cameron Station
Alexandria, VA 22314
Library, Code 52
Naval Postgraduate School
Monterey, CA 93943
Director of Research Administration
Code 08
Naval Postgraduate School
Monterey, CA 93943
Dr. Neil C. Rowe, Code CSRp
Naval Postgraduate School
Computer Science Department
Monterey, CA 93943-5118
Prof. Robert B. McGhee, Code CSMz
Naval Postgraduate School
Computer Science Department
Monterey, CA 93943-5118
Dr. Ralph Wachter
Software Program
Office of Naval Research
800 N. Quincy St.
Arlington VA 22217-5000
Dr. Dennis Volpano, Code CSVo
Naval Postgraduate School
Computer Science Dept.
Monterey, CA 93943-5118
Performance Analysis of Computer Systems
Monitoring Techniques
Holger Brunst (holger.brunst@tu-dresden.de)
Matthias S. Mueller (matthias.mueller@tu-dresden.de)
Summary of Previous Lecture
Stream Benchmark for Memory Bandwidth
- Author: John McCalpin (“Mr Bandwidth“)
STREAM: measures memory bandwidth with the following operations (a minimal Triad measurement sketch follows below):
- Copy: \( a(i) = b(i) \)
- Scale: \( a(i) = s * b(i) \)
- Add: \( a(i) = b(i) + c(i) \)
- Triad: \( a(i) = b(i) + s * c(i) \)
STREAM2: measures memory hierarchy bandwidth with the operations:
- Fill: \( a(i) = 0 \)
- Copy: \( a(i) = b(i) \)
- Daxpy: \( a(i) = a(i) + q * b(i) \)
- Sum: \( \text{sum} += a(i) \)
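A minimal sketch of a STREAM-style Triad measurement using NumPy. The array length, repetition count, and the 24 bytes/iteration accounting (read b, read c, write a) are assumptions for illustration and ignore write-allocate traffic; this is not the official benchmark.

```python
import time
import numpy as np

n = 20_000_000                        # array length chosen to exceed the caches
b, c = np.random.rand(n), np.random.rand(n)
a = np.zeros(n)
s = 3.0

best = float("inf")
for _ in range(10):                   # report the best of several repetitions
    t0 = time.perf_counter()
    a[:] = b + s * c                  # Triad: a(i) = b(i) + s * c(i)
    best = min(best, time.perf_counter() - t0)

bytes_per_iter = 24                   # 8 B read from b, 8 B read from c, 8 B written to a
print(f"Triad bandwidth ~ {n * bytes_per_iter / best / 1e9:.2f} GB/s")
```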
## Stream 2 properties
<table>
<thead>
<tr>
<th>Kernel</th>
<th>Code</th>
<th>Bytes/iter read</th>
<th>Bytes/iter written</th>
<th>FLOPS/iter</th>
</tr>
</thead>
<tbody>
<tr>
<td>FILL</td>
<td>$a(i) = q$</td>
<td>0 (+8)</td>
<td>8</td>
<td>0</td>
</tr>
<tr>
<td>COPY</td>
<td>$a(i) = b(i)$</td>
<td>8 (+8)</td>
<td>8</td>
<td>0</td>
</tr>
<tr>
<td>DAXPY</td>
<td>$a(i) = a(i) + q*b(i)$</td>
<td>16</td>
<td>8</td>
<td>2</td>
</tr>
<tr>
<td>SUM</td>
<td>sum = sum + a(i)</td>
<td>8</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
Stream 2 Results
[Plot: measured bandwidth of \( a(i) = b(i) + \alpha\, c(i) \) versus \( \log_2(\text{loop length}) \) for NEC_Azusa_Intel_Itanium_azusa_efc and Pentium4_1400MHz_loan1_ifc.]
What is LINPACK NxN
- LINPACK NxN benchmark
- Solves system of linear equations by some method
- Allows the vendors to choose size of problem for benchmark
- Measures execution time for each size problem
- LINPACK NxN report
- Nmax – the size of the chosen problem run on a machine
- Rmax – the performance in Gflop/s for the chosen size problem run on the machine
- N1/2 – the size where half the Rmax execution rate is achieved
- Rpeak – the theoretical peak performance Gflop/s for the machine
- LINPACK NxN is used to rank TOP500 fastest computers in the world
HPCS Performance Targets
- HPCC was developed by HPCS to assist in testing new HEC systems.
- Each benchmark focuses on a different part of the memory hierarchy.
- HPCS performance targets attempt to:
- Flatten the memory hierarchy
- Improve real application performance
- Make programming easier
**HPC Challenge Performance Targets**
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Max</th>
<th>Relative</th>
</tr>
</thead>
<tbody>
<tr>
<td>HPL</td>
<td>2 Pflop/s</td>
<td>8x</td>
</tr>
<tr>
<td>STREAM</td>
<td>6.5 Pbyte/s</td>
<td>40x</td>
</tr>
<tr>
<td>FFT</td>
<td>0.5 Pflop/s</td>
<td>200x</td>
</tr>
<tr>
<td>RandomAccess</td>
<td>64000 GUPS</td>
<td>2000x</td>
</tr>
</tbody>
</table>
**Examples of Benchmarks**
- **HPL**: linear system solve
\[ Ax = b \]
- **STREAM**: vector operations
\[ A = B + s \cdot C \]
- **FFT**: 1D Fast Fourier Transform
\[ Z = \text{fft}(X) \]
- **RandomAccess**: integer update
\[ T[i] = \text{XOR}(T[i], \text{rand}) \]
HPC Challenge Benchmark
- Consists of basically 7 benchmarks;
- Think of it as a framework or harness for adding benchmarks of interest.
- HPL (LINPACK) — MPI Global \((Ax = b)\)
- STREAM — Local; single CPU
* STREAM — Embarrassingly parallel
- PTRANS \((A \leftarrow A + B^T)\) — MPI Global
- RandomAccess — Local; single CPU
* RandomAccess — Embarrassingly parallel
- RandomAccess — MPI Global
- BW and Latency — MPI
- FFT - Global, single CPU, and EP
- Matrix Multiply — single CPU and EP
Tests on Single Processor and System
- **Local** - only a single processor is performing computations.
- **Embarrassingly Parallel** - each processor in the entire system is performing computations but they do not communicate with each other explicitly.
- **Global** - all processors in the system are performing computations and they explicitly communicate with each other.
Computational Resources and HPC Challenge
- Computational resources measured:
  - CPU computational speed (HPL, Matrix Multiply)
  - Memory bandwidth (STREAM)
  - Node interconnect bandwidth (Random & Natural Ring Bandwidth & Latency)
Memory Access Patterns
Benchmarks, plotted by spatial vs. temporal locality:
- **STREAM (EP & SP)**
- **PTRANS (G)**
- **HPL Linpack (G)**
- **Matrix Mult (EP & SP)**
**Applications:**
- **Computational Fluid Dynamics**
- **Radar Cross Section**
- **Traveling Sales Person**
- **Digital Signal Processing**
- **Zoom-FFT Algorithm**
**Figure 1.** Concept of Radar Cross Section
[Diagram axes: spatial locality vs. temporal locality.]
## Condensed Results - Base Runs Only - 106 Systems - Generated on Mon Jun 26 09:17:02 2006
[Table excerpt from the HPC Challenge condensed results, listing per-system benchmark values for Cray X1/X1E, Cray T3E, Cray XT3, and various AMD Opteron clusters, with processor counts and clock rates; the individual column values could not be reliably recovered from the extraction.]
HPC Challenge Benchmark
Benchmark results are normalized so that the highest performance has a value of 1
- **PP-HPL**
- **PP-PTRANS**
- **PP-RandomAccess**
- **SN-STREAM Triad**
- **RandomRing Latency**
- **RandomRing Bandwidth**
**Legend:**
- Gray XL 1: Fujitsu-Siemens - 32 procs - 0.8 GHz
- 1 thread/MPI process (32) - Cray modified 2-D Torus - 11-22-2004
- NEC SX-6 32 procs - 0.5 GHz
- 1 thread/MPI process (32) - InfiniBand Crossbar Switch - 11-04-2004
- SGI Altix 3700 8x2 Intel Itanium 2 32 procs - 1.6 GHz
- 1 thread/MPI process (32) N/A 03-15-2005
- Dell PowerEdge 2650 Cluster Intel Xeon 32 procs - 2.4 GHz
- 1 thread/MPI process (32) Gigabit Ethernet, PowerConnect 5224 switch - 02-18-2005
Differences in the benchmark results between computers, even of the same model, can be a result of the number of processors used, the number of threads used, the processor interconnect, the amount of memory allocated for the run, the version of the BLAS and MPI, and other factors. A complete listing of the environment for each benchmark run can be found at: [http://icl.cs.utk.edu/mpc/export/mpc,xhtml](http://icl.cs.utk.edu/mpc/export/mpc,xhtml)
Monitoring Techniques
When the only tool you have is a hammer,
every problem begins to resemble a nail.
Abraham Maslow
Outline
- Introduction
- General: Terminology and classification
- Trigger Mechanisms
- Interval Timers
- Program Execution Monitors
- Instrumentation
- Tools and their Formats
**Introduction**
- A monitor is a tool to observe the activities on a system
- Reasons to monitor a system:
- System programmer:
- Find frequently used segments of a program and optimize their performance
- System manager:
- Measure resource utilization and find performance bottleneck(s)
- Tune the system by adjusting system parameters
- System Analyst:
- Use monitor data to characterize the workload for capacity planning
- Find model parameters, validate models, and find model inputs
General: Monitor Terminology
- **Event**: A change in the system’s state
- Measuring any kind of values typically based around the idea of events
- Examples: Memory reference, disk access, network communication operation, change in a processor’s internal state, pattern or a combination of other sub-events
- **Trace**: A log of timed events including event type and important parameters
- **Overhead**: The perturbation to the system induced by the monitor
- **Domain**: The set of observable activities forms the domain of a monitor
- **Input Rate**: Maximum frequency of events a monitor can correctly observe
- **Resolution**: The coarseness of the information observed
- **Input Width**: The size of information recorded per event
General: Monitor Classification
- System level at which monitor is implemented:
- Software monitor
- Hardware monitor
- Firmware monitor
- Hybrid monitor
- Trigger mechanisms:
- Event-driven
- Timer-driven
- Recording:
- Profiling
- Tracing
- Displaying ability:
- On-line
- Batch/Post mortem
Trigger Mechanisms: Event-driven
- Measure performance whenever the pre-selected event occurs
- Simplest type: event counter
- Perturbation to the system might be small if the event occurs infrequently
- High-frequency events: great deal of overhead may be introduced
- Can significantly alter program behavior
- Inter-event time can be highly variable and completely unpredictable
- Perturbation assessment not easy
Trigger Mechanisms: Timer-driven
- Measures at fixed time intervals the portion of the system state
- Overhead due to this strategy is independent of the number of times a specific event occurs
- Is instead function of the sampling frequency
- Determined by the resolution necessary
- Not every occurrence of the events will be measured
- Sampling produces statistical view on the overall behavior of the system
- Events that occur infrequently may be completely missed
- Each run of a sampling-experiment is likely to produce a different result
- Exact behavior may differ, statistical behavior should remain approximately the same
Interval Timers
- Most fundamental measuring tool in computer-system performance analysis
- Used to measure execution time
- Can provide time basis for sampling measurement tool
- Are based on the idea of counting the number of clock pulses
- Hardware timers:
- Counter typically initialized with power up
- Difference gives time interval
- Software timers:
- Software interrupt based interval timer
- Hardware counter used to initiate interrupt
- Value is a count of the number of interrupts
- Time rollover:
- Important is the number of bits available for counting
### Interval Timers: Rollover Time
<table>
<thead>
<tr>
<th>Resolution</th>
<th>16</th>
<th>24</th>
<th>32</th>
<th>48</th>
<th>64</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1,00E-09</td>
<td>6,5536E-05</td>
<td>1,6777E-02</td>
<td>4,2950E+00</td>
<td>2,8147E+05</td>
<td>1,8447E+10</td>
</tr>
<tr>
<td>1,00E-08</td>
<td>6,5536E-04</td>
<td>1,6777E-01</td>
<td>4,2950E+01</td>
<td>2,8147E+06</td>
<td>1,8447E+11</td>
</tr>
<tr>
<td>1,00E-07</td>
<td>6,5536E-03</td>
<td>1,6777E+00</td>
<td>4,2950E+02</td>
<td>2,8147E+07</td>
<td>1,8447E+12</td>
</tr>
<tr>
<td>1,00E-06</td>
<td>6,5536E-02</td>
<td>1,6777E+01</td>
<td>4,2950E+03</td>
<td>2,8147E+08</td>
<td>1,8447E+13</td>
</tr>
<tr>
<td>1,00E-05</td>
<td>6,5536E-01</td>
<td>1,6777E+02</td>
<td>4,2950E+04</td>
<td>2,8147E+09</td>
<td>1,8447E+14</td>
</tr>
<tr>
<td>1,00E-04</td>
<td>6,5536E+00</td>
<td>1,6777E+03</td>
<td>4,2950E+05</td>
<td>2,8147E+10</td>
<td>1,8447E+15</td>
</tr>
</tbody>
</table>
RTC: 4 seconds
Often bad: gettimeofday or MPI_Wtime (a rollover computation sketch follows)
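The rollover times in the table above follow directly from the counter width and the tick resolution; a small sketch of that computation (seconds until the counter wraps), assuming a free-running counter:

```python
# Rollover time = 2**bits * resolution (seconds until a free-running counter wraps).
for resolution in (1e-9, 1e-6):                    # 1 ns and 1 us ticks
    for bits in (16, 32, 64):
        rollover = 2 ** bits * resolution
        print(f"{bits:2d}-bit counter @ {resolution:g} s tick -> wraps after {rollover:.4g} s")
```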
Interval Timers: Overhead
- Time overhead
- x_start = read_timer()
- <event being timed>
- x_end = read_timer()
- elapsed_time = (x_end - x_start) * t_cycle
Time we measure includes more than the time required for the event
- Timer may require operating system call
- If the interval being measured is substantially larger than time overhead: no problem
- Alternatively, overhead can also be subtracted
- But: overhead subtraction is impossible for concurrent processes that depend on each other (a sketch of overhead measurement and subtraction follows)
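A minimal sketch of estimating the timer overhead and subtracting it from a measurement. `time.perf_counter` stands in for `read_timer`, and the timed event is a placeholder function; both are assumptions for illustration.

```python
import time

def read_timer():
    return time.perf_counter()          # stand-in for a hardware/OS interval timer

def event():
    sum(range(200_000))                 # placeholder for the event being timed

# Estimate the per-call timer overhead by timing back-to-back timer reads.
reps = 10_000
t0 = read_timer()
for _ in range(reps):
    read_timer()
overhead = (read_timer() - t0) / reps

x_start = read_timer()
event()
x_end = read_timer()
print("elapsed (timer overhead subtracted):", (x_end - x_start) - overhead)
```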
Interval Timers: Measuring Short Intervals
- Based on quantization effects, we cannot directly measure events whose durations are less than the resolution of the timer.
- We can however make many measurements of a short duration event to obtain a statistical estimate of the event's duration.
- Problem: events need to take place within the timer's resolution from time to time.
- Problem 2: only average values are obtained (a repeated-measurement sketch follows).
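A sketch of the repeated-measurement idea: many executions of a short event are timed together and only the average duration per event is reported. The event body and repetition count are placeholders.

```python
import time

def short_event():
    x = 1 + 1                           # far shorter than the timer resolution

reps = 1_000_000
t0 = time.perf_counter()
for _ in range(reps):
    short_event()
avg = (time.perf_counter() - t0) / reps
print(f"average duration ~ {avg * 1e9:.1f} ns per event (loop overhead included)")
```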
Use of inaccurately synchronized timers results in an erroneous representation of the program trace data:
- Q1 Qualitative error: Violation of the logical order of distributed events.
- Q2 Quantitative error: Distorted time measurement of distributed activities. Leads to skewed performance values.
Synchronization of Multiple Timers: Standard
- **Hardware synchronization:**
- Tight synchronization, cost-intensive, not portable
- **Software synchronization:**
- Asymmetric:
- no load-balancing, reference timer can be false ticker and is bottleneck
- Symmetric: needs $O(n^2)$ messages, not scalable to thousands of processes
- Controlled logical clock: corrects violation of logical order (Q1), no correction of skewed performance values (Q2)
Synchronization of Multiple Timers: Goals
Need for a novel timer synchronization with respect to the requirements of parallel event tracing:
- Load-balanced, low synchronization overhead,
- Portable, scalable and robust synchronization algorithm,
- Restore the relationship of concurrent events,
- Accurate mapping of the event flow for an enhanced performance analysis.
Synchronization of Multiple Timers: Solution
- Two parts of the synchronization scheme:
- Recording synchronization information during runtime
- Subsequent correction, i.e. Transformation of asynchronous local time stamps to synchronous global time stamps with a linear interpolation
- Due to small fluctuations in the timer drift the synchronization error will be accumulated over long intervals
- Linear begin-to-end correction insufficient for long trace runs
- Synchronize the timers frequently and piecewise interpolate the timer parameters between the synchronization phases (a minimal interpolation sketch follows)
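A minimal sketch of the piecewise-linear correction: pairs of (local, reference) timestamps recorded at the synchronization phases define segments, and every local event timestamp is mapped onto the reference clock by interpolating within its segment. The sync-point values are illustrative.

```python
from bisect import bisect_right

# (local_time, reference_time) pairs recorded at each synchronization phase.
sync_points = [(0.0, 10.0), (100.0, 110.5), (200.0, 210.9)]
locals_ = [p[0] for p in sync_points]

def to_global(t_local):
    """Map a local timestamp onto the reference clock, piecewise linearly."""
    i = min(max(bisect_right(locals_, t_local) - 1, 0), len(sync_points) - 2)
    (l0, g0), (l1, g1) = sync_points[i], sync_points[i + 1]
    return g0 + (t_local - l0) * (g1 - g0) / (l1 - l0)

print(to_global(50.0), to_global(150.0))   # 60.25 and 160.7
```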
Program Monitors: PC Sampling
- General statistical measurement technique in which a subset (i.e. a sample) of the members of a population being examined is selected at random
- Information of interest is gathered from the subset of the total population
- Assumption: Since the samples were chosen completely at random, the characteristics of the overall population will approximately follow the same proportion as do the characteristics of the subset actually measured
- Profile: Samples are taken at fixed times
- Interrupt service routine examines the return address stack to find the address of the instruction/function that was executing (a minimal sampling sketch follows)
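A toy sketch of timer-driven PC sampling on a Unix system: a SIGPROF interval timer fires periodically and the handler records the name of the function executing at that moment. This is only an illustration of the idea, not a real sampling profiler.

```python
import signal
from collections import Counter

samples = Counter()

def on_sample(signum, frame):
    samples[frame.f_code.co_name] += 1          # record where the program counter was

signal.signal(signal.SIGPROF, on_sample)
signal.setitimer(signal.ITIMER_PROF, 0.001, 0.001)   # sample every ~1 ms of CPU time

def busy():
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

busy()
signal.setitimer(signal.ITIMER_PROF, 0)              # stop sampling
print(samples.most_common())                          # histogram of sampled functions
```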
Program Monitors: PC Sampling
[Diagram: samples taken at fixed points in time across the timelines of Task 1, Task 2, and Task 3.]
Program Monitors: Basic Block Counting
- Produces an exact execution profile by counting the number of times each basic block is executed (a hand-instrumented counting sketch follows this list).
- Basic block is sequence of processor instructions that has no branches into or out of the sequence.
- Additional instructions simply count the number of times the block is executed.
- After termination: values form a histogram.
- Show how often each block is executed.
- Complete instruction execution frequency counts can also be obtained from these counts.
- Key difference between basic block profiling and PC sampling: basic block profiling gives the exact execution frequencies of all instructions.
- Can add a substantial amount of runtime overhead! The average number of instructions per basic block varies between 3 and 20!
- Overhead: More instructions, different memory behavior!
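Python has no direct access to native basic blocks, so the following sketch illustrates the same idea at line granularity with `sys.settrace`: every executed line increments a counter, giving exact execution frequencies at the cost of considerable overhead.

```python
import collections
import sys

line_counts = collections.Counter()

def _tracer(frame, event, arg):
    if event == "line":
        line_counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return _tracer                      # keep tracing nested calls

def count_lines(func, *args, **kwargs):
    """Run func under the tracer; afterwards, line_counts forms a histogram."""
    sys.settrace(_tracer)
    try:
        return func(*args, **kwargs)
    finally:
        sys.settrace(None)
```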
Program Monitors: Indirect Strategies
- Indirect strategy has to be used if metric is not directly accessible.
- Try to deduce and derive the desired performance metric from related event which can be measured.
- Development of an appropriate indirect measurement strategy which minimizes the overhead: difficult, needs experience.
- Impossible to make any general statements about a measurement tool that makes use of an indirect strategy.
- Key: match the characteristics of the desired metric with the appropriate measurement strategies.
Program Monitors: Tracing
- Not only simply recording the fact that the event has happened
- Stores some portion of the system state
- Instead of keeping just the number of page faults, a tracing record strategy may store the addresses that caused the page fault.
- Requires significantly more storage
- Time required to save the state can significantly alter program behavior being measured
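A minimal sketch of a tracing record that stores part of the system state alongside the time stamp; the field names are illustrative.

```python
import time
from dataclasses import dataclass

@dataclass
class TraceRecord:
    timestamp: float
    event: str       # e.g. "page_fault", "msg_send"
    detail: object   # captured state, e.g. the faulting address or message size

trace: list[TraceRecord] = []

def record(event: str, detail: object) -> None:
    # writing the record itself costs time and may perturb the measured program
    trace.append(TraceRecord(time.perf_counter(), event, detail))
```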
Program Monitors: Tracing
[Figure: up to ~10,000 application processes, each observed by its own monitor, produce trace data that feeds performance visualization; the challenge is to enable scalability]
Program Monitors: Tracing
- Profiling: provides summary information
- Profiling does not provide any information about the order in which the instructions were executed
- Trace:
- Dynamic list of the events generated by the program as it executes
- Time ordered list of:
- all of the instructions executed
- sequences of memory addresses accessed by a program
- sequences of disk blocks referenced by the file system
- sizes and destination of all messages sent over a network
Program Monitors: Tracing
- Several difficulties
- Execution-time slowdown
- Other program perturbations by the execution of the additional tracing information
- Volume of data
- Disk speed, organization of the whole process
- Advantages:
- Very detailed
- Summarized information can be computed for arbitrary time intervals
- Useful for both performance tuning and debugging
- Easy identification of synchronization issues
Instrumentation
- Source-code modification
- Software exceptions
- Emulation
- Microcode modification
- Library approach
- Compiler modification
Instrumentation: Source Code Modification
- Programmer may add additional tracing statements to the source code manually
- Additional program statements will be executed after compilation
- Programmer can determine which parts he wants to instrument
- Disadvantage:
- Manual approach
- Time consuming
- Error prone
- Programmers mostly believe that they have a clear understanding of the program execution and instrument only small code areas
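A minimal sketch of manual source-code instrumentation in Python: wrapping functions with a hand-written decorator corresponds to adding tracing statements by hand, and only the parts the programmer chooses to decorate are instrumented.

```python
import time

def traced(func):
    """Hand-inserted instrumentation: log enter/exit time stamps of a function."""
    def wrapper(*args, **kwargs):
        print(f"ENTER {func.__name__} at {time.perf_counter():.6f}")
        try:
            return func(*args, **kwargs)
        finally:
            print(f"EXIT  {func.__name__} at {time.perf_counter():.6f}")
    return wrapper

@traced                     # the programmer decides which functions to instrument
def compute(n: int) -> int:
    return sum(range(n))
```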
Instrumentation: Software Exceptions
- Some processors can raise a software exception (trap) just before the execution of each instruction.
- The exception routine can decode the instruction to determine its operands.
- Accurate, but:
  - Execution is slowed down by a factor of about 1000.
  - By far too detailed in most cases.
Instrumentation: Emulation
- Emulator is a program that makes the system on which it executes appear to the outside as if it were something completely different.
- Java Virtual Machine executes application programs written in the Java programming language by emulating the operation of a processor that implements Java byte-code instructions.
- Tracing then straightforward, but:
- slows down execution significantly
- Not clear how to implement selective tracing
Instrumentation: Library Approach
- Parallel programs most often use communication libraries
- These libraries can be instrumented easily
- Communication is two-sided in many cases
- Merging the results is quite a challenge
- Gives quite a good overview of the program behavior
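A generic sketch of library-level instrumentation in Python, assuming a hypothetical communication module whose calls we want to time: the original function is replaced by a wrapper that records a trace entry, similar in spirit to how tracing tools interpose on communication libraries.

```python
import functools
import time

call_log = []   # (function name, duration) pairs

def instrument(module, name: str) -> None:
    """Replace module.<name> with a wrapper that times every call."""
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return original(*args, **kwargs)
        finally:
            call_log.append((name, time.perf_counter() - t0))

    setattr(module, name, wrapper)

# illustrative use (comm_lib and send are placeholders for a real library call):
# instrument(comm_lib, "send")
```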
Instrumentation: Compiler Modification
- Modify the executable code produced by the compiler
- Similar to basic block profiling
- Details about the content of the basic block can be obtained from the compiler
- Two versions:
- Compilation option
- Post-compilation software tool
Tools and Their Formats
Several parallel trace formats exist
- Different trace formats for different performance systems
- VTF (Vampir), EPILOG (Kojak), SLOG2 (JumpShot-4), TAU
- All public domain
- No real common format or one with special emphasis on scalability
Community has no portable scalable tracing system
- How to support open source and cross-platform tracing tools?
- Mainly concerned with robust analysis and visualization
- Target an open scalable trace format and get community support
Summary
- Monitor terminology and classification
- Trigger mechanisms
- Interval timers
- Program execution monitors
- Instrumentation
- Tools and their formats
Optimising Level Generators for General Video Game AI
Olve Drageset, Mark H.M. Winands
Department of Data Science and Knowledge Engineering
Maastricht University
Maastricht, NL
o.drageset@student.maastrichtuniversity.nl, m.winands@maastrichtuniversity.nl
Raluca D. Gaina, Diego Perez-Liebana
Game AI Research Group
Queen Mary University of London
London, UK
{r.d.gaina, diego.perez}@qmul.ac.uk
Abstract—Procedural Content Generation is an active area of research, with more interest being given recently to methods able to produce interesting content in a general context (without task-specific knowledge). To this extent, we focus on procedural level generators within the General Video Game AI framework (GVGAI). This paper proposes several topics of interest. First, a comparison baseline for GVGAI level generators, which is more flexible and robust than the existing alternatives. Second, a composite fitness evaluation function for levels based on AI play-testing. Third, a new parameterized generator, and a Meta Generator for performing parameter search on such generators are introduced. We compare the Meta Generator against random and constructive generator baselines, using the new fitness function, on 3 GVGAI games: Butterflies, Freeway and The Snowman. The Meta Generator is suggested to perform on par with or better than the baselines, depending on the game. Encouraged by these results, the Meta Generator will be submitted to the 2019 GVGAI Level Generation competition.
Index Terms—GVGAI, level generation, genetic algorithm
I. INTRODUCTION
Procedural Content Generation (PCG), and especially procedural generation of video game levels, has been popular for decades. While its traditions stretch all the way back to the ASCII dungeons of Rogue, PCG has seen a new dawn in combination with different techniques from Artificial Intelligence [1]. In the last years, research has focused on methods that generate content in a general way, in an attempt to reduce the amount of domain knowledge used. The aim of this research is to focus more on the generation algorithms rather than in specific heuristics. A clear example of this is the General Video Game AI (GVGAI) framework. GVGAI is a benchmark that, among other challenges, proposes the investigation of general methods for procedural level generation. Within this context, the level generation track [2] prompts the generator to generate a level for previously unseen games.
The GVGAI framework provides several sample generators for levels, including genetic and constructor generators. A common practice for participants is to tune the parameters, architecture or fitness functions of these sample generators. This paper proposes a new generator that aims at facilitating testing and boosting novel generator architectures by using AI-assisted play testing.
In particular, this paper makes four contributions. Firstly, it proposes a fitness function for evaluating the quality of generated levels, that uses an array of weighted factors. Secondly, it proposes a method for boosting any fast and resource light level generator algorithm: Using this or any other fitness function for selecting the best out of many generated results. Building on these two, it proposes a Meta Generator that optimises the parameters of fast resource light generators, using a fitness function for levels generated to estimate the quality of the generator parameters. Lastly, we propose a fast parametrisable level generator for the Meta Generator to optimise, that builds on the principles of an existing benchmark generator from the GVGAI framework.
The structure of this paper is as follows: Section II gives a brief introduction of the GVGAI framework and the level generation competition track. Section III provides an overview of the related work in this area. Then, Section IV describes the proposed level generators employed in the experiments, which results are discussed in Section V. Finally, conclusions and opportunities for further work are detailed in Section VI.
II. THE GVGAI FRAMEWORK

Games in GVGAI are described using the Video Game Description Language (VGDL), which allows games to be easily created around sprites with certain properties and behaviours and their interactions. These elements are described in VGDL using two files (game and level definition). The game description file is composed of four different sets. The Sprite Set defines the properties for the different entities in the game. The Interaction Set describes the effects produced by sprite collisions, such as destroying sprites or causing score changes. The Termination Set defines the conditions that lead to an end-game state, determining also if the game is won or lost for the player(s). Finally, the Level Mapping specifies the connections between sprites and ascii characters used in the level definition file.
Levels are described in their own ascii files. Each character in a file is mapped to one or more sprites as indicated in the Level Mapping. Figure 2 shows examples of VGDL levels representing their initial states. Figure 1 shows the 3 different GVGAI games used in this study. Their variety also shows the complexity and expressiveness that is achievable with VGDL:
- **The Snowman**: This is a deterministic game in which the player must push different parts of a snowman (body, trunk and head) into a platform, in the natural body to head order. Parts of the level may be locked that can only be opened if the player collects a key. Points are awarded for correctly placing each part of the snowman.
- **Butterflies**: This is a stochastic game in which the player must capture as many butterflies as possible before all cocoons in the level are opened, in which case the game is lost (cocoons open on contact with butterflies to create new butterflies). When all butterflies are captured, the game is over and won. All butterflies move at random. Points are awarded for each butterfly captured.
- **Freeway**: A version of the original with the same name, this stochastic game puts the player in control of an agent that must cross consecutive roads with incoming traffic. Every time the player gets hit by a vehicle, they lose 1 life (out of 5 available). The player wins if they are able to reach the randomly positioned goal at the other end of the crossings. In contrast to the others, this game has no score, only a victory condition.
B. Game-Playing Agents
The framework exposes an API for planning agents, which includes access to a forward model (FM). This forward model can be used to simulate possible future states of the game when supplied with an action to execute from any state. In the competition setting, each agent has a certain established time to return an action before being disqualified.
GVGAI includes a series of sample controllers to help practitioners in the creation of agents for the benchmarks. Among these controllers, one can find simple ones, like RANDOM (which executes random actions at each time step), DO NOTHING, where action NIL (or no-op) is applied at each step, and One Step Lookahead (OSLA). The latter sample controller explores every game state reachable from the current state using the FM, evaluating it with a heuristic that promotes proximity to certain sprites. In this study, DO NOTHING and OSLA are 2 of the 3 agents used to evaluate generated levels.
The third controller is YOLOBOT, which is not provided with the framework. Instead, YOLOBOT was a (multiple) competition winner submission developed by Joppen et al. [7]. This approach is a combination of two different methods: a heuristic-guided Best First Search used in deterministic games, and a Monte Carlo Tree Search (MCTS; [8]) used for stochastic environments. Used in conjunction with informed priors and rollouts, backtracking and pruning, this agent was able to win three editions of the Single-Player Planning GVGAI competition. The reader is referred to [7] for details.
C. GVGAI Level Generation
The Level Generation track was introduced in 2016 with the objective of proposing a challenge for automatic generation of levels for any game that is given [2]. Participants submit generators that must produce a level for games (unknown a priori) in no more than 5 hours of CPU computation. During this time, the generator can make use of planning agents to play-test potential levels to be returned.
Generators have access to the following game information:
- Avatar sprites, controlled by the player.
- Solid sprites, static in the game.
- Harmful sprites, which kill the player (or can create sprites that would kill the player).
- Collectible sprites, which can be picked up by the player.
- Other sprites that do not fall in the previous categories.
Additionally, the generator is provided with access to the level mapping, the interaction and termination sets. In return, the generator must provide a 2-dimensional array of characters that forms the level, in the same format as those shown in Figure 2.
Fig. 2: VGDL levels: **The Snowman** (left), **Butterflies** (right).
The GVGAI Level Generation track was initially proposed in tandem with three distinct level generators [2]:
- **RANDOM**: This generator picks a level size by looking at how many sprite types exist. After surrounding the board in a solid frame, it places at least one of each sprite type on the board. It makes sure to keep around 80% of the space open. If there is additional space left over after these initial constraints are met, the remaining space is filled randomly from the set of sprites. This approach has the advantage that, without looking into the semantics of the sprites, all sprites that are vital for completing the level (goal sprites) are likely to be included in the level. It’s also likely that the avatar has space to move.
- **CONSTRUCTIVE**: This generator uses the same level size selection, and includes a frame of static sprites. It also builds connected walls coming out of the frame, while making sure that all open space on the board is connected. This makes for more interesting and deliberate-looking levels for games that rely on labyrinth- and room-like structure. The constructive generator places enemies at a distance from the avatar, to make sure the player doesn’t die immediately after spawning. It does not, however, guarantee that all sprites will be used at least once.
- **GENETIC**: This generator initializes a population of levels using the constructive or random generator, before performing evolution using a fitness function. It keeps one population for infeasible levels, and one for feasible levels, that do not mix. The fitness evaluation function is based on two factors, which will be compared to our own proposed fitness function in the methods section.
III. RELATED WORK
Previous entries to the GVGAI Level Generation competition [2] have not been very effective at creating interesting or even playable levels [6]. However, some approaches do succeed in producing quality results. Neufeld et al. [9] use a ($\mu + \lambda$) evolutionary algorithm to evolve the rules used by an Answer Set Programming (ASP) level generator in GVGAI. These levels are then evaluated using a simulation-based method: the fitness of each level is the difference of average scores obtained by vanilla Monte Carlo Tree Search and a random player. Their results showcase the benefits of the concept, although we identify the computational overhead of translating VGDL games into ASP rules as a drawback, especially in the context of the GVGAI competition. This paper uses a similar approach, compatible with the GVGAI competition and applied to a parameterized random generator.
Four out of six submissions to the GVGAI Level Generation competition are based on evolutionary algorithms, using AI simulations for evaluating generated levels [6], although they have not yet been successful in winning the competition. Given previous promising approaches [9], we focus on improving simulation-based evolutionary methods for this task.
A different approach used in the Level Generation competition is using design patterns within various techniques. Sharif et al. [10] analyse the games in GVGAI to identify interesting design patterns, such as solid sprites often forming rooms (almost fully enclosing a section of a level) or collectible sprites often being placed together. Beaupre et al. [11] later use such design pattern analysis to develop general generators which produce levels inspired by the human designs. They use the sample constructive generator provided with the GVGAI framework to generate an initial population of levels, which are evaluated based on the patterns they contain. This population is then evolved to match the pattern weights extracted from the existing GVGAI corpus of games. A final human evaluation of the resultant level shows preference towards the pattern-based levels. However, there is no indication as to the level quality in terms of playability. We choose to focus on simulation-based evaluations to take into consideration the impact on player game-play, but we consider design patterns additions as a path for further extending the current work.
Several authors explore the use of Relative Algorithm Performance Profiles (RAPP) [12] to evaluate generated games or levels: the difference in performance between proficient and less skilled players is often seen as an indicator of a game’s skill depth, with higher skill depth being a desired quality of generated game content. Nielsen et al. [12] compare the relative performance of seven different agents on a set of VGDL games and their results support the correlation between higher-quality games and a larger difference between good and less-skilled players. More recently, Liu et al. [13] use a similar measure to evolve game parameters instead, with similarly good results. Inspired by positive results, we use this same notion in the fitness function and measure the difference in win rate and score between YOLOBOT [7], a high-performing bot in the GVGAI planning competition, and two simple agents, One-Step Look Ahead (OSLA) and Do NOTHING. See Section II-B for details of agents used in this work.
One of the aspects we consider in this paper is the importance of using the right parameters for the generator. Manuel et al. [14] evolve level generators for Super Mario Bros interactively, with human supervision. They use both a measure of the playability of generated levels (using simulation-based evaluation) as well as the human preference input in order to evolve better levels. We take a similar approach, while excluding the human factor in order for our method to be compatible with the GVGAI competition and entirely autonomous, while also testing our Meta Generator’s performance on several games. While most generator optimizers use fairly simple evolutionary algorithms, Lucas et al. [15] propose a model-based approach for tuning game parameters. Both GVGAI games and agents can be stochastic, which introduces considerable noise in the evaluation. Additionally, simulation-based evaluations are expensive, as (potentially multiple) games have to be run in order to test the quality of the generated solutions. The NTuple Bandit Evolutionary Algorithm (NTBEA) is shown to perform well in noisy environments as well as being sample efficient [16], which could lead to better results within the short timespan allowed in the GVGAI competition. NTBEA has further been shown to produce good results when optimizing player experience (represented by score curves) in
GVGAI games [17], thus we consider this as the next step for improving our method further.
IV. LEVEL GENERATORS
This section describes the methods used in our experiments, and details our contributions: The POPULATION GENERATOR, the parameterized PERCENTAGE-WISE GENERATOR (PWG), the META GENERATOR, and the fitness evaluation function.
A. Population Generator Baselines
One of the problems when building level generators for the GVGAI Level Generation competition is that after the generator is built and the fitness evaluation function is designed, there is no good baseline measure to compare the generator’s output quality with. Out of the baseline generators given by the competition organizers, while the GA relies on a fitness function and validates the playability of all levels before returning them, the random and constructive generators produce a single level without playtesting or validating it. From this point on we will refer to the latter approach as one-shot generation. Our hypothesis is that, even without performing any genetic operations, testing and validating levels is a large part of the performance difference between the genetic algorithm and the one-shot generators. This motivated us to build a framework around continuous one-shot generators (called the Population Generator), as well as a fitness function for noisy evaluation of generated levels. This turns a one-shot generator into a continuously improving generator that can keep generating and testing levels for any amount of time. When time runs out, it returns the best level in the population, according to the fitness function. Fitness values are normalized across the entire population of levels generated for a game.
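A minimal sketch of the Population Generator wrapper described above, assuming placeholder callables `one_shot_generate` and `fitness`; it simply keeps generating and evaluating levels until the time budget expires and returns the best one seen.

```python
import time

def population_generate(one_shot_generate, fitness, budget_seconds: float):
    """Turn a one-shot level generator into a continuously improving one."""
    best_level, best_fitness = None, float("-inf")
    deadline = time.time() + budget_seconds
    while time.time() < deadline:
        level = one_shot_generate()   # e.g. the random or constructive generator
        score = fitness(level)        # noisy, simulation-based evaluation
        if score > best_fitness:
            best_level, best_fitness = level, score
    return best_level
```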
B. Percentage-Wise Generator
The Percentage-Wise Generator (PWG) is the base generator used in our experiments. The PWG was designed with a desire to minimize the assumptions we make about the game we are generating levels for, and to increase generality by minimizing the reliance on human-injected bias. The sample constructive generator in the GVGAI Level Generation competition is a counter example. Because of these considerations, the PWG is based loosely on the random level generator. The biggest difference is that it is parameterised, taking into account the following when generating levels:
- Percentage of each sprite that should be used.
- Whether the \((x, y)\) coordinates of each sprite should be sampled from a Gaussian distribution.
- Mean of the distribution.
- Standard deviation of the distribution.
- Size of the level and whether a border of static wall should be placed around the level.
Details of the parameter search space are depicted in Table I (float parameters are continuous, bounded between 0 and 1). The Mean and St.dev parameters are scaled by the Width and Height parameters. The PWG only uses information about three types of sprites: the avatar, walls, and open space. In terms of injected bias, the PWG knows that exactly one avatar must exist, that a frame of static walls is an option (enabling the frame is a boolean parameter), and that having more than half of the area of the level being covered by open space is a good place to start the optimisation process. It is not, however, told anything about the purpose or function of any of these sprites or their interactions. The GVGAI Level Generation competition explicitly gives access to this knowledge, and several other generators use it. We chose to avoid using it in order to make our method as general and requiring as little information as possible.
C. Meta Generator
The Meta Generator relies on optimising the parameters of other level generators (in our case, the PWG). The fitness evaluation of the sub-generators involves generating and evaluating levels using the fitness function described in Section IV-D. By addressing the generator optimisation task, the level generation task is implicitly addressed as well.
1) Motivation: The level search space for GVGAI games is large, and current methods for simulation-based level fitness evaluation are relatively expensive. Thus we hypothesize that genetic level generators rely on having a good population initialization in order to find a “good” (according to a fitness function) solution within the allocated time. It would therefore be optimal to have a much faster generator that delivers many good levels, before using a genetic algorithm to improve upon them to find one high-quality level. Having a generator that can quickly and consistently make levels that are even close to being good for any never-seen-before games is difficult.
The idea behind the Meta Generator is to optimise the parameters of a fast generator. The best levels generated during the search can be used as the initialization set for genetic search in the level space, or the best level found so far can be returned directly. Additionally, this system can be used to supply a player with a continuous stream of levels, by running the optimized Meta Generator in the background during play.
2) Generator Population and Level Population: When optimising the parameters for the PWG, we maintain a population of generator parameters. This population keeps track of which generators produced what levels, so that we can calculate the fitness of each generator as the average fitness of the levels it has produced. A combined population of all the levels generated and evaluated so far by the Meta Generator is also maintained. Just like for the Population Generators, the
Table I: Parameter search space of the Percentage-Wise Generator.

| Parameter | Type | Search space |
| --- | --- | --- |
| Sprite usage (n) | Float | [0,1] |
| x-Gaussian sampling | Boolean | true, false |
| y-Gaussian sampling | Boolean | true, false |
| x-Distribution mean | Float | [0,1] |
| y-Distribution mean | Float | [0,1] |
| x-Distribution st.dev | Float | [0,1] |
| y-Distribution st.dev | Float | [0,1] |
| Level border | Boolean | true, false |
| Width | Integer | [4,18] |
| Height | Integer | [4,18] |
information from all the levels generated is used to calculate the fitness of each individual level, normalized across the population of levels generated for a game.
3) Parameter Optimisation Algorithm: The Meta Generator uses mutation and crossover to search the parameter space of the PWG. It uses Upper Confidence Bounds (UCB; [18]) to select in which direction to guide its search. The UCB formula is depicted in Equation 1, split into 2 terms for exploitation (first) and exploration (second). \( V_i \) is the value estimation of the generator \( i \). This value is the average fitness of the levels generated so far, and the assumption is that it translates to the generator being a good candidate for crossover and mutation. \( C \) is the exploration parameter, which is set to \( \sqrt{2} \). \( N \) is how many levels have been generated in total, and \( n_i \) is how many levels have been generated by the generator.
\[
UCB = V_i + C \sqrt{\frac{\ln N}{n_i}}
\]
The UCB value balances exploration of areas that have not been explored and the exploitation of areas that are promising. As outlined in Algorithm 1, the Meta Generator focuses its search on the generator with the highest UCB value, using a \((1,\lambda)\) roulette wheel (fitness proportionate) selection strategy. This means the generator with the highest UCB value is picked, and \( \lambda \) other generators are selected and crossed with the top generator. Each time a generator is selected for reproduction, one more level is generated using its parameters before continuing. This encourages the noisy fitness evaluation of the more promising generator parameters to become more accurate, which means we avoid repeatedly using parameters for crossover that are objectively unfit, but inaccurately evaluated. In the case where all results of crossovers and mutations are worse than the previous population, forcing the existing generators to keep producing levels further ensures that we keep exploiting our current best generators.
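A small sketch of the UCB-based selection in Equation 1, assuming each generator is summarised by its average level fitness and the number of levels it has produced; the function names are illustrative.

```python
import math

def ucb(avg_fitness: float, n_i: int, n_total: int, c: float = math.sqrt(2)) -> float:
    """UCB value of a generator: exploitation term plus exploration bonus."""
    if n_i == 0:
        return float("inf")          # unexplored generators are tried first
    return avg_fitness + c * math.sqrt(math.log(n_total) / n_i)

def select_generator(stats):
    """stats: list of (avg_fitness, levels_generated); returns index of max UCB."""
    n_total = sum(n for _, n in stats)
    scores = [ucb(v, n, n_total) for v, n in stats]
    return scores.index(max(scores))
```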
D. Noisy Level Fitness Evaluation
The fitness evaluation function for levels is a compound measure of 8 factors, that are calculated from the data of several General Video Game AI agents playing the level (see Section II-B for details of agents used).
- \( f_1 \) Win Factor: This factor measures how much better YOLOBOT performs than OSLA and DO NOTHING, in terms of win percentage (every game in GVGAI can be won or lost).
- \( f_2 \) Score Factor: This factor measures how much better YOLOBOT performs than OSLA and DO NOTHING in terms of score. Looking at the difference between the performance of agents with various skill levels as an indicator of game skill depth has been tried before in adversarial games [13] and single player games [12].
- \( f_3 \) Danger Factor: This factor measures how close the AI is to death on average throughout the game. The danger score at a given frame is calculated by doing \( m \) random roll-outs of length \( n \), and counting the fraction of them that end up in death.
- \( f_7 \) Length Factor: This factor rewards levels whose solution takes longer. The assumption is that a longer solution is less likely to feel trivial to the player.
- \( f_8 \) Solvability Factor: It indicates if any of the three AI agents could solve the level in any of their attempts.
Algorithm 1 Meta Generator pseudocode. It optimizes level generator parameters and returns the level with the highest fitness.
\[
\begin{align*}
\text{Input:} & \quad \text{Parameterised one-shot generator} \ g \\
\text{Input:} & \quad \text{Fitness function} \ f \\
\text{Input:} & \quad \text{Time budget} \ t \\
\text{Output:} & \quad \text{A GVGAI Level.}
\end{align*}
\]
\[
\begin{align*}
1: & \quad gPop \leftarrow \text{initializePopulation}(g) \\
2: & \quad lPop \leftarrow \text{Empty} \\
3: & \quad \text{while} \ \text{time} < t \ \text{do} \\
4: & \quad \quad \text{generator} \leftarrow gPop.\text{maxUCB}() \\
5: & \quad \quad \text{level} \leftarrow \text{generator.generateLevel}() \\
6: & \quad \quad \text{fitness} \leftarrow f(\text{level}) \\
7: & \quad \quad lPop.\text{add}(\text{level, fitness}) \\
8: & \quad \quad gPop.\text{update}(g, \text{level}) \\
9: & \quad \quad \text{parents} \leftarrow \text{rouletteSelect}(\lambda, gPop) \\
10: & \quad \quad \text{offspring} \leftarrow \text{crossover}(g, \text{parents}) \\
11: & \quad \quad \text{offspring.mutate}() \\
12: & \quad \quad \text{for child in offspring do} \\
13: & \quad \quad \quad \text{levels} \leftarrow \text{child.generate}(n) \\
14: & \quad \quad \quad \text{fitnesses} \leftarrow f(\text{levels}) \\
15: & \quad \quad \quad lPop.\text{add}(\text{levels, fitnesses}) \\
16: & \quad \quad \text{gPop.} \text{add}(\text{child, levels}) \\
17: & \quad \quad \text{end for} \\
18: & \quad \text{end while} \\
19: & \quad \text{level} \leftarrow lPop.\text{best}() \\
20: & \quad \text{return level}
\end{align*}
\]
All factors (except \( f_8 \)) are measured over a number of simulations, and because the agents and the games are stochastic, results may vary. The total fitness score of a level is calculated according to Equation 2 (solvability is excluded), where \( w_n \) is the weight of the \( n \)-th factor. Each individual factor is normalized in the range \([0,1]\), according to all other observed values for that factor for the same game. The weights should preferably be adjusted in accordance with human preference, but, for the lack of such data, they were set, from \( w_1 \) to \( w_7 \), as follows: \((3,2,2,1,1,1,1)\).
\[
f_L = \frac{\sum_{n=1}^{7} f_n \cdot w_n}{\sum_{n=1}^{7} w_n}
\]
This puts the fitness in the range \([0,1]\). If the level is unsolvable, the fitness becomes \( f_L - 1 \), putting it in the range \([-1,0]\). This is done to reflect that any solvable level is better than an unsolvable level: the player should be able to win.
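A direct sketch of this aggregation (Equation 2 plus the shift for unsolvable levels), assuming the seven factor values have already been normalised to [0,1]; function and argument names are illustrative.

```python
def level_fitness(factors, weights=(3, 2, 2, 1, 1, 1, 1), solvable=True) -> float:
    """Weighted combination of normalised factors; unsolvable levels land in [-1,0]."""
    assert len(factors) == len(weights)
    f_l = sum(f * w for f, w in zip(factors, weights)) / sum(weights)
    return f_l if solvable else f_l - 1.0

# illustrative use: a solvable level with mid-range factor values
print(level_fitness([0.8, 0.5, 0.4, 0.6, 0.5, 0.5, 0.7]))
```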
V. EXPERIMENTS
The goal of the experiments is to compare the performance of the random and constructive generators against the Meta Generator proposed in this paper, when provided with a 5 hour budget as in the GVGAI Level Generation competition. Each of these 3 generators was set to generate levels for the 3 games, and each run is repeated 5 times. The experiments were performed on IBM System X iDataPlex dx360 M3 Server nodes, where each had one Intel Xeon E5645 processor core allocated to it, and a maximum of 2GB of RAM of JVM Heap Memory.
A. Results
Figure 3 shows the average fitness (over 5 runs) of the best level generated so far (the one that would be returned at that point), by each of the generators. The difference between the performance of the three generators depends on the game.
1) Butterflies: Butterflies is a simple game in which a random spread of sprites can lead to a level that is challenging and plays well. It therefore makes sense that the Meta Generator only makes marginal gains on the baselines. When we analyse the levels generated by each of the generators, it becomes apparent that the small difference in fitness is largely explained by the fact that the Meta Generator is able to build levels of much larger size. The random and constructive generators restrict the size of their levels because of the small number of sprite types. A larger level makes for longer games, and due to the Length Factor \( (f_7) \) this adds to the fitness on solvable levels. We see the larger levels produced by the Meta Generator as more enjoyable for humans than the very compact ones produced by the baselines.
2) Freeway: Freeway is a more complicated game. Besides being stochastic, it has many more sprite types. In addition, traditional Freeway levels are all structured in a specific way: the car spawners are located in key places in order for the game to be recognizable and make sense to humans. While random scattering of sprites works well for Butterflies, it does not for Freeway. The constructive generator builds walls in a specific way that is good for dungeon-style maps, but this is not effective in Freeway, as we can see in Figure 3 b). From the lower middle fitness distribution in Figure 5 we can see that the constructive generator produces a considerable number of low-fitness (but playable) levels, as well as some levels on par with the other generators. Interestingly, when inspecting the final generation of Freeway levels, the constructive generator is the only one that managed to construct a level that is somewhat sensible (more on this later). The Meta Generator and its PWGs have no way of biasing the levels produced towards a patterned structure or a particular spacing of the sprites, which is what we imagine is most beneficial for Freeway.
3) The Snowman: The Snowman is the only puzzle game out of the three. It is also the game where the largest difference in fitness is observed. The Meta Generator outperforms the constructive and random generators, which do not generate any levels that are solvable by YOLOBOT. While the Meta Generator starts out poorly as well, it explores different generator parameters until it finds a playable level, and then uses the parameters of the successful PWG as a basis for further exploration. As can be observed in the right-most column of histograms in Figure 5, this leads to not only one or two playable levels, but a sizable population of playable levels. Upon further inspection, it turns out that the levels generated by the constructive and random generators are actually fairly simple to solve for humans, but they are consistently too large and too cluttered for YOLOBOT to solve. The Meta Generator is the only one out of the three that manages to create smaller, relatively uncluttered levels that YOLOBOT can solve, because it is free to change the size of the level and the cover percentages of all the sprite types.
4) Summing up: The constructive generator brings a stronger bias into its levels, and that seems to disadvantage it slightly going up against the less biased random and Meta generators. The random generator also has a strong bias in size and cover percentages that disadvantages it in a situation like this, where there is an opportunity to sample a large amount of different combinations and test their fruitfulness. The Meta Generator achieves a similar performance to random, although outperforming both baselines in The Snowman.
B. Generated Levels
Observe in Figure 4 examples of levels returned at the end of 5 hours of generation, by each of the generators. For Butterflies, we can see how the size restrictions imposed by the random and constructive generators limit them from discovering the benefit of a larger level. For Freeway, we can see that the basic idea of the game is completely deconstructed, as the floor tiles where the goal and player can spawn (light grey) are spread out completely, and are dangerously placed. In Freeway levels, YOLOBOT is superhuman at avoiding fast-moving danger, so implementing a more human-like behaviour with longer reaction time similar to [2] might help bias the levels to be less hectic. The constructive generator (right) surprised us by creating one level where 3 out of 4 spawn points are safe from traffic, but this is purely by chance. While the levels generated by the Meta Generator on the two other games differ visually from those of the Population Generators, the similarity between the Random and Meta on Freeway is striking. For The Snowman, we observed that levels returned by the random and constructive generators were generally cluttered and large. The size is determined by the sprite set, and the percentage of the board that is to be filled with sprites is not variable. The Meta Generator tended to return sparser and smaller levels.
VI. CONCLUSIONS AND FUTURE WORK
In this paper we describe a fitness evaluation function based on several factors, which is used by 3 different generators to evaluate the quality of generated levels in the General Video Game AI framework (GVGAI): random, constructive and our proposed Meta Generator. The Meta Generator builds upon a parameterized version of the random generator and evolves its parameters in order to produce better levels. The random and constructive generators are tested in continuous runs over 5 hours (as per the GVGAI Level Generation competition rules) in 3 GVGAI games (Butterflies, Freeway and The Snowman), and are shown to perform similarly to or worse than the Meta Generator (using the same budget), depending on the game.
The constructive generator brings a strong bias into its levels, which seems to disadvantage it going up against the Random and Meta generators. The Meta Generator gains in fitness over both the Population Generators from its extra flexibility, and this flexibility seems to have a larger impact in games that are more difficult to produce levels for.
The next experiment lined up is to compare the performance of the Meta Generator against a strong genetic level generator, such as the original GVGAI genetic baseline [2], using comparable one-shot generators for population initialization. This Meta Generator will be entered into the 2019 GVGAI Level Generation competition, where it can be tested rigorously against other generators. A detailed comparison with previous competition entries is also considered for future work.
The parameter optimisation performed by the Meta Generator could be further improved. A more powerful method such as the N-Tuple Bandit Evolutionary Algorithm (NTBEA) [15], [20] could be used, which has been shown to work well for online parameter tuning [21]. One line of work would be using a population of NTBEAs larger than the thread pool. After a thread completes a step on one NTBEA instance, the thread picks which Meta Generator instance to work on next by using UCB to select from those available.
The weighting of the factors in the current fitness function could also be adjusted so that the fitness score aligns better with human experience. Browne and Maire [22] focused on human experience in their approach, which contributed to the creation of the award-winning board game Yavalath. It can be applied similarly to the evaluation of video game levels, by extracting features of human play together with their explicit preference indications for a level.
Fig. 5: The fitness distributions of all generators on all games. Each row contains one generator, each column one game. From top to bottom: Meta, Random, Constructive. From left to right: Butterflies, Freeway, The Snowman.
ACKNOWLEDGMENT
This work was partially funded by the EPSRC CDT in Intelligent Games and Game Intelligence (IGGI) EP/L015846/1.
REFERENCES
Abstract
In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations.
1 Introduction
In this paper, we address the task of learning control policies for text-based strategy games. These games, predecessors to modern graphical ones, still enjoy a large following worldwide. They often involve complex worlds with rich interactions and elaborate textual descriptions of the underlying states (see Figure 1). Players read descriptions of the current world state and respond with natural language commands to take actions. Since the underlying state is not directly observable, the player has to understand the text in order to act, making it challenging for existing AI programs to play these games (DePristo and Zubek, 2001).
In designing an autonomous game player, we have considerable latitude when selecting an adequate state representation to use. The simplest method is to use a bag-of-words representation derived from the text description. However, this scheme disregards the ordering of words and the finer nuances of meaning that evolve from composing words into sentences and paragraphs. For instance, in State 2 in Figure 1 the agent has to understand that going east will lead it to the castle whereas moving south will take it to the standing archway. An alternative approach is to convert text descriptions to pre-specified representations using annotated training data, commonly used in language grounding tasks (Matuszek et al., 2013; Kushman et al., 2014).
State 1: The old bridge
You are standing very close to the bridge’s eastern foundation. If you go east you will be back on solid ground ... The bridge sways in the wind.
Command: Go east
State 2: Ruined gatehouse
The old gatehouse is near collapse. Part of its northern wall has already fallen down ... East of the gatehouse leads out to a small open area surrounded by the remains of the castle. There is also a standing archway offering passage to a path along the old southern inner wall.
Exits: Standing archway, castle corner, Bridge over the abyss
Figure 1: Sample gameplay from a Fantasy World. The player with the quest of finding a secret tomb, is currently located on an old bridge. She then chooses an action to go east that brings her to a ruined gatehouse (State 2).
In contrast, our goal is to learn useful representations in conjunction with control policies. We adopt a reinforcement learning framework and formulate game sequences as Markov Decision Processes. An agent playing the game aims to maximize rewards that it obtains from the game engine upon the occurrence of certain events. The agent learns a policy in the form of an action-value function $Q(s,a)$ which denotes the long-term merit of an action $a$ in state $s$.
The action-value function is parametrized using a deep recurrent neural network, trained using the game feedback. The network contains two modules. The first one converts textual descriptions into vector representations that act as proxies for states. This component is implemented using Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997). The second module of the network scores the actions given the vector representation computed by the first.
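A minimal PyTorch sketch of such a two-module network; the embedding size, hidden size, mean-pooling, and the single command-scoring head are illustrative simplifications rather than the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class LSTMDQN(nn.Module):
    """Representation module (LSTM over the text description) + action scorer."""
    def __init__(self, vocab_size: int, n_commands: int,
                 embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(hidden_dim, n_commands)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) word indices of the state description
        outputs, _ = self.lstm(self.embed(token_ids))
        state_repr = outputs.mean(dim=1)   # mean-pool LSTM outputs -> state vector
        return self.scorer(state_repr)     # Q-values, one per command
```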
We evaluate our model using two Multi-User Dungeon (MUD) games (Curtis, 1992; Amir and Doyle, 2002). The first game is designed to provide a controlled setup for the task, while the second is a publicly available one and contains human generated text descriptions with significant language variability. We compare our algorithm against baselines of a random player and models that use bag-of-words or bag-of-bigrams representations for a state. We demonstrate that our model LSTM-DQN significantly outperforms the baselines in terms of number of completed quests and accumulated rewards. For instance, on a fantasy MUD game, our model learns to complete 96% of the quests, while the bag-of-words model and a random baseline solve only 82% and 5% of the quests, respectively. Moreover, we show that the acquired representation can be reused across games, speeding up learning and leading to faster convergence of Q-values.
2 Related Work
Learning control policies from text is gaining increasing interest in the NLP community. Example applications include interpreting help documentation for software (Branavan et al., 2010), navigating with directions (Vogel and Jurafsky, 2010; Kollar et al., 2010; Artzi and Zettlemoyer, 2013; Matuszek et al., 2013; Andreas and Klein, 2015) and playing computer games (Eisenstein et al., 2009; Branavan et al., 2011a).
Games provide a rich domain for grounded language analysis. Prior work has assumed perfect knowledge of the underlying state of the game to learn policies. Gorniak and Roy (2005) developed a game character that can be controlled by spoken instructions adaptable to the game situation. The grounding of commands to actions is learned from a transcript manually annotated with actions and state attributes. Eisenstein et al. (2009) learn game rules by analyzing a collection of game-related documents and precompiled traces of the game. In contrast to the above work, our model combines text interpretation and strategy learning in a single framework. As a result, textual analysis is guided by the received control feedback, and the learned strategy directly builds on the text interpretation.
Our work closely relates to an automatic game player that utilizes text manuals to learn strategies for Civilization (Branavan et al., 2011a). Similar to our approach, text analysis and control strategies are learned jointly using feedback provided by the game simulation. In their setup, states are fully observable, and the model learns a strategy by combining state/action features and features extracted from text. However, in our application, the state representation is not provided, but has to be inferred from a textual description. Therefore, it is not sufficient to extract features from text to supplement a simulation-based player.
Another related line of work consists of automatic video game players that infer state representations directly from raw pixels (Koutnik et al., 2013; Mnih et al., 2015). For instance, Mnih et al. (2015) learn control strategies using convolutional neural networks, trained with a variant of Q-learning (Watkins and Dayan, 1992). While both approaches use deep reinforcement learning for training, our work has important differences. In order to handle the sequential nature of text, we use Long Short-Term Memory networks to automatically learn useful representations for arbitrary text descriptions. Additionally, we show that decomposing the network into a representation layer and an action selector is useful for transferring the learnt representations to new game scenarios.
3 Background
Game Representation We represent a game by the tuple $\langle H, A, T, R, \Psi \rangle$, where $H$ is the set of all possible game states, $A = \{(a,o)\}$ is the set of...
all commands (action-object pairs), $T(h' \mid h, a, o)$ is the stochastic transition function between states and $R(h, a, o)$ is the reward function. The game state $h \in H$ is hidden from the player, who only receives a varying textual description, produced by a stochastic function $Ψ : H \rightarrow S$. Specifically, the underlying state $h$ in the game engine keeps track of attributes such as the player’s location, her health points, time of day, etc. The function $Ψ$ (also part of the game framework) then converts this state into a textual description of the location the player is at or a message indicating low health. We do not assume access to either $H$ or $Ψ$ for our agent during both training and testing phases of our experiments. We denote the space of all possible text descriptions $s$ to be $S$. Rewards are generated using $R$ and are only given to the player upon completion of in-game quests.
**Q-Learning** Reinforcement Learning is a commonly used framework for learning control policies in game environments (Silver et al., 2007; Amato and Shani, 2010; Branavan et al., 2011b; Szita, 2012). The game environment can be formulated as a sequence of state transitions $(s, a, r, s')$ of a Markov Decision Process (MDP). The agent takes an action $a$ in state $s$ by consulting a state-action value function $Q(s, a)$, which is a measure of the action’s expected long-term reward. Q-Learning (Watkins and Dayan, 1992) is a model-free technique which is used to learn an optimal $Q(s, a)$ for the agent. Starting from a random Q-function, the agent continuously updates its Q-values by playing the game and obtaining rewards. The iterative updates are derived from the Bellman equation (Sutton and Barto, 1998):
$$Q_{t+1}(s, a) = E[r + \gamma \max_{a'} Q_t(s', a') \mid s, a]$$ \hspace{1cm} (1)
where $\gamma$ is a discount factor for future rewards and the expectation is over all game transitions that involved the agent taking action $a$ in state $s$.
Using these evolving Q-values, the agent chooses the action with the highest $Q(s, a)$ to maximize its expected future rewards. In practice, the trade-off between exploration and exploitation can be achieved following an $\epsilon$-greedy policy (Sutton and Barto, 1998), where the agent performs a random action with probability $\epsilon$.
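To make the Bellman update and the $\epsilon$-greedy policy concrete, here is a minimal tabular Q-learning sketch. The `env.reset()`/`env.step()` interface, the action list, and the hyperparameter values are illustrative assumptions, not part of the games discussed in this paper.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.

    `env.reset()` is assumed to return a (hashable) state and
    `env.step(a)` to return (next_state, reward, done)."""
    Q = defaultdict(float)                       # Q[(state, action)] -> value
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:        # explore with probability epsilon
                a = random.choice(actions)
            else:                                # exploit: action with highest Q-value
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # Sample-based version of the Bellman update in Equation 1
            target = r + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```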
**Deep Q-Network** In large games, it is often impractical to maintain the Q-value for all possible state-action pairs. One solution to this problem is to approximate $Q(s, a)$ using a parametrized function $Q(s, a; \theta)$, which can generalize over states and actions by considering higher-level attributes (Sutton and Barto, 1998; Branavan et al., 2011a). However, creating a good parametrization requires knowledge of the state and action spaces. One way to bypass this feature engineering is to use a Deep Q-Network (DQN) (Mnih et al., 2015). The DQN approximates the Q-value function with a deep neural network to predict $Q(s, a)$ for all possible actions $a$ simultaneously given the current state $s$. The non-linear function layers of the DQN also enable it to learn better value functions than linear approximators.
**4 Learning Representations and Control Policies**
In this section, we describe our model (LSTM-DQN) and its use in learning good Q-value approximations for games with stochastic textual descriptions. We divide our model into two parts. The first module is a representation generator that converts the textual description of the current state into a vector. This vector is then input into the second module, which is an action scorer. Figure 2 shows the overall architecture of our model. We learn the parameters of both the representation generator and the action scorer jointly, using the in-game reward feedback.
Figure 2: Architecture of LSTM-DQN. The representation generator ($\phi_R$) converts the raw text displayed to the player into a vector, which is then input into the action scorer ($\phi_A$) to produce scores for all possible actions and argument objects.

**Representation Generator ($\phi_R$)** The representation generator reads raw text displayed to the agent and converts it to a vector representation \( v_s \). A bag-of-words (BOW) representation is not sufficient to capture higher-order structures of sentences and paragraphs. The need for a better semantic representation of the text is evident from the average performance of this representation in playing MUD-games (as we show in Section 6).
In order to assimilate better representations, we utilize a Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) as a representation generator. LSTMs are recurrent neural networks with the ability to connect and recognize long-range patterns between words in text. They are more robust than BOW to small variations in word usage and are able to capture underlying semantics of sentences to some extent. In recent work, LSTMs have been used successfully in NLP tasks such as machine translation (Sutskever et al., 2014) and sentiment analysis (Tai et al., 2015) to compose vector representations of sentences from word-level embeddings (Mikolov et al., 2013; Pennington et al., 2014). In our setup, the LSTM network takes in word embeddings \( w_k \) from the words in a description \( s \) and produces output vectors \( x_k \) at each step.
To get the final state representation \( v_s \), we add a mean pooling layer which computes the element-wise mean over the output vectors \( x_k \):
$$v_s = \frac{1}{n} \sum_{k=1}^{n} x_k$$ \hspace{1cm} (2)
**Action Scorer (\( \phi_A \))** The action scorer module produces scores for the set of possible actions given the current state representation. We use a multi-layered neural network for this purpose (see Figure 2). The input to this module is the vector from the representation generator, \( v_s = \phi_R(s) \) and the outputs are scores for actions \( a \in A \). Scores for all actions are predicted simultaneously, which is computationally more efficient than scoring each state-action pair separately. Thus, by combining the representation generator and action scorer, we can obtain the approximation for the Q-function as \( Q(s, a) \approx \phi_A(\phi_R(s))[a] \).
An additional complexity in playing MUD-games is that the actions taken by the player are multi-word natural language commands such as eat apple or go east. Due to computational constraints, in this work we limit ourselves to consider commands to consist of one action (e.g. eat) and one argument object (e.g. apple). This assumption holds for the majority of the commands in our worlds, with the exception of one class of commands that require two arguments (e.g. move red-root right, move blue-root up). We consider all possible actions and objects available in the game and predict both for each state using the same network (Figure 2). We consider the Q-value of the entire command \((a, o)\) to be the average of the Q-values of the action \( a \) and the object \( o \). For the rest of this section, we only show equations for \( Q(s, a) \) but similar ones hold for \( Q(s, o) \).
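A compact sketch of this two-module network, written in PyTorch for concreteness. The layer sizes, the ReLU hidden layer, and all names are illustrative choices rather than the paper's exact configuration; the separate action and object heads and the mean-pooled LSTM state follow the description above.

```python
import torch
import torch.nn as nn

class LSTMDQN(nn.Module):
    """Representation generator (phi_R) + action scorer (phi_A) sketch."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_actions, n_objects):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hidden = nn.Linear(hidden_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, n_actions)
        self.object_head = nn.Linear(hidden_dim, n_objects)

    def forward(self, token_ids):
        # phi_R: word embeddings -> LSTM outputs -> element-wise mean pooling (Equation 2)
        x, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, hidden_dim)
        v_s = x.mean(dim=1)                       # state representation v_s
        # phi_A: score every action and every argument object from v_s
        h = torch.relu(self.hidden(v_s))
        return self.action_head(h), self.object_head(h)

# The Q-value of a full command (a, o) is the average of the two scores:
# q_cmd = (q_actions[:, a] + q_objects[:, o]) / 2
```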
**Parameter Learning** We learn the parameters \( \theta_R \) of the representation generator and \( \theta_A \) of the action scorer using stochastic gradient descent with RMSprop (Tieleman and Hinton, 2012). The complete training procedure is shown in Algorithm 1. In each iteration \( i \), we update the parameters to reduce the discrepancy between the predicted value of the current state \( Q(s_t, a_t; \theta_i) \) (where \( \theta_i = [\theta_R; \theta_A] \)) and the expected Q-value given the reward \( r_t \) and the value of the next state \( \max_{a} Q(s_{t+1}, a; \theta_{i-1}) \).
We keep track of the agent’s previous experiences in a memory \( D \). Instead of performing updates to the Q-value using transitions from the current episode, we sample a random transition \((\hat{s}, \hat{a}, s', r)\) from \( D \). Updating the parameters in this way avoids issues due to strong correlation when using transitions of the same episode (Mnih et al., 2015). Using the sampled transition, we obtain the following loss function to minimize:
$$\mathcal{L}_i(\theta_i) = E_{\hat{s},\hat{a}}[(y_i - Q(\hat{s}, \hat{a}; \theta_i))^2]$$ \hspace{1cm} (3)
where \( y_i = E_{\hat{s},\hat{a}}[r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) | \hat{s}, \hat{a}] \) is the target Q-value with parameters \( \theta_{i-1} \) fixed from the previous iteration.
The updates on the parameters \( \theta \) can be performed using the following gradient of \( \mathcal{L}_i(\theta) \):
$$\nabla_{\theta_i} \mathcal{L}_i(\theta_i) = E_{\hat{s},\hat{a}}[2(y_i - Q(\hat{s}, \hat{a}; \theta_i)) \nabla_{\theta_i} Q(\hat{s}, \hat{a}; \theta_i)]$$
For each epoch of training, the agent plays several episodes of the game, which is restarted after every terminal state.
---
1We also experimented with considering just the output vector of the LSTM after processing the last word. Empirically, we find that mean pooling leads to faster learning, so we use it in all our experiments.
---
3The memory is limited and rewritten in a first-in-first-out (FIFO) fashion.
Mini-batch Sampling In practice, online updates to the parameters $\theta$ are performed over a mini batch of state transitions, instead of a single transition. This increases the number of experiences used per step and is also more efficient due to optimized matrix operations.
The simplest method to create these mini-batches from the experience memory $D$ is to sample uniformly at random. However, certain experiences are more valuable than others for the agent to learn from. For instance, rare transitions that provide positive rewards can be used more often to learn optimal Q-values faster. In our experiments, we consider such positive-reward transitions to have higher priority and keep track of them in $D$. We use prioritized sampling (inspired by Moore and Atkeson (1993)) to sample a fraction $\rho$ of transitions from the higher priority pool and a fraction $1 - \rho$ from the rest.
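A minimal sketch of such a replay memory, assuming a two-pool layout in which positive-reward transitions are kept separately; the class name and the capacity split are implementation assumptions rather than details from the paper.

```python
import random
from collections import deque

class ReplayMemory:
    """Experience memory with prioritized sampling: a fraction rho of every
    mini-batch is drawn from transitions that produced positive reward."""
    def __init__(self, capacity, rho=0.25):
        self.pos = deque(maxlen=capacity // 4)   # positive-reward transitions
        self.rest = deque(maxlen=capacity)       # all other transitions (FIFO overwrite)
        self.rho = rho

    def add(self, s, a, r, s_next):
        pool = self.pos if r > 0 else self.rest
        pool.append((s, a, r, s_next))

    def sample(self, batch_size):
        n_pos = min(int(self.rho * batch_size), len(self.pos))
        batch = random.sample(list(self.pos), n_pos)
        batch += random.sample(list(self.rest), min(batch_size - n_pos, len(self.rest)))
        return batch
```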
5 Experimental Setup
Game Environment For our game environment, we modify Evennia, an open-source library for building online textual MUD games. Evennia is a Python-based framework that allows one to easily create new games by writing a batch file describing the environment with details of rooms, objects and actions. The game engine keeps track of the game state internally, presenting textual descriptions to the player and receiving text commands from the player. We conduct experiments on two worlds - a smaller Home world we created ourselves, and a larger, more complex Fantasy world created by Evennia’s developers. The motivation behind Home world is to abstract away high-level planning and focus on the language understanding requirements of the game.
Table 1 provides statistics of the game worlds. We observe that the Fantasy world is moderately sized with a vocabulary of 1340 words and up to 100 different descriptions for a room. These descriptions were created manually by the game developers. These diverse, engaging descriptions are designed to make the game interesting and exciting for human players. Several rooms have many alternative descriptions, invoked randomly on each visit by the player.

**Algorithm 1** Training Procedure for DQN with prioritized sampling
```
1:  Initialize experience memory $D$
2:  Initialize parameters of representation generator ($\phi_R$) and action scorer ($\phi_A$) randomly
3:  for episode = 1, M do
4:      Initialize game and get start state description $s_1$
5:      for $t = 1, T$ do
6:          Convert $s_t$ (text) to representation $v_{s_t}$ using $\phi_R$
7:          if random() < $\epsilon$ then
8:              Select a random action $a_t$
9:          else
10:             Compute $Q(s_t, a)$ for all actions using $\phi_A(v_{s_t})$
11:             Select $a_t = \arg\max_a Q(s_t, a)$
12:         Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$
13:         Set priority $p_t = 1$ if $r_t > 0$, else $p_t = 0$
14:         Store transition $(s_t, a_t, r_t, s_{t+1}, p_t)$ in $D$
15:         Sample random mini-batch of transitions $(s_j, a_j, r_j, s_{j+1}, p_j)$ from $D$, with fraction $\rho$ having $p_j = 1$
16:         Set $y_j = r_j$ if $s_{j+1}$ is terminal, else $y_j = r_j + \gamma \max_{a'} Q(s_{j+1}, a'; \theta)$
17:         Perform gradient descent step on the loss $\mathcal{L}(\theta) = (y_j - Q(s_j, a_j; \theta))^2$
```
Comparatively, the Home world is smaller: it has a very restricted vocabulary of 84 words and the room descriptions are relatively structured. However, both the room descriptions (which are also varied and randomly provided to the agent) and the quest descriptions were adversarially created with negation and conjunction of facts to force an agent to actually understand the state in order to play well. Therefore, this domain provides an interesting challenge for language understanding.
In both worlds, the agent receives a positive reward on completing a quest, and negative rewards for getting into bad situations like falling off a bridge, or losing a battle. We also add small deterministic negative rewards for each non-terminating step. This incentivizes the agent to learn policies that solve quests in fewer steps. The supplementary material has details on the reward structure.
**Home World** We created *Home world* to mimic the environment of a typical house. The world consists of four rooms - a living room, a bedroom, a kitchen and a garden with connecting pathways. Every room is reachable from every other room. Each room contains a representative object that the agent can interact with. For instance, the kitchen has an *apple* that the player can *eat*. Transitions between the rooms are deterministic. At the start of each game episode, the player is placed in a random room and provided with a randomly selected quest. The text provided to the player contains both the description of her current state and that of the quest. Thus, the player can begin in one of 16 different states (4 rooms $\times$ 4 quests), which adds to the world’s complexity.
An example of a quest given to the player in text is *Not you are sleepy now but you are hungry now*. To complete this quest and obtain a reward, the player has to navigate through the house to reach the kitchen and eat the apple (i.e type in the command *eat apple*). More importantly, the player should interpret that the quest does not require her to take a nap in the bedroom. We created such misguiding quests to make it hard for agents to succeed without having an adequate level of language understanding.
**Fantasy World** The Fantasy world is considerably more complex and involves quests such as navigating through a broken bridge or finding the secret tomb of an ancient hero. This game also has stochastic transitions in addition to varying state descriptions provided to the player. For instance, there is a possibility of the player falling from the bridge if she lingers too long on it.
Due to the large command space in this game we make use of cues provided by the game itself to narrow down the set of possible objects to consider in each state. For instance, in the MUD example in Figure 1, the game provides a list of possible exits. If the game does not provide such clues for the current state, we consider all objects in the game.
**Evaluation** We use two metrics for measuring an agent’s performance: (1) the cumulative reward obtained per episode averaged over the episodes and (2) the fraction of quests completed by the agent. The evaluation procedure is as follows. In each epoch, we first train the agent on $M$ episodes of $T$ steps each. At the end of this training, we have a testing phase of running $M$ episodes of the game for $T$ steps. We use $M = 50$, $T = 20$ for the Home world and $M = 20$, $T = 250$ for the Fantasy world. For all evaluation episodes, we run the agent following an $\epsilon$-greedy policy with $\epsilon = 0.05$, which makes the agent choose the best action according to its Q-values 95% of the time. We report the agent’s performance at each epoch.
**Baselines** We compare our LSTM-DQN model with three baselines. The first is a *Random* agent that chooses both actions and objects uniformly at random from all available choices. The other two are BOW-DQN and BI-DQN, which use a bag-of-words and a bag-of-bigrams representation of the text, respectively, as input to the DQN action scorer. These baselines serve to illustrate the importance of having a good representation layer for the task.
**Settings** For our DQN models, we used $D = 100000$, $\gamma = 0.5$. We use a learning rate of 0.0005 for RMSprop. We anneal the $\epsilon$ for $\epsilon$-greedy from 1 to 0.2 over 100000 transitions. A mini-batch gradient update is performed every 4 steps of the gameplay. We roll out the LSTM (over words) for
---
6 An illustration is provided in the supplementary material.
7 We consider 222 possible command combinations of 6 actions and 37 object arguments.
8 In the case of the Fantasy world, the object choices are narrowed down using game clues as described earlier.
6 Results
**Home World** Figure 3 illustrates the performance of LSTM-DQN compared to the baselines. We can observe that the Random baseline performs quite poorly, completing only around 10% of quests on average, obtaining a low reward of around −1.58. The BOW-DQN model performs significantly better and is able to complete around 46% of the quests, with an average reward of 0.20. The improvement in reward is due to both greater quest success rate and a lower rate of issuing invalid commands (e.g. eat apple would be invalid in the bedroom since there is no apple). We notice that both the reward and quest completion graphs of this model are volatile. This is because the model fails to pick out differences between quests like Not you are hungry now but you are sleepy now and Not you are sleepy now but you are hungry now. The BI-DQN model suffers from the same issue although it performs slightly better than BOW-DQN by completing 48% of quests. In contrast, the LSTM-DQN model does not suffer from this issue and is able to complete 100% of the quests after around 50 epochs of training, achieving close to the optimal reward possible. This demonstrates that having an expressive representation for text is crucial to understanding the game states and choosing intelligent actions. In addition, we also investigated the impact of using a deep neural network for modeling the action scorer $\phi_A$. Figure 4 illustrates the performance of the BOW-DQN and BI-DQN models along with their simpler versions BOW-LIN and BI-LIN, which use a single linear layer for $\phi_A$. It can be seen that the DQN models clearly achieve better performance than their linear counterparts, which points to them modeling the control policy better.
**Fantasy World** We evaluate all the models on the Fantasy world in the same manner as before and report reward, quest completion rates and Q-
---
Averaged over the last 10 epochs.
values. The quest we evaluate on involves crossing the broken bridge (which takes a minimum of five steps), with the possibility of falling off at random (a 5% chance) when the player is on the bridge. The game has an additional quest of reaching a secret tomb. However, this is a complex quest that requires the player to memorize game events and perform high-level planning which are beyond the scope of this current work. Therefore, we focus only on the first quest.
From Figure 3 (bottom), we can see that the Random baseline does poorly in terms of both average per-episode reward and quest completion rates. BOW-DQN converges to a much higher average reward of $-12.68$ and achieves around 82% quest completion. Again, the BOW-DQN is often confused by varying (10 different) descriptions of the portions of the bridge, which reflects in its erratic performance on the quest. The BI-DQN performs very well on quest completion by finishing 97% of quests. However, this model tends to find sub-optimal solutions and gets an average reward of $-26.68$, even worse than BOW-DQN. One reason for this is the negative rewards the agent obtains after falling off the bridge. The LSTM-DQN model again performs best, achieving an average reward of $-11.33$ and completing 96% of quests on average. Though this world does not contain descriptions adversarial to BOW-DQN or BI-DQN, the LSTM-DQN obtains higher average reward by completing the quest in fewer steps and showing more resilience to variations in the state descriptions.
**Transfer Learning** We would like the representations learnt by $\phi_R$ to be generic enough and transferable to new game worlds. To test this, we created a second Home world with the same rooms, but a completely different map, changing the locations of the rooms and the pathways between them. The main differentiating factor of this world from the original home world lies in the high-level planning required to complete quests.
We initialized the LSTM part of an LSTM-DQN agent with parameters $\theta_R$ learnt from the original home world and trained it on the new world. Figure 3 (top right) demonstrates that the agent with transferred parameters is able to learn quicker than an agent starting from scratch initialized with random parameters (No Transfer), reaching the optimal policy almost 20 epochs earlier. This indicates that these simulated worlds can be used to learn good representations for language that transfer across worlds.
**Prioritized sampling** We also investigate the effects of different minibatch sampling procedures on the parameter learning. From Figure 3 (bottom right), we observe that using prioritized sampling significantly speeds up learning, with the agent achieving the optimal policy around 50 epochs faster than using uniform sampling. This shows promise for further research into different schemes of assigning priority to transitions.
**Representation Analysis** We analyzed the representations learnt by the LSTM-DQN model on the Home world. Figure 5 shows a visualization
<table>
<thead>
<tr>
<th>Description</th>
<th>Nearest neighbor</th>
</tr>
</thead>
<tbody>
<tr>
<td>You are halfway out on the unstable bridge. From the castle you hear a distant howling sound, like that of a large dog or other beast.</td>
<td>The bridge slopes precariously where it extends westwards towards the lowest point - the center point of the hang bridge. You clasp the ropes firmly as the bridge sways and creaks under you.</td>
</tr>
<tr>
<td>The ruins open up to the sky in a small open area, lined by columns. ... To the west is the gatehouse and entrance to the castle, whereas southwards the columns make way for a wide open courtyard.</td>
<td>The old gatehouse is near collapse. ... East the gatehouse leads out to a small open area surrounded by the remains of the castle. There is also a standing archway offering passage to a path along the old southern inner wall.</td>
</tr>
</tbody>
</table>
Table 2: Sample descriptions from the Fantasy world and their nearest neighbors (NN) according to their vector representations from the LSTM representation generator. The NNs are often descriptions of the same or similar (nearby) states in the game.
of learnt word embeddings, reduced to two dimensions using t-SNE (Van der Maaten and Hinton, 2008). All the vectors were initialized randomly before training. We can see that semantically similar words appear close together to form coherent subspaces. In fact, we observe four different subspaces, each for one type of room along with its corresponding object(s) and quest words. For instance, food items like pizza and rooms like kitchen are very close to the word hungry which appears in a quest description. This shows that the agent learns to form meaningful associations between the semantics of the quest and the environment. Table [2] shows some examples of descriptions from Fantasy world and their nearest neighbors using cosine similarity between their corresponding vector representations produced by LSTM-DQN. The model is able to correlate descriptions of the same (or similar) underlying states and project them onto nearby points in the representation subspace.
7 Conclusions
We address the task of end-to-end learning of control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language variability makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. Our experiments demonstrate the importance of learning good representations of text in order to play these games well. Future directions include tackling high-level planning and strategy learning to improve the performance of intelligent agents.
Acknowledgements
We are grateful to the developers of Evennia, the game framework upon which this work is based. We also thank Nate Kushman, Clement Gehring, Gustavo Goretkin, members of MIT’s NLP group and the anonymous EMNLP reviewers for insightful comments and feedback. T. Kulkarni was graciously supported by the Leventhal Fellowship. We would also like to acknowledge MIT’s Center for Brains, Minds and Machines (CBMM) for support.
References
An adversary means opposition and competition, but not having an adversary means grief and loneliness.
— Zhuangzi (Chuang-tsu) c. 300 BC
It is possible that the operator could be hit by an asteroid and your $20 could fall off his cardboard box and land on the ground, and while you were picking it up, $5 could blow into your hand. You therefore could win $5 by a simple twist of fate.
— Penn Jillette, explaining how to win at Three-Card Monte (1999)
29 Adversary Arguments
### 29.1 Three-Card Monte
Until Times Square was turned into a glitzy sanitized tourist trap, you could often find dealers stealing tourists' money using a game called “Three Card Monte” or “Spot the Lady”. The dealer shows the tourist three cards, say the Queen of Hearts, the two of spades, and three of clubs. The dealer shuffles the cards face down on a table (usually slowly enough that the tourist can follow the Queen), and then asks the tourist to bet on which card is the Queen. In principle, the tourist's odds of winning are at least one in three, more if the tourist was carefully watching the movement of the cards.
In practice, however, the tourist never wins, because the dealer cheats. The dealer actually holds at least four cards; before he even starts shuffling the cards, the dealer palms the queen or sticks it up his sleeve. No matter what card the tourist bets on, the dealer turns over a black card (which might be the two of clubs, but most tourists won't notice that wasn't one of the original cards). If the tourist gives up, the dealer slides the queen under one of the cards and turns it over, showing the tourist ‘where the queen was all along’. If the dealer is really good, the tourist won't see the dealer changing the cards and will think maybe the queen was there all along and he just wasn’t smart enough to figure that out. As long as the dealer doesn’t reveal all the black cards at once, the tourist has no way to prove that the dealer cheated!
### 29.2 n-Card Monte
Now let’s consider a similar game, but with an algorithm acting as the tourist and with bits instead of cards. Suppose we have an array of $n$ bits and we want to determine if any of them is a 1. Obviously we can figure this out by just looking at every bit, but can we do better? Is there maybe some complicated tricky algorithm to answer the question “Any ones?” without looking at every bit? Well, of course not, but how do we prove it?
The simplest proof technique is called an adversary argument. The idea is that an all-powerful malicious adversary (the dealer) pretends to choose an input for the algorithm (the tourist). When the algorithm wants to look at a bit (a card), the adversary sets that bit to whatever value will make the algorithm do the most work. If the algorithm does not look at enough bits before terminating, then there will be several different inputs, each consistent with the bits already seen,¹
¹Even if the dealer is a sloppy magician, he'll cheat anyway. The dealer is almost always surrounded by shills; these are the “tourists” who look like they’re actually winning, who turn over cards when the dealer “isn’t looking”, who casually mention how easy the game is to win, and so on. The shills physically protect the dealer from any angry tourists who notice the dealer cheating, and shake down any tourists who refuse to pay after making a bet. Really, you cannot win this game, ever.
that should result in different outputs. Whatever the algorithm outputs, the adversary can 'reveal' an input that is consistent with all the examined bits but contradicts the algorithm's output, and then claim that that was the input he was using all along. Since the only information the algorithm has is the set of bits it examined, the algorithm cannot distinguish between a malicious adversary and an honest user who actually chooses an input in advance and answers all queries truthfully.
For the \( n \)-card monte problem, the adversary originally pretends that the input array is all zeros—whenever the algorithm looks at a bit, it sees a 0. Now suppose the algorithm stops before looking at all \( n \) bits. If the algorithm says 'No, there's no 1,' the adversary changes one of the unexamined bits to a 1 and shows the algorithm that it's wrong. If the algorithm says 'Yes, there's a 1,' the adversary reveals the array of zeros and again proves the algorithm wrong. Either way, the algorithm cannot tell that the adversary has cheated.
One absolutely crucial feature of this argument is that the adversary makes absolutely no assumptions about the algorithm. The adversary strategy can't depend on some predetermined order of examining bits, and it doesn't care about anything the algorithm might or might not do when it's not looking at bits. Any algorithm that doesn't examine every bit falls victim to the adversary.
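For concreteness, here is a small Python sketch of this adversary. The class and method names are mine; `refute` simply exhibits an input that is consistent with every answer given so far but contradicts the algorithm's output.

```python
class OnesAdversary:
    """Adversary for the 'any ones?' problem: every examined bit is reported as 0."""
    def __init__(self, n):
        self.n = n
        self.examined = set()

    def examine(self, i):
        self.examined.add(i)
        return 0

    def refute(self, algorithm_says_all_zeros):
        """Reveal an input consistent with all answers that contradicts the algorithm."""
        bits = [0] * self.n
        if algorithm_says_all_zeros:
            unexamined = set(range(self.n)) - self.examined
            assert unexamined, "the algorithm examined every bit; it cannot be fooled"
            bits[next(iter(unexamined))] = 1   # sneak in a 1 the algorithm never saw
        return bits                            # otherwise the all-zeros input already refutes it
```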
### 29.3 Finding Patterns in Bit Strings
Let's make the problem a little more complicated. Suppose we're given an array of \( n \) bits and we want to know if it contains the substring 01, a zero followed immediately by a one. Can we answer this question without looking at every bit?
It turns out that if \( n \) is odd, we don't have to look at all the bits. First we look at the bits in every even position: \( B[2], B[4], \ldots, B[n-1] \). If we see \( B[i] = 0 \) and \( B[j] = 1 \) for any \( i < j \), then we know the pattern 01 is in there somewhere—starting at the last 0 before \( B[j] \)—so we can stop without looking at any more bits. If we see only 1s followed by 0s, we don't have to look at the bit between the last 1 and the first 0. If every even bit is a 0, we don't have to look at \( B[1] \), and if every even bit is a 1, we don't have to look at \( B[n] \). In the worst case, our algorithm looks at only \( n-1 \) of the \( n \) bits.
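A Python sketch of this probing scheme for odd \( n \); the list is 0-indexed in code, so the 'even positions' \( B[2], B[4], \ldots \) of the text correspond to indices 1, 3, ... below, and the function and helper names are mine.

```python
def contains_01(B):
    """Decide whether the bit list B (odd length) contains the substring 01,
    probing at most len(B) - 1 positions."""
    n = len(B)
    assert n % 2 == 1
    probed = {}

    def probe(i):
        if i not in probed:
            probed[i] = B[i]
        return probed[i]

    # Phase 1: probe B[2], B[4], ..., B[n-1] (indices 1, 3, ..., n-2 here).
    evens = list(range(1, n - 1, 2))
    saw_zero = False
    for i in evens:
        if probe(i) == 0:
            saw_zero = True
        elif saw_zero:           # a 0 at an even position precedes a 1 at an even position
            return True

    # Phase 2: the even bits look like 1...10...0; choose the single index to skip.
    ones = [i for i in evens if probed[i] == 1]
    zeros = [i for i in evens if probed[i] == 0]
    if ones and zeros:
        skip = ones[-1] + 1      # the bit between the last even 1 and the first even 0
    elif zeros:
        skip = 0                 # every even bit is 0: skip B[1]
    else:
        skip = n - 1             # every even bit is 1: skip B[n]

    for i in range(n):
        if i != skip:
            probe(i)

    # The skipped bit provably cannot create a 01 with its neighbors.
    return any(probed.get(i) == 0 and probed.get(i + 1) == 1 for i in range(n - 1))
```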
But what if \( n \) is even? In that case, we can use the following adversary strategy to show that any algorithm does have to look at every bit. The adversary will attempt to produce an 'input' string \( B \) without the substring 01; all such strings have the form 11...100...0. The adversary maintains two indices \( \ell \) and \( r \) and pretends that the prefix \( B[1..\ell] \) contains only 1s and the suffix \( B[r..n] \) contains only 0s. Initially \( \ell = 0 \) and \( r = n+1 \).
The adversary maintains the invariant that \( r - \ell - 1 \), the length of the undecided portion of the 'input' string, is even. When the algorithm looks at a bit between \( \ell \) and \( r \), the adversary chooses whichever value preserves the parity of the intermediate chunk of the array, and then moves either \( \ell \) or \( r \). Specifically, here's what the adversary does when the algorithm examines bit \( B[i] \). (Note that I'm specifying the adversary strategy as an algorithm!)
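The pseudocode figure referred to here does not survive in this copy. The sketch below implements one parity-preserving strategy consistent with the description (it keeps the undecided middle of even length); it is a reconstruction with names of my choosing, not necessarily the exact algorithm from the original figure.

```python
class PatternAdversary:
    """Adversary for the 01-pattern problem on n bits, n even.

    Pretends the input has the form 11...100...0: B[1..l] is all 1s,
    B[r..n] is all 0s, and the undecided middle B[l+1..r-1] always has
    even length (positions are 1-indexed, as in the text)."""
    def __init__(self, n):
        assert n % 2 == 0
        self.n = n
        self.l = 0
        self.r = n + 1

    def examine(self, i):
        if i <= self.l:
            return 1                     # already committed to a 1 here
        if i >= self.r:
            return 0                     # already committed to a 0 here
        if (i - self.l) % 2 == 0:
            self.l = i                   # extend the pretended prefix of 1s
            return 1
        else:                            # then r - i is even, since r - l is odd
            self.r = i                   # extend the pretended suffix of 0s
            return 0
```

Either branch keeps \( r - \ell - 1 \) even, and any unexamined bits inside the committed prefix and suffix remain available for the adversary to flip, exactly as the argument below requires.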
It’s fairly easy to prove that this strategy forces the algorithm to examine every bit. If the algorithm doesn’t look at every bit to the right of \( r \), the adversary could replace some unexamined bit with a 1. Similarly, if the algorithm doesn’t look at every bit to the left of \( \ell \), the adversary could replace some unexamined bit with a zero. Finally, if there are any unexamined bits between \( \ell \) and \( r \), there must be at least two such bits (since \( r - \ell - 1 \) is always even) and the adversary can put a 01 in the gap.
In general, we say that a bit pattern is **evasive** if we have to look at every bit to decide if a string of \( n \) bits contains the pattern. So the pattern 1 is evasive for all \( n \), and the pattern 01 is evasive if and only if \( n \) is even. It turns out that the **only** patterns that are evasive for all values of \( n \) are the one-bit patterns 0 and 1.
### 29.4 Evasive Graph Properties
Another class of problems for which adversary arguments give good lower bounds is graph problems where the graph is represented by an adjacency matrix, rather than an adjacency list. Recall that the adjacency matrix of an undirected \( n \)-vertex graph \( G = (V, E) \) is an \( n \times n \) matrix \( A \), where \( A[i, j] = [(i, j) \in E] \). We are interested in deciding whether an undirected graph has or does not have a certain property. For example, is the input graph connected? Acyclic? Planar? Complete? A tree? We call a graph property **evasive** if we have to look at all \( n^2 \) entries in the adjacency matrix to decide whether a graph has that property.
An obvious example of an evasive graph property is **emptiness**: Does the graph have any edges at all? We can show that emptiness is evasive using the following simple adversary strategy. The adversary maintains two graphs \( E \) and \( G \). \( E \) is just the empty graph with \( n \) vertices. Initially, \( G \) is the complete graph on \( n \) vertices. Whenever the algorithm asks about an edge, the adversary removes that edge from \( G \) (unless it’s already gone) and answers ‘no’. If the algorithm terminates without examining every edge, then \( G \) is not empty. Since both \( G \) and \( E \) are consistent with all the adversary’s answers, the algorithm must give the wrong answer for one of the two graphs.
### 29.5 Connectedness Is Evasive
Now let me give a more complicated example, **connectedness**. Once again, the adversary maintains two graphs, \( Y \) and \( M \) (‘yes’ and ‘maybe’). \( Y \) contains all the edges that the algorithm knows are definitely in the input graph. \( M \) contains all the edges that the algorithm thinks might be in the input graph, or in other words, all the edges of \( Y \) plus all the unexamined edges. Initially, \( Y \) is empty and \( M \) is complete.
Here’s the strategy the adversary follows when the algorithm asks whether the input graph contains the edge \( e \). I’ll assume that whenever an algorithm examines an edge, it’s in \( M \) but not in \( Y \); in other words, algorithms never ask about the same edge more than once.
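The adversary's decision rule appears only as a figure in the original notes; the sketch below implements the natural rule consistent with the invariants listed next: answer 'no' and delete the edge from \( M \) whenever \( M \) stays connected without it, otherwise answer 'yes' and add the edge to \( Y \). Class and helper names are mine.

```python
import itertools

class ConnectednessAdversary:
    """Y = edges definitely present; M = Y plus all unexamined edges."""
    def __init__(self, n):
        self.n = n
        self.Y = set()
        self.M = {frozenset(e) for e in itertools.combinations(range(n), 2)}

    def _connected(self, edges):
        parent = list(range(self.n))             # tiny union-find connectivity check
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for e in edges:
            u, v = tuple(e)
            parent[find(u)] = find(v)
        return len({find(v) for v in range(self.n)}) == 1

    def ask(self, u, v):
        """Answer whether edge {u, v} is in the 'input' graph (assumed not asked before)."""
        e = frozenset((u, v))
        if self._connected(self.M - {e}):
            self.M.discard(e)                    # safe to deny: M stays connected
            return False
        self.Y.add(e)                            # e is a bridge of M, so it must be admitted
        return True
```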
Notice that the graphs \( Y \) and \( M \) are both consistent with the adversary’s answers at all times. The adversary strategy maintains a few other simple invariants.
- **\( Y \) is a subgraph of \( M \).** This is obvious.
- **\( M \) is connected.** This is also obvious.
- **If \( M \) has a cycle, none of the edges in that cycle are in \( Y \).** If \( M \) has a cycle, then deleting any edge in that cycle leaves \( M \) connected, so the adversary would have answered ‘no’ for such an edge (removing it from \( M \)) rather than adding it to \( Y \).
- **\( Y \) is acyclic.** This follows directly from the previous invariant.
- **If \( Y \neq M \), then \( Y \) is disconnected.** The only connected acyclic graph is a tree. Suppose \( Y \) is a tree and some edge \( e \) is in \( M \) but not in \( Y \). Then there is a cycle in \( M \) that contains \( e \), all of whose other edges are in \( Y \). This violates our third invariant.
We can also think about the adversary strategy in terms of minimum spanning trees. Recall the anti-Kruskal algorithm for computing the maximum spanning tree of a graph: Consider the edges one at a time in increasing order of length. If removing an edge would disconnect the graph, declare it part of the spanning tree (by adding it to \( Y \)); otherwise, throw it away (by removing it from \( M \)). If the algorithm examines all \( \binom{n}{2} \) possible edges, then \( Y \) and \( M \) are both equal to the maximum spanning tree of the complete \( n \)-vertex graph, where the weight of an edge is the time when the algorithm asked about it.
Now, if an algorithm terminates before examining all \( \binom{n}{2} \) edges, then there is at least one edge in \( M \) that is not in \( Y \). Since the algorithm cannot distinguish between \( M \) and \( Y \), even though \( M \) is connected and \( Y \) is not, the algorithm cannot possibly give the correct output for both graphs. Thus, in order to be correct, any algorithm must examine every edge—Connectedness is evasive!
### 29.6 An Evasive Conjecture
A graph property is **nontrivial** if there is at least one graph with the property and at least one graph without the property. (The only trivial properties are ‘Yes’ and ‘No.’) A graph property is **monotone** if it is closed under taking subgraphs — if \( G \) has the property, then any subgraph of \( G \) has the property. For example, emptiness, planarity, acyclicity, and non-connectedness are monotone. The properties of being a tree and of having a vertex of degree 3 are not monotone.
**Conjecture 1 (Aanderaa, Karp, and Rosenberg).** Every nontrivial monotone property of \( n \)-vertex graphs is evasive.
The Aanderaa-Karp-Rosenberg conjecture has been proven when \( n = p^e \) for some prime \( p \) and positive integer exponent \( e \)—the proof uses some interesting results from algebraic topology\(^2\)—but it is still open for other values of \( n \).
\(^2\)Let \( \Delta \) be a contractible simplicial complex whose automorphism group \( \text{Aut}(\Delta) \) is vertex-transitive, and let \( \Gamma \) be a vertex-transitive subgroup of \( \text{Aut}(\Delta) \). If there are normal subgroups \( \Gamma_1 < \Gamma_2 < \Gamma \) such that \( |\Gamma_1| = p^a \) for some prime \( p \) and integer \( a \), \( |\Gamma/\Gamma_1| = q^b \) for some prime \( q \) and integer \( b \), and \( \Gamma_2/\Gamma_1 \) is cyclic, then \( \Delta \) is a simplex.
No, this will not be on the final exam.
There are non-trivial non-evasive graph properties, but all known examples are non-monotone. One such property—‘scorpionhood’—is described in an exercise at the end of this lecture note.
### 29.7 Finding the Minimum and Maximum
Last time, we saw an adversary argument that finding the largest element of an unsorted set of \( n \) numbers requires at least \( n - 1 \) comparisons. Let’s consider the complexity of finding the largest and smallest elements. More formally:
Given a sequence \( X = \langle x_1, x_2, \ldots, x_n \rangle \) of \( n \) distinct numbers, find indices \( i \) and \( j \) such that \( x_i = \min X \) and \( x_j = \max X \).
How many comparisons do we need to solve this problem? An upper bound of \( 2n - 3 \) is easy: find the minimum in \( n - 1 \) comparisons, and then find the maximum of everything else in \( n - 2 \) comparisons. Similarly, a lower bound of \( n - 1 \) is easy, since any algorithm that finds the min and the max certainly finds the max.
We can improve both the upper and the lower bound to \( \lceil \frac{3n}{2} \rceil - 2 \). The upper bound is established by the following algorithm. Compare all \( \lfloor \frac{n}{2} \rfloor \) consecutive pairs of elements \( x_{2i-1} \) and \( x_{2i} \), and put the smaller element into a set \( S \) and the larger element into a set \( L \). If \( n \) is odd, put \( x_n \) into both \( L \) and \( S \). Then find the smallest element of \( S \) and the largest element of \( L \). The total number of comparisons is at most
\[
\left\lfloor \frac{n}{2} \right\rfloor + \left(\left\lceil \frac{n}{2} \right\rceil - 1\right) + \left(\left\lceil \frac{n}{2} \right\rceil - 1\right) = \left\lceil \frac{3n}{2} \right\rceil - 2.
\]
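A direct sketch of this pairing scheme; Python's built-in `min` and `max` stand in for the final \( |S| - 1 \) and \( |L| - 1 \) comparisons.

```python
def min_and_max(x):
    """Find (min, max) of a list of distinct numbers with at most ceil(3n/2) - 2 comparisons."""
    S, L = [], []                         # candidate minima and candidate maxima
    for i in range(0, len(x) - 1, 2):
        if x[i] < x[i + 1]:               # one comparison per consecutive pair
            S.append(x[i]); L.append(x[i + 1])
        else:
            S.append(x[i + 1]); L.append(x[i])
    if len(x) % 2 == 1:                   # odd n: the last element joins both sets
        S.append(x[-1]); L.append(x[-1])
    return min(S), max(L)
```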
For the lower bound, we use an adversary argument. The adversary marks each element + if it might be the maximum element, and − if it might be the minimum element. Initially, the adversary puts both marks + and − on every element. If the algorithm compares two double-marked elements, then the adversary declares one smaller, removes the + mark from the smaller element, and removes the − mark from the larger one. In every other case, the adversary can answer so that at most one mark needs to be removed. For example, if the algorithm compares a double-marked element against one labeled −, the adversary says the one labeled − is smaller and removes the − mark from the other. If the algorithm compares two +’s, the adversary must unmark one of the two.
Initially, there are \( 2n \) marks. At the end, in order to be correct, exactly one item must be marked + and exactly one other must be marked −, since the adversary can make any + the maximum and any − the minimum. Thus, the algorithm must force the adversary to remove \( 2n - 2 \) marks. At most \( \lfloor n/2 \rfloor \) comparisons remove two marks; every other comparison removes at most one mark. Thus, the adversary strategy forces any algorithm to perform at least \( 2n - 2 - \lfloor n/2 \rfloor = \lceil 3n/2 \rceil - 2 \) comparisons.
### 29.8 Finding the Median
Finally, let’s consider the median problem: Given an unsorted array \( X \) of \( n \) numbers, find its \( n/2 \)th largest entry. (I’ll assume that \( n \) is even to eliminate pesky floors and ceilings.) More formally:
Given a sequence \( \langle x_1, x_2, \ldots, x_n \rangle \) of \( n \) distinct numbers, find the index \( m \) such that \( x_m \) is the \( n/2 \)th largest element in the sequence.
To prove a lower bound for this problem, we can use a combination of information theory and two adversary arguments. We use one adversary argument to prove the following simple lemma:
**Lemma 1.** Any comparison tree that correctly finds the median element also identifies the elements smaller than the median and larger than the median.
**Proof:** Suppose we reach a leaf of a decision tree that chooses the median element $x_m$, and there is still some element $x_i$ that isn’t known to be larger or smaller than $x_m$. In other words, we cannot decide based on the comparisons that we’ve already performed whether $x_i < x_m$ or $x_i > x_m$. Then in particular no element is known to lie between $x_i$ and $x_m$. This means that there must be an input that is consistent with the comparisons we’ve performed, in which $x_i$ and $x_m$ are adjacent in sorted order. But then we can swap $x_i$ and $x_m$, without changing the result of any comparison, and obtain a different consistent input in which $x_i$ is the median, not $x_m$. Our decision tree gives the wrong answer for this ‘swapped’ input. □
This lemma lets us rephrase the median-finding problem yet again.
Given a sequence $X = \langle x_1, x_2, \ldots, x_n \rangle$ of $n$ distinct numbers, find the indices of its $n/2 - 1$ largest elements $L$ and its $n/2$th largest element $x_m$.
Now suppose a ‘little birdie’ tells us the set $L$ of elements larger than the median. This information fixes the outcomes of certain comparisons—any item in $L$ is bigger than any element not in $L$—so we can ‘prune’ those comparisons from the comparison tree. The pruned tree finds the largest element of $X \setminus L$ (the median of $X$), and thus must have depth at least $n/2 - 1$. In fact, the adversary argument in the last lecture implies that every leaf in the pruned tree must have depth at least $n/2 - 1$, so the pruned tree has at least $2^{n/2-1}$ leaves.
There are $\binom{n}{n/2-1} \approx 2^n / \sqrt{\pi n/2}$ possible choices for the set $L$. Every leaf in the original comparison tree is also a leaf in exactly one of the $\binom{n}{n/2-1}$ pruned trees, so the original comparison tree must have at least $\binom{n}{n/2-1}\, 2^{n/2-1} \approx 2^{3n/2} / \sqrt{2\pi n}$ leaves. Thus, any comparison tree that finds the median must have depth at least
$$\left\lceil \frac{n}{2} - 1 + \lg \binom{n}{n/2-1} \right\rceil = \frac{3n}{2} - O(\log n).$$
A more complicated adversary argument (also involving pruning the comparison tree with little birdies) improves this lower bound to $2n - o(n)$.
A similar argument implies that at least $n-k+\lceil \lg \binom{n}{k-1} \rceil = \Omega((n-k)+k \log(n/k))$ comparisons are required to find the $k$th largest element in an $n$-element set. This bound is tight up to constant factors for all $k \leq n/2$; there is an algorithm that uses at most $O(n + k \log(n/k))$ comparisons. Moreover, this lower bound is exactly tight when $k = 1$ or $k = 2$. In fact, these are the only values of $k \leq n/2$ for which the exact complexity of the selection problem is known. Even the case $k = 3$ is still open!
**Exercises**
1. (a) Let $X$ be a set containing an odd number of $n$-bit strings. Prove that any algorithm that decides whether a given $n$-bit string is an element of $X$ must examine every bit of the input string in the worst case.
(b) Give a one-line proof that the bit pattern 01 is evasive for all even $n$.
(c) Prove that the bit pattern 11 is evasive if and only if \( n \mod 3 = 1 \).
*(d) Prove that the bit pattern 111 is evasive if and only if \( n \mod 4 = 0 \) or 3.
2. Suppose we are given the adjacency matrix of a directed graph \( G \) with \( n \) vertices. Describe an algorithm that determines whether \( G \) has a sink by probing only \( O(n) \) bits in the input matrix. A sink is a vertex that has an incoming edge from every other vertex, but no outgoing edges.
*3. A scorpion is an undirected graph with three special vertices: the sting, the tail, and the body. The sting is connected only to the tail; the tail is connected only to the sting and the body; and the body is connected to every vertex except the sting. The rest of the vertices (the head, eyes, legs, antennae, teeth, gills, flippers, wheels, etc.) can be connected arbitrarily. Describe an algorithm that determines whether a given \( n \)-vertex graph is a scorpion by probing only \( O(n) \) entries in the adjacency matrix.
4. Prove using an adversary argument that acyclicity is an evasive graph property. [Hint: Kruskal.]
5. Prove that finding the second largest element in an \( n \)-element array requires exactly \( n - 2 + \lceil \log n \rceil \) comparisons in the worst case. Prove the upper bound by describing and analyzing an algorithm; prove the lower bound using an adversary argument.
6. Let \( T \) be a perfect ternary tree where every leaf has depth \( \ell \). Suppose each of the \( 3^\ell \) leaves of \( T \) is labeled with a bit, either 0 or 1, and each internal node is labeled with a bit that agrees with the majority of its children.
(a) Prove that any deterministic algorithm that determines the label of the root must examine all \( 3^\ell \) leaf bits in the worst case.
(b) Describe and analyze a randomized algorithm that determines the root label, such that the expected number of leaves examined is \( o(3^\ell) \). (You may want to review the notes on randomized algorithms.)
*7. UIUC has just finished constructing the new Reingold Building, the tallest dormitory on campus. In order to determine how much insurance to buy, the university administration needs to determine the highest safe floor in the building. A floor is considered safe if a drunk student (or, for testing purposes, an egg) can fall from a window on that floor and land without breaking; if the egg breaks, the floor is considered unsafe. Any floor that is higher than an unsafe floor is also considered unsafe. The only way to determine whether a floor is safe is to drop an egg from a window on that floor.
You would like to find the lowest unsafe floor \( L \) by performing as few tests as possible; unfortunately, you have only a very limited supply of eggs.
(a) Prove that if you have only one egg, you can find the lowest unsafe floor with \( L \) tests.
[Hint: Yes, this is trivial.]
(b) Prove that if you have only one egg, you must perform at least $L$ tests in the worst case. In other words, prove that your algorithm from part (a) is optimal. [Hint: Use an adversary argument.]
(c) Describe an algorithm to find the lowest unsafe floor using two eggs and only $O(\sqrt{L})$ tests. [Hint: Ideally, each egg should be dropped the same number of times. How many floors can you test with $n$ drops?]
(d) Prove that if you start with two eggs, you must perform at least $\Omega(\sqrt{L})$ tests in the worst case. In other words, prove that your algorithm from part (c) is optimal.
*(e) Describe an algorithm to find the lowest unsafe floor using $k$ eggs, using as few tests as possible, and prove your algorithm is optimal for all values of $k$.
CMSC424: Database Design
Instructor: Amol Deshpande
amol@cs.umd.edu
Databases
- Data Models
- Conceptual representation of the data
- Data Retrieval
- How to ask questions of the database
- How to answer those questions
- Data Storage
- How/where to store data, how to access it
- Data Integrity
- Manage crashes, concurrency
- Manage semantic inconsistencies
Query Optimization
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Statistics Estimation
- Optimization Algorithms
Why?
- Many different ways of executing a given query
- Huge differences in cost
Example:
- `select * from person where ssn = "123"`
- Size of `person` = 1GB
- Sequential Scan:
- Takes 1GB / (20MB/s) = 50s
- Use an index on SSN (assuming one exists):
- Approx 4 Random I/Os = 40ms
Query Optimization
- Many choices
- Using indexes or not, which join method (hash, vs merge, vs NL)
- What join order?
- Given a join query on R, S, T, should I join R with S first, or S with T first?
- This is an optimization problem
- Similar to say traveling salesman problem
- Number of different choices is very very large
- Step 1: Figuring out the solution space
- Step 2: Finding algorithms/heuristics to search through the solution space
---
Query Optimization
- Equivalent relational expressions
- Drawn as a tree
- List the operations and the order

Query Optimization
- Execution plans
- Evaluation expressions annotated with the methods used
```
π_{customer_name}(σ_{branch.city = Brooklyn}(use index 1) ∧ σ_{balance < 1000}(use linear scan))
```
Query Optimization
**Steps:**
- Generate all possible execution plans for the query
- First generate all equivalent expressions
- Then consider all annotations for the operations
- Figure out the cost for each of them
- Compute cost for each operation
- Using the formulas discussed before
- One problem: How do we know the number of result tuples for, say, $\sigma_{\text{balance}>2500}(\text{account})$
- Add them!
- Choose the best
---
**Query Optimization**
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Statistics Estimation
- Optimization Algorithms
A Simple Case
- Queries with only selections on a single relation with no indexes
```sql
select *
from person
where substr(name, 1, 1) in ('A', 'B', 'C') and zipcode = 94720 and
date-of-birth > to_date('1978/05/31', 'yyyy/mm/dd')
```
**CPU Costs (per tuple):**
- `substr` predicate: 100ns
- `zipcode` predicate: 1ns
- `date-of-birth` predicate: 1000ns
- Relation contains:
- 1,000,000 tuples
A Simple Case
- Possible execution plan:
- For each tuple
- Evaluate `substr predicate`
- If true, evaluate `zipcode predicate`
- If true, evaluate `date-of-birth predicate`
- If true, output the tuple
- 6 different possibilities
- How to choose one?
A Simple Case
- Compute cost of each possibility
- Say, substr() $\rightarrow$ zipcode $\rightarrow$ date-of-birth
- Need some more information
- selectivity: fraction of tuples expected to pass the predicates
- Let $\text{selectivity}(\text{substr predicate}) = \frac{3}{26}$
- Let $\text{selectivity}(\text{zipcode predicate}) = \frac{1}{100}$
- And, $\text{selectivity}(\text{date-of-birth predicate}) = \frac{1}{3}$
- How are selectivities computed?
- Must keep track of some additional information about the relations
- Given that:
- Cost of the above plan =
- $1,000,000 \times 100\text{ns}$
- $+ 1,000,000 \times \frac{3}{26} \times 1\text{ns}$
- $+ 1,000,000 \times \frac{1}{26} \times \frac{1}{100} \times 1000\text{ns}$
- $= \text{approx} 100.5\text{ ms}$
- Cost of the plan: zipcode $\rightarrow$ substr() $\rightarrow$ date-of-birth:
- Approx 12.92 ms
- About a factor of 10 better.
A Simple Case
- General algorithm:
- Don’t need to check all $n!$ possibilities
- Sort the predicates in the decreasing order by rank:
$\frac{1 - \text{selectivity(predicate)}}{\text{cost of the predicate}}$
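The rank heuristic is easy to express in code. The sketch below is illustrative (not from the slides): it orders the predicates by decreasing rank and prices the resulting plan under the independence assumption, using the per-tuple predicate costs read off the cost calculation above. Class and field names are made up, and the exact millisecond figure depends entirely on those assumed constants.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PredicateOrdering {
    static class Pred {
        final String name;
        final double selectivity; // fraction of tuples expected to survive
        final double costNs;      // per-tuple evaluation cost in nanoseconds
        Pred(String name, double selectivity, double costNs) {
            this.name = name; this.selectivity = selectivity; this.costNs = costNs;
        }
        double rank() { return (1 - selectivity) / costNs; }
    }

    // Expected cost (in ns) of applying the predicates in the given order to n tuples,
    // assuming the selectivities are independent.
    static double planCostNs(List<Pred> order, long n) {
        double cost = 0, survivors = n;
        for (Pred p : order) {
            cost += survivors * p.costNs;   // every surviving tuple pays this predicate's cost
            survivors *= p.selectivity;     // only a fraction survives to the next predicate
        }
        return cost;
    }

    public static void main(String[] args) {
        List<Pred> preds = new ArrayList<>(List.of(
                new Pred("substr",        3.0 / 26,  100),
                new Pred("zipcode",       1.0 / 100, 1),
                new Pred("date-of-birth", 1.0 / 3,   1000)));
        // Heuristic from the slide: sort by decreasing rank = (1 - selectivity) / cost
        preds.sort(Comparator.comparingDouble(Pred::rank).reversed());
        for (Pred p : preds) System.out.println(p.name);
        System.out.printf("estimated cost: %.2f ms%n", planCostNs(preds, 1_000_000) / 1e6);
    }
}
```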
Query Optimization
- General case:
- Need:
- A way to enumerate all plans
- A way to find the cost of each plan
- Sub problem: Estimating the selectivities of various operations
- A way to search through the plans efficiently
Query Optimization
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Statistics Estimation
- Optimization Algorithms
Equivalence of Expressions
- Two relational expressions equivalent iff:
- Their result is identical on all legal databases
- Equivalence rules:
- Allow replacing one expression with another
- Examples:
1. $\sigma_{\theta_1 \land \theta_2}(E) = \sigma_{\theta_1}(\sigma_{\theta_2}(E))$
2. Selections are commutative
$\sigma_{\theta_1}(\sigma_{\theta_2}(E)) = \sigma_{\theta_2}(\sigma_{\theta_1}(E))$
Equivalence Rules
- Examples:
3. \( \Pi_{L_1}(\Pi_{L_2}(\ldots(\Pi_{L_n}(E))\ldots)) = \Pi_{L_1}(E) \), where \( L_1 \subseteq L_2 \subseteq \cdots \subseteq L_n \) are attribute lists
5. \( E_1 \bowtie_\theta E_2 = E_2 \bowtie_\theta E_1 \)
7(a). If \( \theta_0 \) only involves attributes from \( E_1 \),
\[
\sigma_{\theta_0}(E_1 \bowtie_\theta E_2) = (\sigma_{\theta_0}(E_1)) \bowtie_\theta E_2
\]
- And so on…
- Many rules of this type
Example
- Find the names of all customers with an account at a Brooklyn branch whose account balance is over $1000.
\[ \Pi_{\text{customer\_name}}(\sigma_{\text{branch\_city} = \text{"Brooklyn"} \land \text{balance} > 1000}(\text{branch} \Join (\text{account} \Join \text{depositor}))) \]
- Apply the rules one by one
\[ \Pi_{\text{customer\_name}}((\sigma_{\text{branch\_city} = \text{"Brooklyn"} \land \text{balance} > 1000}(\text{branch} \Join \text{account})) \Join \text{depositor}) \]
\[ \Pi_{\text{customer\_name}}(((\sigma_{\text{branch\_city} = \text{"Brooklyn"}}(\text{branch})) \Join (\sigma_{\text{balance} > 1000}(\text{account}))) \Join \text{depositor}) \]
Example
(a) Initial expression tree
(b) Tree after multiple transformations
Equivalence of Expressions
- The rules give us a way to enumerate all equivalent expressions
- Note that the expressions don’t contain physical access methods, join methods etc...
- Simple Algorithm:
- Start with the original expression
- Apply all possible applicable rules to get a new set of expressions
- Repeat with this new set of expressions
- Till no new expressions are generated
Equivalence of Expressions
- Works, but is not feasible
- Consider a simple case:
- \( R_1 \Join (R_2 \Join (R_3 \Join \cdots \Join R_n)) \)
- Just join commutativity and associativity will give us:
- At least:
- \( n^2 \cdot 2^n \)
- At worst:
- \( n! \cdot 2^n \)
- Typically the process of enumeration is combined with the search process
Evaluation Plans
- We still need to choose the join methods etc..
- Option 1: Choose for each operation separately
- Usually okay, but sometimes the operators interact
- Consider joining three relations on the same attribute:
- $R_1 \bowtie_a (R_2 \bowtie_a R_3)$
- Best option for $R_2$ join $R_3$ might be hash-join
- But if $R_1$ is sorted on $a$, then sort-merge join is preferable
- Because it produces the result in sorted order by $a$
- Also, we need to decide whether to use pipelining or materialization
- Such issues are typically taken into account when doing the optimization
Query Optimization
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Optimization Algorithms
- Statistics Estimation
Optimization Algorithms
- Two types:
- Exhaustive: That attempt to find the best plan
- Heuristical: That are simpler, but are not guaranteed to find the optimal plan
- Consider a simple case
- Join of the relations $R_1, \ldots, R_n$
- No selections, no projections
- Still very large plan space
Searching for the best plan
- **Option 1:**
- Enumerate all equivalent expressions for the original query expression
- Using the rules outlined earlier
- Estimate cost for each and choose the lowest
- Too expensive!
- Consider finding the best join-order for $r_1 \Join r_2 \Join \ldots \Join r_n$.
- There are $(2(n - 1))!/(n - 1)!$ different join orders for the above expression. With $n = 7$, the number is 665,280; with $n = 10$, it is greater than 17.6 billion!
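A quick, illustrative check of these counts (not part of the slides) is to evaluate the formula directly:

```java
import java.math.BigInteger;

// Counting join orders: (2(n-1))! / (n-1)! for a join of n relations.
public class JoinOrderCount {
    static BigInteger factorial(int k) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= k; i++) f = f.multiply(BigInteger.valueOf(i));
        return f;
    }
    public static void main(String[] args) {
        for (int n : new int[]{7, 10})
            System.out.println("n = " + n + ": " +
                    factorial(2 * (n - 1)).divide(factorial(n - 1)));
    }
}
```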
Searching for the best plan
- **Option 2:**
- Dynamic programming
- There is too much commonality between the plans
- Also, costs are additive
- Caveat: Sort orders (also called “interesting orders”)
- Reduces the cost down to $O(n3^n)$ or $O(n2^n)$ in most cases
- Interesting orders increase this a little bit
- Considered acceptable
- Typically $n < 10$.
- Switch to heuristic if not acceptable
Dynamic Programming Algo.
- Join R1, R2, R3, R4, R5
<table>
<thead>
<tr>
<th>R1 $\Join$ R2</th>
<th>R1 $\Join$ R3</th>
<th>R1 $\Join$ R4</th>
<th>R4 $\Join$ R5</th>
</tr>
</thead>
<tbody>
<tr>
<td>cost: 100</td>
<td>cost: 300</td>
<td>cost: 300</td>
<td>cost: 300</td>
</tr>
<tr>
<td>plan: HJ</td>
<td>plan: SMJ</td>
<td></td>
<td>plan: HJ</td>
</tr>
</tbody>
</table>
Options:
1. Join R1R2 with R3 using HJ
cost = 100 + cost of this join
2. Join R1R2 with R3 using SMJ
cost = 100 + cost of this join
3. Join R1R3 with R2 using HJ
cost = 300 + cost of this join
...
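The sketch below shows the memoization pattern behind this dynamic program. It is illustrative only: the cost model (cost of a join = estimated output size, with one fixed selectivity factor per join predicate) and all relation sizes are made up, and a real optimizer would also track join methods and interesting orders.

```java
import java.util.Arrays;

// Toy dynamic program over subsets of relations: best[S] is the cheapest cost of
// joining exactly the relations in S, built up from smaller subsets.
public class JoinOrderDP {
    static double outputSize(int subset, double[] relSize, double selectivity) {
        double size = 1;
        int rels = 0;
        for (int i = 0; i < relSize.length; i++)
            if ((subset & (1 << i)) != 0) { size *= relSize[i]; rels++; }
        for (int j = 0; j < rels - 1; j++) size *= selectivity;  // one factor per join predicate
        return size;
    }

    public static void main(String[] args) {
        double[] relSize = {10_000, 5_000, 2_000, 1_000, 500};   // R1..R5 (made up)
        double selectivity = 0.001;
        int n = relSize.length, full = (1 << n) - 1;

        double[] best = new double[full + 1];
        int[] bestSplit = new int[full + 1];
        Arrays.fill(best, Double.POSITIVE_INFINITY);
        for (int i = 0; i < n; i++) best[1 << i] = 0;            // scanning one relation is "free" here

        for (int s = 1; s <= full; s++) {
            if (Integer.bitCount(s) < 2) continue;
            // try every way of splitting s into two non-empty halves
            for (int left = (s - 1) & s; left > 0; left = (left - 1) & s) {
                int right = s ^ left;
                double cost = best[left] + best[right] + outputSize(s, relSize, selectivity);
                if (cost < best[s]) { best[s] = cost; bestSplit[s] = left; }
            }
        }
        System.out.printf("best cost for all %d relations: %.0f (top-level split: %s)%n",
                n, best[full], Integer.toBinaryString(bestSplit[full]));
    }
}
```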
Left Deep Join Trees
- In **left-deep join trees**, the right-hand-side input for each join is a relation, not the result of an intermediate join
- Early systems only considered these types of plans
- Easier to pipeline
Heuristic Optimization
- Dynamic programming is expensive
- Use heuristics to reduce the number of choices
- Typically rule-based:
- Perform selection early (reduces the number of tuples)
- Perform projection early (reduces the number of attributes)
- Perform most restrictive selection and join operations before other similar operations.
- Some systems use only heuristics, others combine heuristics with partial cost-based optimization.
Steps in Typical Heuristic Optimization
1. Deconstruct conjunctive selections into a sequence of single selection operations (Equiv. rule 1.).
2. Move selection operations down the query tree for the earliest possible execution (Equiv. rules 2, 7a, 7b, 11).
3. Execute first those selection and join operations that will produce the smallest relations (Equiv. rule 6).
4. Replace Cartesian product operations that are followed by a selection condition by join operations (Equiv. rule 4a).
5. Deconstruct and move as far down the tree as possible lists of projection attributes, creating new projections where needed (Equiv. rules 3, 8a, 8b, 12).
6. Identify those subtrees whose operations can be pipelined, and execute them using pipelining.
Query Optimization
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Optimization Algorithms
- Statistics Estimation
Cost estimation
- Computing operator costs requires information like:
- Primary key?
- Sorted or not, which attribute
- So we can decide whether need to sort again
- How many tuples in the relation, how many blocks?
- RAID?? Which one?
- Read/write costs are quite different
- How many tuples match a predicate like "age > 40"?
- E.g. Need to know how many index pages need to be read
- Intermediate result sizes
- E.g. (R JOIN S) is input to another join operation – need to know if it fits in memory
- And so on…
Cost estimation
- Some information is static and is maintained in the metadata
- Primary key?
- Sorted or not, which attribute
- So we can decide whether need to sort again
- How many tuples in the relation, how many blocks?
- RAID?? Which one?
- Read/write costs are quite different
- Typically kept in some tables in the database
- “all_tab_columns” in Oracle
- Most systems have commands for updating them
Cost estimation
- However, others need to be estimated somehow
- How many tuples match a predicate like "age > 40"?
- E.g. Need to know how many index pages need to be read
- Intermediate result sizes
- The problem variously called:
- “intermediate result size estimation”
- "selectivity estimation"
- Very important to estimate reasonably well
- E.g. consider "select * from R where zipcode = 20742"
- We estimate that there are 10 matches, and choose to use a secondary index (remember: random I/Os)
- Turns out there are 10000 matches
- Using a secondary index very bad idea
- The optimizer also often chooses nested-loop joins when one relation is very small; underestimation can therefore result in a very bad plan
Selectivity Estimation
- Basic idea:
- Maintain some information about the tables
- More information → more accurate estimation
- More information → higher storage cost, higher update cost
- Make uniformity and randomness assumptions to fill in the gaps
- Example:
- For a relation "people", we keep:
- Total number of tuples = 100,000
- Distinct "zipcode" values that appear in it = 100
- Given a query: "zipcode = 20742"
- We estimate the number of matching tuples as: 100,000/100 = 1,000
- What if I wanted more accurate information?
- Keep histograms...
Histograms
- A condensed, approximate version of the “frequency distribution”
- Divide the range of the attribute value in “buckets”
- For each bucket, keep the total count
- Assume uniformity within a bucket
<table>
<thead>
<tr>
<th>Zipcode Range</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>20000-20199</td>
<td>50,000</td>
</tr>
<tr>
<td>20200-20399</td>
<td>40,000</td>
</tr>
<tr>
<td>20400-20599</td>
<td>30,000</td>
</tr>
<tr>
<td>20600-20799</td>
<td>20,000</td>
</tr>
<tr>
<td>20800-20999</td>
<td>10,000</td>
</tr>
</tbody>
</table>
Histograms
- Given a query: zipcode = "20742"
- Find the bucket (Number 3)
- Say the associated count = 45000
- Assume uniform distribution within the bucket: 45,000/200 = 225
- What if the ranges are typically not full?
- I.e., only a few of the zipcodes are actually in use?
- With each bucket, also keep the number of zipcodes that are valid
- Now the estimate would be: 45,000/80 = 562.50
- More Information → Better estimation
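A minimal version of this bookkeeping, assuming equi-width buckets that store both a tuple count and the number of distinct values actually in use (all bucket numbers below are made up, not the ones from the example above):

```java
// Equi-width histogram sketch for equality predicates, in the spirit of the
// zipcode example. Illustrative only.
public class EquiWidthHistogram {
    final int lo, bucketWidth;
    final long[] counts;          // tuples per bucket
    final int[] distinctInUse;    // distinct attribute values per bucket

    EquiWidthHistogram(int lo, int bucketWidth, long[] counts, int[] distinctInUse) {
        this.lo = lo; this.bucketWidth = bucketWidth;
        this.counts = counts; this.distinctInUse = distinctInUse;
    }

    // Estimated number of tuples with attribute == value,
    // assuming a uniform distribution within the bucket.
    double estimateEquals(int value) {
        int b = (value - lo) / bucketWidth;
        if (b < 0 || b >= counts.length) return 0;
        return (double) counts[b] / distinctInUse[b];
    }

    public static void main(String[] args) {
        EquiWidthHistogram h = new EquiWidthHistogram(
                20000, 200,
                new long[]{50_000, 40_000, 30_000, 20_000, 10_000},
                new int[]  {   150,    120,     80,     60,     40});
        System.out.printf("estimate for zipcode = 20742: %.1f tuples%n",
                h.estimateEquals(20742));
    }
}
```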
Histograms
- Very widely used in practice
- One-dimensional histograms kept on almost all columns of interest
- i.e., the columns that are commonly referenced in queries
- Sometimes: multi-dimensional histograms also make sense
- Less commonly used as of now
- Two common types of histograms:
- Equi-depth
- The attribute value range partitioned such that each bucket contains about the same number of tuples
- Equi-width
- The attribute value range partitioned in equal-sized buckets
- V-Optimal histograms
- No such restrictions
- More accurate, but harder to use or update
Next...
- Estimating sizes of the results of various operations
- Guiding principle:
- Use all the information available
- Make uniformity and randomness assumptions otherwise
- Many formulas, but not very complicated…
- In most cases, the first thing you think of
Basic statistics
- Basic information stored for all relations
- \( n_r \): number of tuples in a relation \( r \).
- \( b_r \): number of blocks containing tuples of \( r \).
- \( l_r \): size of a tuple of \( r \).
- \( f_r \): blocking factor of \( r \) — i.e., the number of tuples of \( r \) that fit into one block.
- \( V(A, r) \): number of distinct values that appear in \( r \) for attribute \( A \); same as the size of \( \prod_A(r) \).
- \( MAX(A, r) \): the maximum value of \( A \) that appears in \( r \)
- \( MIN(A, r) \): the minimum value of \( A \) that appears in \( r \)
- If tuples of \( r \) are stored together physically in a file, then:
\[
b_r = \left\lceil \frac{n_r}{f_r} \right\rceil
\]
Selection Size Estimation
- \( \sigma_{A=v}(r) \)
- \( n_r / V(A, r) \): number of records that will satisfy the selection
- Equality condition on a key attribute: size estimate = 1
- \( \sigma_{A \leq v}(r) \) (the case of \( \sigma_{A \geq v}(r) \) is symmetric)
- Let \( c \) denote the estimated number of tuples satisfying the condition.
- If \( \min(A, r) \) and \( \max(A, r) \) are available in catalog
- \( c = 0 \) if \( v < \min(A, r) \)
- \( c = n_r \frac{v - \min(A, r)}{\max(A, r) - \min(A, r)} \)
- If histograms available, can refine above estimate
- In absence of statistical information \( c \) is assumed to be \( n_r / 2 \).
Size Estimation of Complex Selections
- **Selectivity**$(\theta_i)$ = the probability that a tuple in $r$ satisfies $\theta_i$.
- If $s_i$ is the number of satisfying tuples in $r$, then selectivity $(\theta_i) = s_i/n_r$.
- **Conjunction:** $\sigma_{\theta_1 \land \theta_2 \land \ldots \land \theta_n}(r)$. Assuming independence, estimate of tuples in the result is:
$$n_r \times \frac{s_1 \times s_2 \times \cdots \times s_n}{n_r^n}$$
- **Disjunction:** $\sigma_{\theta_1 \lor \theta_2 \lor \ldots \lor \theta_n}(r)$. Estimated number of tuples:
$$n_r \times \left(1 - \left(1 - \frac{s_1}{n_r}\right) \times \left(1 - \frac{s_2}{n_r}\right) \times \cdots \times \left(1 - \frac{s_n}{n_r}\right)\right)$$
- **Negation:** $\sigma_{\neg \theta}(r)$. Estimated number of tuples: $n_r - \text{size}(\sigma_{\theta}(r))$
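These estimates translate directly into code. A small illustrative sketch (names and sample counts are made up):

```java
// Independence-based estimates from above: s[i] is the number of tuples
// satisfying theta_i, nr is the relation size.
public class SelectivityEstimates {
    static double conjunction(long nr, long[] s) {
        double est = nr;
        for (long si : s) est *= (double) si / nr;   // nr * (s1/nr) * (s2/nr) * ...
        return est;
    }

    static double disjunction(long nr, long[] s) {
        double noneMatch = 1;
        for (long si : s) noneMatch *= 1 - (double) si / nr;
        return nr * (1 - noneMatch);                 // nr * (1 - prod(1 - si/nr))
    }

    public static void main(String[] args) {
        long nr = 1_000_000;
        long[] s = {100_000, 10_000};                // illustrative counts
        System.out.printf("AND estimate: %.0f, OR estimate: %.0f%n",
                conjunction(nr, s), disjunction(nr, s));
    }
}
```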
Joins
- R JOIN S: R.a = S.a
- $|R| = 10,000$; $|S| = 5000$
- **CASE 1:** a is key for S
- Each tuple of R joins with exactly one tuple of S
- So: $|R \ JOIN S| = |R| = 10,000$
- Assumption: Referential integrity holds
- What if there is a selection on R or S
- Adjust accordingly
- Say: S.b = 100, with selectivity 0.1
- THEN: $|R \ JOIN S| = |R| \times 0.1 = 1000$
- **CASE 2:** a is key for R
- Similar
Joins
- R JOIN S: R.a = S.a
- |R| = 10,000; |S| = 5000
- CASE 3: a is not a key for either
- Reason with the distributions on a
- Say: the domain of a: V(A, R) = 100 (the number of distinct values a can take)
- THEN, assuming uniformity
- For each value of a
- We have 10,000/100 = 100 tuples of R with that value of a
- We have 5000/100 = 50 tuples of S with that value of a
- All of these will join with each other, and produce 100 *50 = 5000
- So total number of results in the join:
- 5000 * 100 = 500000
- We can improve the accuracy if we know the distributions on a better
- Say using a histogram
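A sketch of the same reasoning in code (illustrative; the sizes are the ones from the example above):

```java
// Join-size estimate for the "no key" case: assume the join attribute takes
// V distinct values, uniformly distributed, in both relations.
public class JoinSizeEstimate {
    static double estimate(long sizeR, long sizeS, long distinctValues) {
        double rPerValue = (double) sizeR / distinctValues;
        double sPerValue = (double) sizeS / distinctValues;
        // each group of matching tuples joins pairwise, one group per value
        return distinctValues * rPerValue * sPerValue;   // = |R| * |S| / V
    }

    public static void main(String[] args) {
        System.out.printf("estimate: %.0f tuples%n", estimate(10_000, 5_000, 100));
    }
}
```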
Other Operations
- Projection: \( \Pi_A(R) \)
- If no duplicate elimination, THEN |\( \Pi_A(R) \)| = |R|
- If distinct used (duplicate elimination performed): |\( \Pi_A(R) \)| = \( V(A, R) \)
- Set operations:
- Union ALL: |\( R \cup S \)| = |\( R \)| + |\( S \)|
- Intersect ALL: |\( R \cap S \)| = min{\( |R|, |S| \)}
- Except ALL: |\( R – S \)| = |\( R \)| (a good upper bound)
- Union, Intersection, Except (with duplicate elimination)
- Somewhat more complex reasoning based on the frequency distributions etc…
- And so on …
Query Optimization
- Introduction
- Example of a Simple Type of Query
- Transformation of Relational Expressions
- Optimization Algorithms
- Statistics Estimation
- Summary
Query Optimization
- Integral component of query processing
- Why?
- One of the most complex pieces of code in a database system
- Active area of research
- E.g. XML Query Optimization?
- What if you don’t know anything about the statistics
- Better statistics
- Etc …
Course Notes for CS310 – Run Time Analysis
Nurit Haspel (notes partially adapted from Prof. Carl Offner)
Reading Material for this Class
• K&T chapter 2, 5, S&W chapter 1.4 (runtime analysis).
• Remember what you learned about recursions in CS210.
Logarithms
This is a reminder. You should have seen logarithms in Math 140 if not before. Please refresh your memory, this is important. Logarithms are involved in many important runtime results: Sorting, binary search etc. We will see many examples today and later on in the course. Logarithms grow slowly, much more slowly than any polynomial but faster than a constant.
Definition: \( \log_B N = K \) if \( B^K = N \). \( B \) is the base of the log.
Examples:
• \( \log_2 8 = 3 \) because \( 2^3 = 8 \).
• \( \log_{10} 100 = 2 \) because \( 10^2 = 100 \).
• \( 2^{10} = 1024 \) (1K), so \( \log_2 1024 = 10 \).
• \( 2^{20} = 1M \), so \( \log 1M = 20 \).
• \( 2^{30} = 1G \) so \( \log 1G = 30 \).
Things to Remember:
• It requires about \( \log_N K \) digits to write the number \( K \) in base \( N \) (equivalently, to distinguish \( K \) different values).
• It requires approx. \( \log_2 N \) multiplications by 2 to get from 1 to \( N \).
• It requires approx. \( \log_2 N \) divisions by 2 to get from \( N \) to 1.
• \( \log(nm) = \log(n) + \log(m) \)
• \( \log(n/m) = \log(n) - \log(m) \)
• \( \log(n^k) = k \log(n) \)
• \( \log_a(b) = \frac{\log b}{\log a} \)
• If the base of log is not specified, assume it is base 2 (although for runtime analysis it doesn’t matter. Why?)
• \( \log \): base 2
• \( \ln \): base e
Computers work in binary, so in order to calculate how many numbers a certain amount of memory can represent we use \( \log_2 \). When it comes to runtime, the base is not important (see homework). So in runtime calculations I will just use \( \log(N) \) with no base. You may assume it is \( \log_2 \).
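As a quick sanity check of the "divisions by 2" rule above, the following throwaway program (not part of the required material) counts halvings and compares against \( \log_2 N \):

```java
// Count how many times we can halve N before reaching 1, and compare to log2(N).
public class LogDemo {
    public static void main(String[] args) {
        long n = 1_000_000;
        int halvings = 0;
        for (long v = n; v > 1; v /= 2) halvings++;
        System.out.println(halvings + " halvings vs log2(n) = " + (Math.log(n) / Math.log(2)));
    }
}
```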
Computers Work in Binary
- 16 bits of memory can represent $2^{16}$ different numbers = $2^{10+6} = 2^{10}\times 2^6 = 64K$.
- 32 bits of memory can represent $2^{32}$ different numbers = $2^{30+2} = 2^{30}\times 2^2 = 4G$. (many of today’s operating systems address space).
- 64 bits?? (most of today’s computers address space).
Runtime
Definitions
The basics of big-oh are hopefully covered in CS210. We will do a considerable amount of runtime analysis here.
When we develop an algorithm we want to know how many resources it requires. Let $T$ and $N$ be positive numbers. $N$ is the size of the problem (It is not always 100% clear what the “size of the problem” is. More on that later). $T$ measures a resource: Runtime, CPU cycles, disk space, memory etc.
Definition 1 Big-O (read – big Oh)
$T(N)$ is $O(F(N))$ if there are positive constants $c$ and $N_0$ such that $T(N) \leq c \times F(N)$ for all $N \geq N_0$. In other words, $T(N)$ is bounded by a multiple of $F(N)$ from above for every big enough $N$. See Figure 1 (a).
Definition 2 Big-$\Omega$ (read – big Omega)
$T(N)$ is $\Omega(F(N))$ if there are positive constants $c$ and $N_0$ such that $T(N) \geq c \times F(N)$ for all $N \geq N_0$. In other words, $T(N)$ is bounded by a multiple of $F(N)$ from below for every big enough $N$. See Figure 1 (b).
For a good estimate on the runtime it’s good to have both the $O$ and the $\Omega$ estimates (upper and lower bounds).
Definition 3 Big-$\Theta$ (read – big Theta)
$T(N)$ is $\Theta(F(N))$ if there are positive constants $c_0, c_1$ and $N_0$ such that $c_0 \times F(N) \leq T(N) \leq c_1 \times F(N)$ for all $N \geq N_0$.
In other words, $T(N)$ is bounded both from above and from below by a multiple of $F(N)$ for every big enough $N$. It does NOT mean that they are equal, but that they are in some way equivalent.
Example: Show that $2N + 4 = O(N)$. To solve this, you have to actually give two constants, $c$ and $N_0$ such that $2N + 4 \leq c \times N$ for every $N \geq N_0$. Obviously, there are many possible solutions. For example, $c = 4$ and $N_0 = 2$ are good constants since $2N + 4 \leq 4N$ for every $N \geq 2$. Similarly, $c = 10$ and $N_0 = 1$ can also be used. Notice that the bound does not have to be tight, as long as it holds for any large enough $N$.
Order of growth can be important. For example – sorting algorithms can perform quadratically or as $n \times \log(n)$. Very big difference for large inputs (do the math!). We care less about constants, so $100N = O(N)$. $100N + 200 = O(N)$. The constant can be important when choosing between two similar run-time algorithms. For example – quicksort vs. mergesort.
Important Rules
**Computer Programs:** We assume that all the atomic operations (basic arithmetic operations, if-statements, assignments, comparisons etc.) take $O(1)$ (constant) time. We don’t care exactly how much time they really take, and we make the simplifying (and generally incorrect) assumption they all take the same amount of time. The reason the specific times are not important is that they usually depend on the machine specs, but more importantly – they do not depend on the input size (which is the actual meaning of $O(1)$!). We just say they all take at most $C$ time, where $C$ is a large enough constant for this assumption to be true.
**Polynomials:** When the runtime is estimated as a polynomial we care about the leading term only. Thus $3n^3 + n^2 + 2n + 17 = O(n^3)$ because eventually the leading cubic term is bigger than the rest.
**Common Functions You Should Remember:** Polynomials always grow faster than logarithms. Exponents always grow faster than polynomials. See Figure 2 and the following table:
<table>
<thead>
<tr>
<th>Function</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>$c$</td>
<td>Constant</td>
</tr>
<tr>
<td>$\log N$</td>
<td>Logarithmic</td>
</tr>
<tr>
<td>$\log^2 N$</td>
<td>Log-squared</td>
</tr>
<tr>
<td>$N$</td>
<td>Linear</td>
</tr>
<tr>
<td>$N \log N$</td>
<td>$N \log N$</td>
</tr>
<tr>
<td>$N^2$</td>
<td>Quadratic</td>
</tr>
<tr>
<td>$N^3$</td>
<td>Cubic</td>
</tr>
<tr>
<td>$2^N$</td>
<td>Exponential</td>
</tr>
</tbody>
</table>
Table 1 is a useful example that shows actual runtimes as a function of $n$ and $f(n)$. Remember that the absolute runtimes are not as important here as the concept of runtime growth.
Figure 2: Growth of several important functions
Table 1: Runtime vs. input size for various functions.
<table>
<thead>
<tr><th>$n$</th><th>$\lg n$</th><th>$n$</th><th>$n \lg n$</th><th>$n^2$</th><th>$2^n$</th><th>$n!$</th></tr>
</thead>
<tbody>
<tr><td>10</td><td>0.003μs</td><td>0.01μs</td><td>0.033μs</td><td>0.1μs</td><td>1μs</td><td>3.63 ms</td></tr>
<tr><td>20</td><td>0.004μs</td><td>0.02μs</td><td>0.086μs</td><td>0.4μs</td><td>1 ms</td><td>77.1 years</td></tr>
<tr><td>30</td><td>0.005μs</td><td>0.03μs</td><td>0.147μs</td><td>0.9μs</td><td>1 sec</td><td>8.4 x 10^{15} yrs</td></tr>
<tr><td>40</td><td>0.005μs</td><td>0.04μs</td><td>0.213μs</td><td>1.6μs</td><td>18.3 min</td><td></td></tr>
<tr><td>50</td><td>0.006μs</td><td>0.05μs</td><td>0.282μs</td><td>2.5μs</td><td>13 days</td><td></td></tr>
<tr><td>100</td><td>0.007μs</td><td>0.1μs</td><td>0.64μs</td><td>10μs</td><td>4 x 10^{13} yrs</td><td></td></tr>
<tr><td>$10^3$</td><td>0.010μs</td><td>1μs</td><td>9.96μs</td><td>1 ms</td><td></td><td></td></tr>
<tr><td>$10^4$</td><td>0.013μs</td><td>10μs</td><td>130μs</td><td>100 ms</td><td></td><td></td></tr>
<tr><td>$10^5$</td><td>0.017μs</td><td>100μs</td><td>1.67 ms</td><td>10 sec</td><td></td><td></td></tr>
<tr><td>$10^6$</td><td>0.020μs</td><td>1 ms</td><td>19.93 ms</td><td>16.7 min</td><td></td><td></td></tr>
<tr><td>$10^7$</td><td>0.023μs</td><td>0.01 sec</td><td>0.23 sec</td><td>1.16 days</td><td></td><td></td></tr>
<tr><td>$10^8$</td><td>0.027μs</td><td>0.1 sec</td><td>2.66 sec</td><td>115.7 days</td><td></td><td></td></tr>
<tr><td>$10^9$</td><td>0.030μs</td><td>1 sec</td><td>29.9 sec</td><td>31.7 yrs</td><td></td><td></td></tr>
</tbody>
</table>
Adding and Multiplying Functions
- **Rule for sums** (e.g. - two consecutive blocks of code): If $T_1(N) = O(F(N))$ and $T_2(N) = O(G(N))$ then $T_1 + T_2 = O(\max(F(N), G(N)))$. The biggest contribution dominates the sum.
- **Rule for products** (e.g. - an inner loop run by an outer loop): If $T_1(N) = O(F(N))$ and $T_2(N) = O(G(N))$ then $T_1 \times T_2 = O(F(N) \times G(N))$.
**Example:** $(n^2 + 2n + 17) \times (2n^2 + n + 17) = O(n^2 \times n^2) = O(n^4)$. (Remember to ignore all but the leading term). If we sum over a large number of terms, we multiply the number of terms by the estimated size of one term.
**Example:** Sum of $i$ from 1 to $N$. Average size of an element: $\frac{N}{2}$. There are $N$ terms so the sum is $O(N^2)$. Exact value: $\frac{N \times (N+1)}{2}$.
**Loops**
The runtime of a loop is the runtime of the statements in the loop * number of iterations.
**Example:** bubble sort
```c
/* sort array of ints in A[0] to A[n-1] */
void bubblesort(int A[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)        /* n-1 passes of the outer loop */
        for (j = n - 1; j > i; j--)    /* n-1-i passes of the inner loop */
            if (A[j-1] > A[j]) {       /* swap adjacent elements that are out of order */
                temp = A[j-1];
                A[j-1] = A[j];
                A[j] = temp;
            }
}
```
To calculate the runtime work from inside out:
- Calculate the body of inner loop (constant – an if statement and three assignments).
- Estimate the number of passes of the inner loop: n-1-i passes.
- Estimate the number of passes of the outer loop: $n-1$ passes. Across those passes the inner loop runs $n-1, n-2, \ldots, 1$ times.
- Overall $1 + 2 + 3 + \ldots + (n-1)$ passes of constant operations: $\frac{n \times (n-1)}{2} = O(n^2)$ – see above! I told you this sum will show up a lot.
This is not the fastest sorting algorithm but it’s simple and works in-place. Good for small size input. We’ll talk a bit about sorting later on (but only briefly. It was CS210 material).
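If you want to convince yourself of the $\frac{n(n-1)}{2}$ count empirically, the following throwaway Java program (illustrative only, not part of the notes) counts the inner-loop comparisons and compares them to the closed form:

```java
import java.util.Random;

// Counting the inner-loop comparisons of bubble sort and comparing against n(n-1)/2.
public class BubbleCount {
    public static void main(String[] args) {
        int n = 1000;
        int[] a = new Random(42).ints(n, 0, 1_000_000).toArray();
        long comparisons = 0;
        for (int i = 0; i < n - 1; i++)
            for (int j = n - 1; j > i; j--) {
                comparisons++;
                if (a[j - 1] > a[j]) { int t = a[j - 1]; a[j - 1] = a[j]; a[j] = t; }
            }
        System.out.println("comparisons = " + comparisons
                + ", n(n-1)/2 = " + (long) n * (n - 1) / 2);
    }
}
```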
**Recursive Functions**
That’s a slightly trickier one, but not much more so. In recursive functions we don’t have all the work done in just one call, as is the case in iterative functions. Therefore, we can’t just count the number of operations as we did in the example before. There are some nice tricks that can help us figure it out.
Let us define $T(n)$ as a function that measures the runtime. $T(n)$ may not be given explicitly in closed form, especially in recursive functions, so we don’t know what it is yet. It can be polynomial,
logarithmic, exponential etc. We have to find a way to derive the closed form from the recursive function.
**Example:** factorial
```java
int factorial (int n)
{
if(n<=1) return 1;
return n*factorial(n-1);
}
```
Now all we have to do is translate the Java program above into a mathematical formula which expresses its runtime. Let us analyze, line by line, what the function does. It’s quite easy, since recursive functions are usually very short.
- The if statement takes $O(1)$.
- What about the rest? The statement return n*factorial(n-1) performs one multiplication followed by a recursive call. In other words, it calls the same function but on an input of size $n-1$.
- Since we defined $T(n)$ as the function that expresses the runtime of factorial(n), then $T(n-1)$ expresses the runtime of factorial(n-1).
Rearranging a bit, we can divide the operations into two parts:
1. The operations done explicitly in the function itself – the if statement and the multiplication. They all take a constant amount of time. Since the exact number is not important, we can bundle them all under a big enough constant $C$.
2. The recursive call – we don’t have an explicit runtime for this part yet, but as mentioned above, we can express it as $T(n-1)$.
Putting it all together, the runtime of factorial can be expressed as $T(n) = T(n-1) + C$. It is not the final answer yet, but we’re getting there. We can apply the same logic to analyzing the runtime of factorial(n-1) and so: $T(n-1) = c + T(n-2) \Rightarrow T(n) = 2c + T(n-2)$.
After n such equations we reach $T(1) = k$ (just the if-statement. Notice that $k$ is not the same as $C$. Minor detail, but can be important sometimes). Eventually, $T(n) = (n-1) \cdot c + k = O(n)$. The iterative function performs the same. This is not always the case.
This is an example of a linear function. Let’s stop for a second and think: What does “linear runtime” really mean? A linear function (program, algorithm) requires resources that scale linearly with the input size. Say a linear algorithm runs for 5 seconds on an input of size 10. How much time will it (approximately) run on an input of size 20? To answer the question, let’s go back to the definition of Big-O: $f(n) = O(n) \Rightarrow f(n) \approx c \cdot n$ for some $c$. This means $f(2n) \approx c \cdot 2n$. In other words, if the function is linear, doubling the input size roughly doubles the runtime. The exact time depends on the constant, the machine specs etc.
If a quadratic algorithm $f(n) = O(n^2)$ runs for 5 seconds on an input of size 10. How much time will it (approximately) run on an input of size 20? Let us do the same trick as before: $f(n) = O(n^2) \Rightarrow f(n) = c \cdot n^2$. This means $f(2n) \approx c \cdot (2n)^2 = 4cn^2$. Doubling the input size increases the runtime 4-fold vs. 2-fold for a linear function! This goes to show why runtime is important. It may not look much for small input, but think of a function whose input is in the millions or more.
A Problematic Example: The Fibonacci series. The well known Fibonacci series, where each number is the sum of the previous two numbers: 0 1 1 2 3 5 8 13 ... The formula is: \( f(n) = f(n-1) + f(n-2) \), where the boundary conditions are \( f(0) = 0, f(1) = 1 \). This is a recursive definition, and this is the way we are used to thinking about it. So, it’s only natural to compute it recursively. The following recursive program calculates the \( n^{th} \) term in the Fibonacci series (assume \( n \) is non-negative and the first term is the zero-th):
```c
int fib(int n)
{
if(n == 0) return 0;
if(n == 1) return 1;
return fib(n-2)+fib(n-1);
}
```
We can visualize it as follows:
The problem is the double recursion which runs on the same input so we do a lot of redundant work. The call tree looks like a big binary tree (see Figure 3). Double recursion is not bad, as long as we split the work too! For example: Merge sort sorts recursively two halves of an array and merge. When we call merge sort recursively we do it twice, but on different input! The work is split between recursive calls in a smart way that does not involve any redundant calls.
We are not going to do an exact runtime analysis at this point, but the runtime is exponential. More accurately – \( O(1.618^n) \). Why this weird number? More on that later in the course if time allows. In the case of Fibonacci we can easily make it more efficient by going against our instincts and writing an iterative function:
```c
int fib2(int n)
{
int f1 = 0;
int f2 = 1;
int fi;
if(n == 0) return 0;
if(n == 1) return 1;
for(int i = 2 ; i <= n ; i++ )
{
fi = f1 + f2;
f1 = f2;
f2 = fi;
}
return fi;
}
```
What is the runtime now? It is an iterative function, therefore we don’t need a recurrence formula, just counting operations.
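To see the blow-up of the naive recursion concretely, here is a small illustrative call counter (not part of the notes); the call count itself grows like the Fibonacci numbers:

```java
// Count how many calls the naive recursive fib makes for increasing n.
public class FibCallCount {
    static long calls = 0;
    static long fib(int n) {
        calls++;
        if (n <= 1) return n;
        return fib(n - 2) + fib(n - 1);
    }
    public static void main(String[] args) {
        for (int n = 10; n <= 40; n += 10) {
            calls = 0;
            long value = fib(n);
            System.out.printf("fib(%d) = %d, calls = %d%n", n, value, calls);
        }
    }
}
```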
**Binary Search**
**Definition:** Search for an element in a sorted array. Return array index where the element is found or a negative value if not found. Start in the middle of the array. If the element is smaller than that, search in the smaller (left) half. Otherwise – search in the larger (right) half. Where I come from it’s sometimes called ”lion in the desert” algorithm (due to some obscure CS/mathematicians’ joke):
Q: How do you catch a lion in the desert?
A: Cut the desert into two equal halves with a lion-proof fence. Pick the half which has the lion in it and recursively catch the lion in that half of the desert.
Illustration:
<table>
<thead>
<tr>
<th>Key</th>
<th>List</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>1 2 3 4 5 6 7 8 9</td>
</tr>
<tr>
<td>8>4</td>
<td>1 2 3 4 5 6 7 8 9</td>
</tr>
<tr>
<td>8>6</td>
<td>1 2 3 4 5 6 7 8 9</td>
</tr>
<tr>
<td>8=8</td>
<td>1 2 3 4 5 6 7 8 9</td>
</tr>
</tbody>
</table>
**Binary Search Implementation**
It is implemented in Java as part of the Collections API.
```java
static <T> int binarySearch(T[] a, T key, Comparator<? super T> c)
static int binarySearch(Object[] a, Object key)
```
The version without the Comparator uses “natural order” of the array elements, i.e., calls compareTo of the element type to compare elements. Thus the elements need to be Comparable – the
element type implements Comparable<ElementType> in the generics setup. Or the old Comparable works here too.
```java
/**
* Performs the standard binary search
* using two comparisons per level.
* This is a driver that calls the recursive method.
* @return index where item is found or NOT_FOUND if not found.
*/
public static <AnyType extends Comparable<? super AnyType>> int binarySearch( AnyType[] a, AnyType x )
{
return binarySearch( a, x, 0, a.length -1 );
}
/**
* Hidden recursive routine.
*/
private static <AnyType extends Comparable<? super AnyType>>
int binarySearch( AnyType[] a, AnyType x, int low, int high )
{
if( low > high )
return NOT_FOUND;
int mid = ( low + high ) / 2;
if( a[ mid ].compareTo( x ) < 0 )
return binarySearch( a, x, mid + 1, high );
else if( a[ mid ].compareTo( x ) > 0 )
return binarySearch( a, x, low, mid - 1 );
else
return mid;
}
```
The Comparable <? super T > specifies that T ISA Comparable < Y >, where Y is T or any superclass of it. This allows the use of a compareTo implemented at the top of an inheritance hierarchy (i.e., in the base class) to compare elements of an array of subclass elements. For example, we commonly use a unique id for equals, hashCode and compareTo across a hierarchy, and only want to implement it once in the base class.
You should be able to solve it by now... The answer is: \( T(N) = O(\log N) \). I expect you to be able to figure it out yourselves, though.
**Recurrence Formula:**
- \( T(n) = C \) if \( n \) is 1
- \( T(n) = T(\frac{n}{2}) + c \) otherwise
Notice that \( c \) and \( C \) are not the same constant!
**Mergesort**
You probably discussed MergeSort in CS210. It is a divide-and-conquer method to sort an array.
1. If the array has at most one item – return.
2. Split it in half, call merge sort recursively on each half.
3. Merge the two sorted halves.
The Mergesort Algorithm
```java
/**
* Mergesort algorithm.
* @param a an array of Comparable items.
*/
public static <AnyType extends Comparable<? super AnyType>> void mergeSort( AnyType [] a )
{
AnyType [] tmpArray = (AnyType[]) new Comparable[ a.length ];
mergeSort( a, tmpArray, 0, a.length - 1 );
}
/**
* Internal method that makes recursive calls.
* @param a an array of Comparable items.
* @param tmpArray an array to place the merged result.
* @param left the left-most index of the subarray.
* @param right the right-most index of the subarray.
*/
private static <AnyType extends Comparable<? super AnyType>> void mergeSort( AnyType [] a, AnyType [] tmpArray,
int left, int right )
{
if( left < right )
{
int center = ( left + right ) / 2;
mergeSort( a, tmpArray, left, center );
mergeSort( a, tmpArray, center + 1, right );
merge( a, tmpArray, left, center + 1, right );
}
}
/**
* Internal method that merges two sorted halves of a subarray.
* @param a an array of Comparable items.
* @param tmpArray an array to place the merged result.
* @param leftPos the left-most index of the subarray.
* @param rightPos the index of the start of the second half.
* @param rightEnd the right-most index of the subarray.
*/
private static <AnyType extends Comparable<? super AnyType>> void merge( AnyType [] a, AnyType [] tmpArray,
int leftPos, int rightPos, int rightEnd )
{
    int leftEnd = rightPos - 1;
    int tmpPos = leftPos;
    int numElements = rightEnd - leftPos + 1;

    // Main loop
    while( leftPos <= leftEnd && rightPos <= rightEnd )
        if( a[ leftPos ].compareTo( a[ rightPos ] ) <= 0 )
            tmpArray[ tmpPos++ ] = a[ leftPos++ ];
        else
            tmpArray[ tmpPos++ ] = a[ rightPos++ ];

    while( leftPos <= leftEnd )    // Copy rest of first half
        tmpArray[ tmpPos++ ] = a[ leftPos++ ];

    while( rightPos <= rightEnd )  // Copy rest of right half
        tmpArray[ tmpPos++ ] = a[ rightPos++ ];

    // Copy tmpArray back
    for( int i = 0; i < numElements; i++, rightEnd-- )
        a[ rightEnd ] = tmpArray[ rightEnd ];
}
```
Linear-time Merging of Sorted Arrays: We get two sorted halves and merge them. See Figure 4:
Recurrence Formula:
\[ T(n) = \begin{cases}
C & \text{if } n \text{ is 1} \\
2 \cdot T\left(\frac{n}{2}\right) + cn & \text{otherwise}
\end{cases} \]
Again, \( c \) and \( C \) are not the same constant!
Remember how we analyzed recursive functions in the beginning of the semester. We first separate the function into the explicit (non-recursive) part and the recursive part. We then define a function \( T(N) \) which describes the runtime of the function on an input of size \( N \). We calculate the runtime as a function of the explicit and recursive parts, and get an equation of \( T(N) \), which we then try to solve. In this case, the top-level code does the following:
1. Boundary condition: \( O(1) \)
2. Two recursive calls to half the input, \( 2T\left(\frac{N}{2}\right) \)
3. Merge. Looking at the merge code, we loop over the two sorted halves, advancing two pointers and copying one value to the merged array each time. Overall we perform \( O(N) \) operations.
So, the runtime can be expressed as:
\[
T(N) = 2 \cdot T(N/2) + O(N)
\]
\[
= 2 \cdot (2 \cdot T(N/4) + O(N/2)) + O(N)
\]
\[
= 4 \cdot T(N/4) + O(N) + O(N)
\]
\[
= 4 \cdot (2 \cdot T(N/8) + O(N/4)) + O(N) + O(N)
\]
\[
= 8 \cdot T(N/8) + O(N) + O(N) + O(N)
\]
\[
= \ldots = 2^{\log N} \cdot T(1) + O(N) + O(N) + \ldots + O(N)
\]
\[
= N \cdot O(1) + O(N) + O(N) + \ldots + O(N).
\]
The recurrence is expanded $\log N$ times, and each expansion produces an $O(N)$ term; $\log N$ terms of $O(N)$ give $O(N \log N)$. This kind of formula is very common in divide-and-conquer algorithms.
This is another Identity that comes up frequently in algorithmic analysis. One basic way to solve it is to form a recursion tree. We saw an illustration of another example above (Figure 3). The recursion tree for the MergeSort formula is shown below in Figure 5. If $N = 2^p$ then there are $p$ rows with $cN$ on the right, and one last row with $dN$ on the right. Since $p = \log n$, this means that the total cost is $cN \log N + dN$. In other words, this is what we call an $O(N \log N)$ algorithm.
**Best, Average, Worst Case**
When analyzing the runtime of an algorithm, we are usually interested in the following:
**the worst-case time** This is an upper-bound to the run time of an algorithm. The worst-case time is useful because it gives a guarantee: you know that no matter what the input is, you will certainly do at least that well.
**the best-case time** You usually don’t get the best-case time in practice. But it does tell something – it tells you that using this algorithm, you will never do better than the best-case time.
Figure 5: Recurrence tree for MergeSort.
**the average-case time** Average the times that the algorithm takes over all possible inputs of length \( n \). To do this, we need some assumption of the statistical distribution of the inputs. (For instance, if we know that for our particular application certain inputs will never occur, we can ignore them in figuring out the average.)
Average-case analysis is the most difficult to figure out in general, but it is also the most useful. In many cases – binary search, mergesort, finding maximum/minimum etc., the average runtime and the worst-case runtime are the same. In other cases (quicksort is probably the best known example to you) the average is better than the worst-case. It basically tells us, indirectly, that the worst case is quite unlikely to occur.
1 Introduction
We continue our analysis of integer data structures, focusing this lecture on fusion trees. This structure employs some neat bit tricks and word-level parallelism. In particular, we discuss the following techniques necessary to understand the workings of a fusion tree: sketching, which allows certain \( w \)-bit words to be compressed to less than \( w \) bits, parallel comparison, where multiple words can be compared for the cost of a single word, and finally the computation of the most significant set bit of a word in constant-time.
2 Overview of Fusion Trees
We first describe the key results of fusion trees, as well as the model we will be assuming for the majority of the exposition. As with the van Emde Boas trees described in the previous lecture, we will be working under the word RAM model (transdichotomous RAM with C-style operations) and we are looking to store \( n \ w \)-bit integers statically. Under these assumptions, the fusion tree covered here and detailed by Fredman & Willard in their original papers ([1], [2]) performs predecessor/successor operations in \( O(\log_w n) \) time, and require \( O(n) \) space, cf. the van Emde Boas trees which run in \( O(\log w) \) time and require \( O(n) \) space. Other models and variants of interest include:
- AC\(^0\) RAM version [3]: the model is restricted to operations with constant-depth (but unbounded fan-in and fan-out) circuits. In particular, multiplication is not permitted, since it requires a circuit of logarithmic depth in the number of bits (this model was more relevant in the past when multiplication was very costly relative to other operations like additions; this is no longer the case due to optimizations such as pipelining);
- Dynamic version via exponential trees [4]: this version achieves \( O(\log_w n + \lg \lg n) \) deterministic update time, i.e. a \( \lg \lg n \) overhead over the static version;
- Dynamic version via hashing [5]: this version achieves \( O(\log_w n) \) expected update time. This is based on performing sketching ‘more like’ hashing. OPEN: Can this bound be achieved with high probability?
3 The General Idea
The underlying structure of a fusion tree is a B-tree, with branching factor \( w^{1/5} \); actually, any small constant power suffices, since the height of the tree will be \( \Theta(\log_w n) \). The main issue in obtaining this bound arises when searching a node during a predecessor search: we would like to achieve \( O(1) \) time for this operation, which appears impossible since it seems to require at least reading in \( O(w^{1/5} \cdot w) = O(w^{6/5}) \) bits. However, this (and predecessor/successor) can actually be done in the desired time bound with \( k^{O(1)} \) preprocessing. The main idea is to distinguish the set of keys in a node using fewer than \( w \) bits, which is the basis behind the next section. The rest of this lecture is all about how to achieve \( O(1) \) time for a predecessor/successor search in a single fusion tree node.
4 Sketching
Since for each node in a fusion tree there are at most \( k = w^{1/5} \) keys, it is feasible to represent these keys by only \( w^{1/5} \) bits and still be comparable, given the lower bound of \( \lceil \log w^{1/5} \rceil \) bits. Indeed, this can be accomplished as follows. Let the keys be \( x_0 \leq x_1 \leq \cdots \leq x_{k-1} \); each key \( x_i \) can be represented as a path in a binary tree of depth \( w \), where the left branch at the \( i \)-th node from the root is taken if the \( i \)-th most significant bit of \( x_i \) is 0, and otherwise the right branch is taken. Then if all \( k \) keys are overlaid on the same tree, then it is evident that the resulting paths branch out at at most \( k - 1 \) nodes (this is easily formalized by induction). In essence, at most \( k - 1 = w^{1/5} - 1 \) bits matter in ordering the \( x_i \). See figure 1 for an example.
Figure 1: An example of the sketch function with 4 keys. The levels corresponding to the 3 bits sketched are circled.
In particular, let the bits of the corresponding nodes be in positions \( b_0, b_1, \ldots, b_{r-1} \) (where \( r \leq w^{1/5} \)).
Then the perfect sketch of $x$ (denoted by $\text{sketch}(x)$) is the $r$-bit string where the $i$-th bit is the $b_i$-th bit of $x$. Clearly, the sketch operation preserves order among the $x_i$, since each sketch keeps the bits that distinguish all the $x_i$ in the right order. Sketching also allows all keys to be read in constant time, since each sketch has $O(w^{1/5})$ bits so the total size of all sketches is $O(kw^{1/5}) = O(w^{2/5}) = o(w)$ bits. Under some models, such as the AC$^0$ model, the perfect sketch operation is a single operation [3]. Later in this lecture we will see how to perform a sufficient approximation using multiplication and standard C-style operations.
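To make the definition concrete, here is a naive, loop-based version of sketch(x) (illustrative only; the bit positions are made up). The whole point of the machinery later in this lecture is to approximate this effect in \( O(1) \) word operations rather than with a loop:

```java
// Naive sketch: pull out the bits in positions b_0 < b_1 < ... < b_{r-1} of x
// and pack them into an r-bit word, preserving their order.
public class NaiveSketch {
    static long sketch(long x, int[] bitPositions) {   // positions sorted ascending
        long s = 0;
        for (int i = 0; i < bitPositions.length; i++)
            if (((x >>> bitPositions[i]) & 1L) != 0)
                s |= 1L << i;                          // i-th sketch bit = b_i-th bit of x
        return s;
    }

    public static void main(String[] args) {
        int[] b = {1, 4, 9};                           // the "important" bit positions (made up)
        System.out.println(Long.toBinaryString(sketch(0b1000010010L, b)));
    }
}
```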
However, this raises another problem. The search for a given query $q$ may be such that $q$ is not equal to any of the $x_i$ (since there are no restrictions on the values of the arguments to predecessor/successor). Hence, the path of $q$ in the binary tree may diverge from the other paths of $x_i$ at a node which does not correspond to one of the bits $b_0, \ldots, b_{r-1}$; in that case, the location of $\text{sketch}(q)$ among the $\text{sketch}(x_i)$ will not necessarily be equivalent to the location of $q$ among the $x_i$. In fact, the location of $\text{sketch}(q)$ can be completely different from the location of $q$. This can be resolved by the technique of desketchifying as discussed next.
## 5 Desketchifying
By modifying the search for $q$, we can still obtain the predecessor or successor of $q$ without any additional (asymptotic) runtime overhead. Let $x_i$ and $x_{i+1}$ be the sketch neighbours of $q$, i.e. $\text{sketch}(x_i) \leq \text{sketch}(q) \leq \text{sketch}(x_{i+1})$. Then we determine the longest common prefix (equivalently, the lowest common ancestor in the fusion tree) of the actual elements between either $q$ and $x_i$, or $q$ and $x_{i+1}$. Suppose this prefix $p$ has length $y$; then the node $n$ corresponding to this prefix is the highest such that the path for $q$ diverges from the path of every key in the fusion node. In particular, there are no keys in the child subtree of $n$ which contains the path of $q$. Since the other child subtree of $n$ contains a key of the fusion node (either $x_i$ or $x_{i+1}$) it must contain the predecessor (or successor) of $q$. This can be determined as follows:
- If the $(y+1)$-st bit of $q$ is 1, then $q$’s predecessor belongs in the $p0$ subtree, so we search for the predecessor of $e = p011 \cdots 1$, i.e. the right-most leaf of the $p0$ subtree.
- If the $(y+1)$-st bit of $q$ is 0, then $q$’s successor belongs in the $p1$ subtree, so we search for the successor of $e = p100 \cdots 0$, i.e. the left-most leaf of the $p1$ subtree.
In both cases, the search will successfully find the requisite key: the sketch bits in the prefix of $e$ match those of the target, and the sketch bits in the suffix of $e$ (following the first $y$ bits) are all 1s (the largest possible, when searching for the predecessor) or all 0s (the smallest possible, when searching for the successor). Once one of the predecessor or successor is found, the other can be determined simply by checking the appropriate adjacent sketch word in the fusion node. See figure 2 for an example.
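As a small illustration of the two cases, the following Python sketch (hypothetical helper names; it assumes $q$ differs from the chosen neighbour) computes the divergence point with an XOR and builds the substitute query $e$; computing the most significant set bit in $O(1)$ is deferred to Section 8, so Python's bit_length stands in for it here.

```python
W = 64  # assumed word length for this illustration

def msb_index(x):
    """Index (from the LSB) of the most significant set bit.  Section 8 gives
    an O(1) word-RAM method; Python's bit_length stands in for it here."""
    return x.bit_length() - 1

def substitute_query(q, neighbour):
    """Given q and one of its sketch neighbours (with q != neighbour), build
    the substitute query e from the two cases above."""
    m = msb_index(q ^ neighbour)         # bit where q first diverges; prefix length y = W-1-m
    prefix = (q >> (m + 1)) << (m + 1)   # the common prefix p, padded with zeros
    if (q >> m) & 1:                     # the (y+1)-st bit of q is 1
        return prefix | ((1 << m) - 1)   # e = p0 11...1, the right-most leaf of the p0 subtree
    else:                                # the (y+1)-st bit of q is 0
        return prefix | (1 << m)         # e = p1 00...0, the left-most leaf of the p1 subtree
```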
There are still several issues remaining before our fusion tree will run with the desired time bounds under the word RAM model. First, we demonstrate how to perform an approximation of the perfect sketch in reasonable time. Then we show how to achieve constant runtime of two particular subroutines: finding the location of a $w^{1/5}$-bit integer among $w^{1/5}$ such integers, encoded as a $w^{2/5}$-bit word; and determining the most significant set bit of a $w$-bit word (this can be used to determine the length of the longest common prefix of two strings by XORing them together). This will conclude our description of the fusion tree’s operation.
Figure 2: An example when the search query is not among the keys of the fusion node. The paths
to the keys are bolded, whereas the path to the query \( q \) is dashed; the levels corresponding to the
bits sketched are circled as before. Here, the sketch neighbours of \( q \) are \( x_0 \) and \( x_1 \), but \( x_0 \) is neither
a predecessor nor successor of \( q \).
6 Approximating Sketch
Although a perfect sketch is computable in \( O(1) \) time as an \( AC^0 \) operation, we want a way to
compute an approximate sketch on a Word RAM using just multiplication and other standard
operations. The hard part about computing a sketch is getting all the bits we care about to be consecutive, so that they fit in a small word. So this approximate sketch will instead have all the important bits spread out in some predictable pattern (independent of the word \( x \) we are sketching) spanning \( O(w^{4/5}) \) bits, with some additional garbage bits between them. We will then be able to apply masks to get just the bits we care about.
Let \( x' \) be \( x \) masked so it just has the important bits. So
\[
x' = x \mathbin{\text{AND}} \sum_{i=0}^{r-1} 2^{b_i}
\]
where \( b_i \) represents the \( i^{th} \) important bit. Now multiply \( x' \) by some mask \( m \) (that will have set bits in positions \( m_j \)) to get
\[
x' \cdot m = \left( \sum_{i=0}^{r-1} x_{b_i} 2^{b_i} \right) \left( \sum_{j=0}^{r-1} 2^{m_j} \right) = \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} x_{b_i} 2^{b_i + m_j}
\]
Claim 1. For any important bits \( b_0, b_1, \ldots, b_{r-1} \), we can choose \( m_0, m_1, \ldots, m_{r-1} \) such that
1. \( b_i + m_j \) are distinct for all pairs \( i, j \). This means that no two of the shifted bits collide when we add up all the terms.
2. $b_0 + m_0 < b_1 + m_1 < \ldots < b_{r-1} + m_{r-1}$. This means that the order of our important bits in $x$ is preserved in $x' \cdot m$.
3. $(b_{r-1} + m_{r-1}) - (b_0 + m_0) = O(w^{4/5})$. Thus the span of the bits will be small.
Proof. We'll choose some $m_0', m_1', \ldots, m_{r-1}' < r^3$ such that the sums $b_j + m_i'$ and $b_k + m_t'$ with $i \neq t$ are all distinct modulo $r^3$ (sums that share the same $m'$ are automatically distinct, since the $b_i$ are distinct). We do this by induction. Suppose we have picked $m_0', m_1', \ldots, m_{t-1}'$. Then $m_t'$ must differ from $m_i' + b_j - b_k \pmod{r^3}$ for all $i < t$ and all $j, k$. There are $t$ choices for $i$ (any of the previous indices) and $r$ choices each for $j$ and $k$. Thus there are at most $tr^2 < r^3$ values for $m_t'$ to avoid, and we have $r^3$ choices, so we can always choose $m_t'$ to avoid collisions. This establishes property (1).
To satisfy (2) and (3) we intuitively want to spread out $m_i + b_i$ by intervals of $r^3$. To do this we let
$$m_i = m_i' + (w - b_i + ir^3 \text{ rounded down to nearest multiple of } r^3) \equiv m_i' \pmod{r^3}$$
We claim without proof that this spacing makes
$$m_0 + b_0 < \ldots < m_{r-1} + b_{r-1}$$
and also since $m_0 + b_0 \approx w$ and $m_{r-1} + b_{r-1} \approx w + r^4$ we will have $(b_{r-1} + m_{r-1}) - (b_0 + m_0) \approx r^4 = O(w^{4/5})$. So properties (2) and (3) will be satisfied.
\[\square\]
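The greedy choice of the $m_i'$ from the proof, and the final multiplier, can be written out directly. The following Python sketch mirrors the argument above (the function names and example bit positions are ours); it is meant only to make the collision-avoidance idea concrete, not to be an efficient word-RAM implementation.

```python
def choose_m(b, w):
    """Greedily pick m'_0..m'_{r-1} < r^3 avoiding m'_i + b_j - b_k (mod r^3),
    then spread the sums out by multiples of r^3 as in properties (2) and (3)."""
    r = len(b)
    r3 = r ** 3
    m_prime = []
    for _ in range(r):
        forbidden = {(mi + bj - bk) % r3 for mi in m_prime for bj in b for bk in b}
        m_prime.append(next(v for v in range(r3) if v not in forbidden))
    # m_i = m'_i + ((w - b_i + i*r^3) rounded down to the nearest multiple of r^3)
    return [mp + ((w - bi + i * r3) // r3) * r3
            for i, (mp, bi) in enumerate(zip(m_prime, b))]

def approx_sketch(x, b, m, w):
    """Mask the important bits, multiply, and read the bits back out of the
    positions b_i + m_i (all sums b_i + m_j are distinct, so nothing collides)."""
    r = len(b)
    masked = x & sum(1 << bi for bi in b)
    product = masked * sum(1 << mj for mj in m)
    return sum(((product >> (b[i] + m[i])) & 1) << i for i in range(r))

# Example with w = 32 and important bits b = [2, 9, 17, 30] (arbitrary choices)
b = [2, 9, 17, 30]
m = choose_m(b, 32)
assert approx_sketch(1 << 30, b, m, 32) == 0b1000   # only the top important bit is set
```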
7 Parallel Comparison
We need to be able to find where sketch($q$) lies among the sketches of the keys, sketch($x_0$) $<$ sketch($x_1$) $< \cdots <$ sketch($x_{k-1}$), at a given node in constant time. We can do this parallel comparison with standard operations, using something called the "node sketch."
Node Sketch: We store all the sketches of the $x_i$’s at a node in a single word by prepending a 1 to each and concatenating them. The result will look like this: 1sketch($x_0$) $\ldots$ 1sketch($x_{k-1}$).
In order to compare sketch($q$) to all the key sketches with one subtraction, we take sketch($q$) and make $k$ copies of it in a word 0sketch($q$) $\ldots$ 0sketch($q$). We denote this by sketch($q$)$^k$. If the sketches were 5 bits long, we would multiply sketch($q$) by 000001 $\ldots$ 000001 to get sketch($q$)$^k$.
Then we subtract this value from the node sketch. This lets us subtract 0sketch($q$) from each 1sketch($x_i$) with a single operation: since 1sketch($x_i$) is always bigger than 0sketch($q$), carrying will not cause the subtractions to interfere with each other. In fact, the first bit of each block will be 1 if and only if sketch($q$) $\leq$ sketch($x_i$). After subtracting, we AND the result with 100000 $\ldots$ 100000 to mask all but the first bit of each block.
The sketch($x_i$)'s are sorted within the node, so there is some index $j$ such that sketch($q$) $>$ sketch($x_i$) when $i < j$ and sketch($q$) $\leq$ sketch($x_i$) otherwise. We need to find this index $j$, or equivalently the number of blocks whose leading bit equals 1 in the above result ($j$ is then $k$ minus this count). Since these 1 bits form a contiguous run of blocks, this is also a special case of finding the index of the most significant 1 bit. To count the 1 bits, we can multiply by 000001 $\ldots$ 000001: all the bits which were set to 1 are summed into the most significant block of the result, so we can shift right (to bring that block to the bottom) and AND with 1111 to read off the total number of 1s.
In summary, we have:
1. Compute the node sketch.
2. Compute $\text{sketch}(q)^k$.
3. Subtract $\text{sketch}(q)^k$ from the node sketch.
4. AND the difference with 100000...100000.
5. Find the most significant bit / number of 1 bits of the result. This is the index of the 0 to 1 transition and the rank of the sketch.
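A minimal Python sketch of these five steps, assuming the individual sketches are already computed and each fits in f bits (so every block is f + 1 bits wide); the helper name is ours. For step 5 we simply use a popcount; the text's alternative is one more multiplication that sums the flags into the top block, or a most-significant-bit computation as in Section 8.

```python
def sketch_rank(q_sketch, key_sketches, f):
    """Return the index i such that sketch(x_j) < sketch(q) for j < i and
    sketch(q) <= sketch(x_j) for j >= i, given sorted f-bit key sketches."""
    k = len(key_sketches)
    block = f + 1                                    # 1 flag bit + f sketch bits

    # Step 1: node sketch = 1 sketch(x_0) ... 1 sketch(x_{k-1})  (x_0 in the top block)
    node = 0
    for s in key_sketches:
        node = (node << block) | (1 << f) | s

    # Step 2: sketch(q)^k = 0 sketch(q) ... 0 sketch(q)
    ones = sum(1 << (block * i) for i in range(k))   # 000001 ... 000001
    q_rep = q_sketch * ones

    # Steps 3-4: one subtraction for all blocks, then keep only each flag bit;
    # flag i stays 1 exactly when sketch(q) <= sketch(x_i), and no borrows cross blocks.
    flag_mask = sum(1 << (block * i + f) for i in range(k))
    diff = (node - q_rep) & flag_mask

    # Step 5: count the 1 flags; k minus this count is the sought index.
    return k - bin(diff).count("1")

assert sketch_rank(0b101, [0b001, 0b100, 0b110, 0b111], f=3) == 2
```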
8 Most Significant Set Bit
We conclude with the computation of the index of the most significant set bit of a $w$-bit word in $O(1)$ time, under the word RAM model. The solution is particularly messy, but it will use all the techniques that we have just seen for fusion trees. The first insight is to split the word $x$ into $\sqrt{w}$ clusters of $\sqrt{w}$ bits. Our strategy is to identify the first non-empty cluster (this is the hardest part), and then the index of the first 1-bit within that cluster.
To illustrate the following procedures, let $\sqrt{w} = 4$ and
$x = 0101 \ 0000 \ 1000 \ 1101$
1. Identifying non-empty clusters. This is done in $O(1)$ time with a series of bit tricks.
(a) Identify which clusters have the first bit set. Compute bitwise AND between $x$ and a constant $F$ to get $t_1$
\[
\begin{aligned}
x   &= \underbrace{0101}_{\sqrt{w}} \; \underbrace{0000}_{\sqrt{w}} \; \underbrace{1000}_{\sqrt{w}} \; \underbrace{1101}_{\sqrt{w}} \\
F   &= 1000 \quad 1000 \quad 1000 \quad 1000 \\
t_1 &= 0000 \quad 0000 \quad 1000 \quad 1000
\end{aligned}
\]
(b) Identify whether the remaining bits (those other than the first bit of a cluster) are set. Compute the bitwise XOR of $x$ and the previous result to get $t_2$, which is $x$ with the leading bit of every cluster cleared.
\[
\begin{aligned}
x   &= 0101 \quad 0000 \quad 1000 \quad 1101 \\
t_1 &= 0000 \quad 0000 \quad 1000 \quad 1000 \\
t_2 &= 0101 \quad 0000 \quad 0000 \quad 0101
\end{aligned}
\]
Now we subtract $t_2$ from $F$; if the 1-bit in a cluster of $F$ ends up getting borrowed away (so that it becomes a 0), then we know that there was a set bit among the remaining bits of the corresponding cluster:
\[
\begin{aligned}
F   &= 1000 \quad 1000 \quad 1000 \quad 1000 \\
t_2 &= 0101 \quad 0000 \quad 0000 \quad 0101 \\
t_3 &= 0xxx \quad 1000 \quad 1000 \quad 0xxx
\end{aligned}
\]
Finally, XOR this result with $F$ and keep only the leading bit of each cluster (AND with $F$): the leading bit of a cluster in $t_4$ is 1 exactly when that cluster of $x$ has a set bit among its remaining (non-leading) bits.
\[
\begin{aligned}
F   &= 1000 \quad 1000 \quad 1000 \quad 1000 \\
t_3 &= 0xxx \quad 1000 \quad 1000 \quad 0xxx \\
t_4 &= 1000 \quad 0000 \quad 0000 \quad 1000
\end{aligned}
\]
(c) Now just OR the results of the previous steps, $t_1$ and $t_4$; this tells us which clusters have set bits in them.
\[
\begin{aligned}
t_1 &= 0000 \quad 0000 \quad 1000 \quad 1000 \\
t_4 &= 1000 \quad 0000 \quad 0000 \quad 1000 \\
y   &= 1000 \quad 0000 \quad 1000 \quad 1000
\end{aligned}
\]
We can view $y$ as the summary vector of all the $\sqrt{w}$ clusters.
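Putting parts (a)-(c) together for the running example (\( w = 16 \), \( \sqrt{w} = 4 \)), here is a minimal Python sketch; Python integers stand in for machine words, and the extra mask in the computation of \( t_4 \) (keeping only the leading bit of each cluster) is our addition, matching the values displayed above.

```python
x = 0b0101_0000_1000_1101
F = 0b1000_1000_1000_1000   # the leading bit of every 4-bit cluster

t1 = x & F                  # (a) clusters whose leading bit is set
t2 = x ^ t1                 # (b) x with the leading bit of each cluster cleared
t3 = F - t2                 #     leading bit borrowed away iff that cluster of t2 is nonzero
t4 = (t3 ^ F) & F           #     leading bit set iff the remaining bits were nonzero
y = t1 | t4                 # (c) leading bit set iff that cluster of x is nonempty

assert y == 0b1000_0000_1000_1000
```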
2. Compute the perfect sketch of $y$. We will need this for the next step, where we perform a parallel comparison and need multiple copies of $\text{sketch}(y)$ in a single word. Above we computed $y$, which tells us which clusters have set bits in them. Unfortunately these indicator bits are spread out, but we can compress them into a $\sqrt{w}$-bit word using a perfect sketch. Fortunately, we know exactly how the $b_i$'s (the bits we care about for the sketch) are spaced in this case: we care about the first bit of each cluster, i.e., every $\sqrt{w}$-th bit. So
$$b_i = \sqrt{w} - 1 + i\sqrt{w}$$
To compute the sketch, we claim (without exact proof) that we can use
$$m_j = w - (\sqrt{w} - 1) - j\sqrt{w} + j$$
If we do this, then
$$b_i + m_j = w + (i - j)\sqrt{w} + j$$
will be distinct (no collisions) for $0 \leq i, j < \sqrt{w}$ and also conveniently
$$b_i + m_i = w + i$$
So to get $\text{sketch}(y)$, we just need to multiply $y \cdot m$, shift the result right by $w$, and keep the low $\sqrt{w}$ bits.
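Continuing the running example (\( w = 16 \), \( \sqrt{w} = 4 \)), the following Python lines carry out this multiplication; the final mask keeping only the low \( \sqrt{w} \) bits is our addition, and it discards the cross terms that land above position \( w + \sqrt{w} - 1 \).

```python
w, r = 16, 4                                     # word length and sqrt(w)
b = [r - 1 + i * r for i in range(r)]            # leading bit of each cluster: 3, 7, 11, 15
m = [w - (r - 1) - j * r + j for j in range(r)]  # the multiplier bits: 13, 10, 7, 4

y = 0b1000_0000_1000_1000                        # summary vector from step 1
sketch_y = ((y * sum(1 << mj for mj in m)) >> w) & ((1 << r) - 1)

assert sketch_y == 0b1011                        # clusters 3, 1 and 0 of x are nonempty
```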
3. Find the first 1-bit in $\text{sketch}(y)$. This will tell us the first (most significant) non-empty cluster of $x$. We perform a parallel comparison of $\text{sketch}(y)$ against all of the $\sqrt{w}$ powers of 2. In our example these are
$$
\begin{align*}
0001 \\
0010 \\
0100 \\
1000
\end{align*}
$$
This tells us between which consecutive powers of 2 $\text{sketch}(y)$ lies, and hence the most significant set bit of $\text{sketch}(y)$, which is the first non-empty cluster of $x$. Because we reduced $y$ to $\text{sketch}(y)$, which is only $\sqrt{w}$ bits, the words generated for the parallel comparison take up $\sqrt{w}(\sqrt{w} + 1) < 2w$ bits, i.e., less than two words, so we can do this parallel comparison in $O(1)$ time.
4. Now that we know the first (most significant) cluster $c$ of $x$ that has a set bit, we find the first set bit $d$ within that cluster. To do this, first shift $x$ right by $c \cdot \sqrt{w}$, then bitwise AND the result with the $\sqrt{w}$-bit mask $11 \ldots 1$ to isolate the bits of that cluster. Now we perform exactly the same type of parallel comparison as in the previous step to find the first set bit $d$.
5. Finally, we compute the index of the most significant set bit to be $c\sqrt{w} + d$.
Each step along the way takes $O(1)$ time, which makes this take $O(1)$ time overall.
References