# Requirements

We do not write code for the sake of writing code: we develop software to _help people perform tasks_.
These people can be "anyone on Earth" for widely-used software such as GitHub, "people who do a specific job" for internal applications,
or even a single person whose life can be helped with software.
In order to know _what_ software to develop, we must know what users need: their _requirements_.


## Objectives

After this lecture, you should be able to:
- Define what user requirements are
- Formalize requirements into _personas_ and _user stories_
- Develop software based on formalized requirements
- Understand _implicit_ vs _explicit_ requirements


## What are requirements?

A requirement is something users need.
For instance, a user might want to move from one place to another. They could then use a car.
But perhaps the user has other requirements, such as not wanting or not being able to drive, in which case a train could satisfy their requirements.
Users may also have more specific requirements, such as specifically wanting a low-carbon means of transportation,
and having to travel between places that are far from public transport, in which case an electric car powered with low-carbon electricity could be a solution.

Requirements are _not_ about implementation details.
A specific kind of electric motor or a specific kind of steel for the car doors are not user requirements.
However, users may have needs that lead to such choices being made by the system designers.

Sometimes you may encounter a division between "functional" and "non-functional" requirements, with the latter also being known as "quality attributes".
"Functional" requirements are those that directly relate to features, whereas "non-functional" ones include accessibility, security, performance, and so on.
The distinction is not always clear, but it is sometimes made anyway.

Defining requirements typically starts by discussing with users.
This is easier if the software has a small and well-defined set of users, such as software specifically made for one person to do their job in a company.
It is harder if the software has a large and ill-defined set of users, such as a search engine or an app to listen to music.

It is important, when listening to what users say, to keep track of what they _need_ rather than what they _want_.
What users claim they want is heavily influenced by what they already know and use.
For instance, someone used to traveling by horse who needs to go across a mountain might ask for a "flying horse", when in fact their need is crossing the mountain,
and thus a train with a tunnel or a plane would satisfy their requirements.
Similarly, before the modern smartphone, users would have asked for old-style phones purely because they did not even know a smartphone was feasible,
even if now they actually like their smartphone more than they liked their old phone.

What users want can be ambiguous, especially when there are many users.
If you select cells with "100" and "200" in a spreadsheet program such as Microsoft Excel and expand the selection, what should happen?
Should Excel fill the new cells with "300", "400", and so on? Should it repeat "100" and "200"?
What if you expand a selection containing the lone cell "Room 120"? Should it be repeated or should it become "Room 121", "Room 122", and so on?
There is no perfect answer to these questions, as any answer will leave some users unsatisfied, but a developer must make a choice.

Not listening to users' requirements can be costly, as Microsoft found out with Windows 8.
Windows 8's user interface was a major reinvention of the Windows interface, using a "start screen" rather than a menu, with full-screen apps that could be tiled rather than moved.
It was a radical departure from the Windows that users knew, and it was a commercial failure. Microsoft had to bring back the start menu, and abandoned the concept of tiled full-screen apps.
However, Apple later did something similar for a new version of their iPad operating system, and it worked quite well there, perhaps because users have different expectations on desktops and tablets.


## How can we formalize requirements?

Once you've discussed with plenty of users and gotten many ideas on what requirements users have, how do you consolidate this feedback into actionable items?
This is what formalizing requirements is about, and we will discuss formalizing _who_ with "personas" and _what_ with "user stories".

Who is the software for? Sometimes the answer to that question is obvious because it is intended for a small and specific set of users, but most large pieces of software are intended for many people.
In fact, large pieces of software typically have way too many users for any kind of personalized process to scale.
Instead, you can use _personas_ to represent groups of users.

A persona is an _abstract_ person that represents a group of similar users.
For instance, in a music app, one persona might be Alice, a student, who uses the app in her commute on public transport.
Alice is not a real person, and she does not need specific details such as a hair color or a nationality.
Instead, Alice is an abstract representation of many people who could use the app and all have similar features from the app's point of view,
namely that they use the app on public transport while commuting to their school, university, or other similar place.
Alice's requirements might lead to features such as downloading podcasts in advance at home, and listening with the screen turned off.
Another persona for the same app might be Bob, a pensioner, who uses the app while cooking and cleaning.
Bob is not a real person either, but instead represents a group of potential users who aren't so familiar with the latest technology and want to use the app while performing tasks at home.

While one can create personas for many potential groups of users, not all of them will make the cut.
Sticking with the music app example, another persona could be Carol, a "hacker" who wants to listen to pirated music.
Carol would need features such as loading existing music tracks into the app and bypassing copyright protection.
Is the app intended for people like Carol? That's up to the developers to decide.

One last word about personas: avoid over-abstracting. Personas are useful because they represent real people in ways that are helpful to development.
If your personas end up sounding like "John, a user who uses the app" or "Jane, a person who has a phone", they will not be useful.
Similarly, if you already know who exactly is in a group of users, there is no need to abstract it. "Sam, a sysadmin" is not a useful persona if your app has exactly one sysadmin: use the real person instead.

---

#### Exercise
What personas could a video chat app have?
**Example solutions:**

Anne, a manager who frequently talks to her team while working remotely.

Basil, a pensioner who wants to video chat his grandkids to stay in touch.

Carlos, a doctor who needs to talk to patients as part of a telemedicine setup.
---

What can users do? After defining who the software is for, one must decide what features to build.
_User stories_ are a useful tool to formalize features based on requirements, including who wants the feature, what the feature is, and what the context is.
Context is key because the same feature could be implemented in wildly different ways based on context.
For instance, "sending emails with information" is a feature a software system might have.
If the context is that users want to archive information, the emails should include very detailed information, but their arrival time matters little.
If the context is that users want a notification as soon as something happens, the emails should be sent immediately, and great care should be taken to avoid ending up in a spam filter.
If the context is that users want to share data with their friends who don't use the software, the emails should have a crisp design that contains only the relevant information so they can be easily forwarded.

There are many formats for user stories; in this course, we will use the three-part one "_as a ... I want to ... so that ..._".
This format includes the user who wants the feature, which could be a persona or a specific role, the feature itself, and some context explaining why that user wants that feature.
For instance, "As a student, I want to watch course recordings, so that I can catch up after an illness".
This user story lets developers build a feature that is actually useful to the person: it would not be useful to this student, for instance,
to build a course recording feature that is an archive accessible once the course has ended, since presumably the student will not be ill for the entire course duration.

Going back to our music app example, consider the following user story: "As Alice, I want to download podcasts in advance, so that I can save mobile data".
This implies the app should download the entire podcast in advance, but Alice still has mobile data; she just doesn't want to use too much of it.
A similar story with a different context could be "As a commuter by car, I want to download podcasts in advance, so that I can use the app without mobile data".
This is a different user story leading to a different feature: now the app cannot use mobile data at all, because the commuter simply does not have data at some points of their commute.

To evaluate user stories, remember the "INVEST" acronym:
- *I*ndependent: the story should be self-contained
- *N*egotiable: the story is not a strict contract but can evolve
- *V*aluable: the story should bring clear value to someone
- *E*stimable: the developers should be able to estimate how long the story will take to implement
- *S*mall: the story should be of reasonable size, not a huge "catch-all" story
- *T*estable: the developers should be able to tell what acceptance criteria the story has based on its text, so they can test their implementation

Stories that are too hard to understand and especially too vague will fail the "INVEST" test.

#### Exercise
What user stories could a video chat app have?
**Example solutions:**

As Anne, I want to see my calendar within the app, so that I can schedule meetings without conflicting with my other engagements.

As Basil, I want to launch a meeting from a message my grandkids send me, so that I do not need to spend time configuring the app.

As a privacy enthusiast, I want my video chats to be encrypted end-to-end, so that my data cannot be leaked if the app servers get hacked.
#### Exercise
Which of the following are good user stories and why?
1. As a user, I want to log in quickly, so that I don't lose time
2. As a Google account owner, I want to log in with my Gmail address
3. As a movie buff, I want to view recommended movies at the top on a dark gray background with horizontal scroll
4. As an occasional reader, I want to see where I stopped last time, so that I can continue reading
5. As a developer, I want to improve the login screen, so that users can log in with Google accounts
**Solutions:**

1 is too vague, 2 is acceptable since the reason is implicit and obvious, 3 is way too specific, 4 is great, and 5 is terrible as it relates to developers, not users.
## How can we develop from requirements?

You've listened to users, you abstracted them into personas and their requirements into user stories, and you developed an application based on that.
You're convinced your personas would love your app, and your implementation answers the needs defined by the user stories.
After spending a fair bit of time and money, you now have an app that you can demo to real users... and they don't like it.
It's not at all what they envisioned. What went wrong?
What you've just done, asking users for their opinion on the app, is _validation_: checking if what you specified is what users want.
This is different from _verification_, which is checking if your app correctly does what you specified.

One key to successful software is to do validation early and often, rather than leaving it until the end.
If what you're building isn't what users want, you should know as soon as possible, instead of wasting resources building something nobody will use.
To do this validation, you will need to build software in a way that can be described by users, using a _common language_ with them and _integrating_ them into the process so they can give an opinion.
We will see two ways to do this.

First, you need a _common language_ with your users, as summarized by Eric Evans in his 2003 book "Domain-Driven Design".
Consider making some candy: you could ask people in advance whether they want some candy containing NaCl, $C_{24}H_{36}O_{18}$, $C_{36}H_{50}O_{25}$, $C_{125}H_{188}O_{80}$, and so on.
This is a precise definition that you could give to chemists, who would then implement it. However, it's unlikely an average person will understand until your chemists actually build it.
Instead, you could ask people if they want candy with "salted caramel". This is less precise, as one can imagine different kinds of salted caramel, but much more understandable by the end users.
You don't need to create salted caramel candy for people to tell you whether they like the idea or not, and a discussion about your proposed candy will be much more fruitful using the term "salted caramel"
than using any chemical formula.

In his book, Evans suggests a few specific terms that can be taught to users, such as "entity" for objects that have an identity tied to a specific property,
"value object" for objects that are data aggregates without a separate identity, and so on.
The point is not which exact terms you use, but the idea that you should design software in a way that can be easily described to users.

Consider what a user might call a "login service", which identifies users by what they know as "email addresses".
A programmer used to the technical side of things might call this a `PrincipalFactory`, since "principals" is one way to refer to user identities, and a "factory" is an object that creates objects.
The identifiers could be technically called "IDs".
However, asking a user "What should happen when the ID is not found? Should the principal returned by the factory be `null`?" will yield puzzled looks.
Most users do not know any of these terms.
Instead, if the object in the code is named `LoginService`, and takes in objects of type `Email` to identify users, the programmer can now ask the user
"What should happen when the email is not found? Should the login process fail?" and get a useful answer.

Part of using the _right_ vocabulary to discuss with people is also using _specific_ vocabulary. Consider the term "person".
If you ask people at a university what a "person" is, they will mumble some vague answer about people having a name and a face, because they do not deal with general "people" in their jobs.
Instead, they deal with specific kinds of people.
For instance, people in financial services deal with "employees" and "contractors", and could happily teach you exactly what those concepts are,
how they differ, what kind of attributes they have, what operations are performed on their data, and so on.
Evans calls this a "bounded context": within a specific business domain, specific words have specific meanings, and those must be reflected in the design.
Imagine trying to get a financial auditor, a cafeteria employee, and a professor to agree on what a "person" is, and to discuss the entire application using a definition of "person" that includes all possible attributes.
It would take forever, and everyone would be quite bored.
Instead, you can talk to each person separately, represent these concepts separately in code, and have operations to link them together via a common identity,
such as one function `get_employee(email)`, one function `get_student(email)`, and so on.

Once you have a common language, you can also write test scenarios in a way users can understand.
This is _behavior-driven development_, which can be done by hand or with the help of tools such as [Cucumber](https://cucumber.io/) or [Behave](https://behave.readthedocs.io/).
The idea is to write test scenarios as three steps: "_given ... when ... then ..._", which contain an initial state, an action, and a result.

For instance, here is a Behave example from their documentation:
```
Scenario Outline: Blenders
  Given I put <thing> in a blender,
  when I switch the blender on
  then it should transform into <other thing>

Examples: Amphibians
  | thing         | other thing |
  | Red Tree Frog | mush        |

Examples: Consumer Electronics
  | thing        | other thing |
  | iPhone       | toxic waste |
  | Galaxy Nexus | toxic waste |
```
Using this text, one can write functions for "putting a thing in a blender", "switching on the blender", and "checking what is in the blender",
and Behave will run the functions for the provided arguments.
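As a sketch, those step functions might look as follows in Python. The `Blender` class and its transformation table are hypothetical stand-ins for the real feature under test, and the `try`/`except` lets the sketch run even if `behave` is not installed:

```python
try:
    from behave import given, when, then
except ImportError:
    # No-op stand-ins so the sketch also runs without behave installed.
    def given(_pattern):
        def decorator(func):
            return func
        return decorator
    when = then = given

# Hypothetical domain object standing in for the real feature under test.
class Blender:
    TRANSFORMS = {"Red Tree Frog": "mush", "iPhone": "toxic waste", "Galaxy Nexus": "toxic waste"}

    def __init__(self):
        self.thing = None
        self.result = None

    def add(self, thing):
        self.thing = thing

    def switch_on(self):
        self.result = self.TRANSFORMS.get(self.thing)

@given("I put {thing} in a blender")
def step_put_thing(context, thing):
    context.blender = Blender()
    context.blender.add(thing)

@when("I switch the blender on")
def step_switch_on(context):
    context.blender.switch_on()

@then("it should transform into {other_thing}")
def step_check_result(context, other_thing):
    assert context.blender.result == other_thing
```

Behave matches each `Given`/`When`/`Then` line of the scenario to a step function, filling in the `{thing}` placeholders from the `Examples` tables.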
Users don't have to look at the functions themselves, only at the text, and they can then state whether this is what they expected.
Perhaps only blue tree frogs, not red ones, need to turn into mush. Or perhaps the test scenario is precisely what they need, and developers should go implement the actual feature.

The overall workflow is as follows:
1. Discuss with users to get their requirements
2. Translate these requirements into user stories
3. Define test scenarios based on these user stories
4. Get feedback on these scenarios from users
5. Repeat as many times as needed until users are happy, at which point implementation can begin

#### Exercise
What language would you use to discuss a course registration system at a university (or any other system you use frequently), and what test scenarios could you define?
**Example solutions:**

A course registration system could have students, which are entities identified by a university e-mail who have a list of courses and grades associated with the courses, and
lecturers, who are associated with the courses they teach and can edit said courses. Courses themselves might have a name, a code, a description, and a number of credits.

Some testing scenarios might be "given that a user is already enrolled in a course, when the user tries to enroll again, then that has no effect", or "given that a lecturer
is in charge of a course, when the lecturer sets grades for a student in the course, then that student's grade is updated".
## What implicit requirements do audiences have?

Software engineers design systems for all kinds of people, from all parts of the world, with all kinds of needs.
Often some people have requirements that are _implicit_, yet are just as required as the requirements they will explicitly tell you about.
Concretely, we will see _localization_, _internationalization_, and _accessibility_.
Sometimes these are shortened to "l10n", "i18n", and "a11y", each having kept their first and last letter, with the remaining letters being reduced to their number.
For instance, there are 10 letters between "l" and "n" in "localization", thus "l10n".

_Localization_ is all about translations. Users expect all text in programs to be in their language, even if they don't explicitly think about it.
Instead of `print("Hello " + user)`, for instance, your code should use a constant that can be changed per language.
It is tempting to have a `HELLO_TEXT` constant with the value `"Hello "` for English, but this does not work for all languages because text might also come after the user name, not only before.
The code could thus use a function that takes care of wrapping the user name with the right text: `print(hello_text(user))`.

Localization may seem simple, but it also involves double-checking assumptions in your user interface and your logic.
For instance, a button that can hold the text "Log in" may not be wide enough when the text is the French "Connexion" instead.
A text field that is wide enough for each of the words in "Danube steamship company captain" might overflow with the German "Donaudampfschifffahrtsgesellschaftskapitän".
Your functions that provide localized text may need more information than you expect.
English nouns have no grammatical gender, so all nouns can use the same text, but French has two, German has three, and Swahili has [eighteen](https://en.wikipedia.org/wiki/Swahili_grammar#Noun_classes)!
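As a minimal sketch of why a `hello_text` function beats a `HELLO_TEXT` constant, here is one hypothetical shape it could take; the languages and phrasings below are illustrative, not a real translation table:

```python
# Hypothetical per-language templates; the user's name can appear anywhere
# in the string, which a simple "Hello " + user concatenation cannot express.
GREETING_TEMPLATES = {
    "en": "Hello {name}!",
    "fr": "Bonjour {name} !",
    "ja": "{name}さん、こんにちは！",  # the name comes first in Japanese
}

def hello_text(name: str, language: str) -> str:
    # Fall back to English when a translation is missing.
    template = GREETING_TEMPLATES.get(language, GREETING_TEMPLATES["en"])
    return template.format(name=name)

print(hello_text("Ada", "fr"))  # Bonjour Ada !
```

A real system would also need the extra grammatical information discussed here, such as gender and plural forms, which is why production code typically relies on a dedicated localization library rather than a hand-rolled dictionary.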
English nouns have one "singular" and one "plural" form, and so do many languages such as French, but this is not universal; Slovenian, for instance, has one ending for 1, one for 2, one for 3 and 4, and one for 5 and more.

Translations can have bugs just like software can.
For instance, the game Champions World Class Soccer's German translation infamously has a bug in which "shootout" is not, as it should be, translated to "schiessen",
but instead to "scheissen", which has an entirely different meaning despite being one letter swap away.
Another case of German issues is [in the game Grandia HD](https://www.nintendolife.com/news/2019/08/random_amazingly_grandia_hds_translation_gaffe_is_only_the_second_funniest_german_localisation_mistake):
missing an attack displays the text "fräulein", which is indeed "miss" but in an entirely different context.

Localization is not something you can do alone unless you are translating to your native language, since nobody knows all features of all languages.
Just like other parts of software, localization needs testing, in this case by native speakers.

_Internationalization_ is all about cultural elements other than language.
Consider the following illustration:

*(Illustration: a dirty t-shirt, a washing machine, and a clean t-shirt, laid out left to right.)*
What do you see? Since you're reading English text, you might think this is an illustration of a t-shirt going from dirty to clean through a washing machine.
But someone whose native language is read right to left, such as Arabic, might see the complete opposite: a t-shirt going from clean to dirty through the machine.
This is rather unfortunate, but it is the way human communication works: people have implicit expectations that are sometimes at odds. Software thus needs to adapt.
Another example is putting "Vincent van Gogh" in a list of people sorted by last name. Is Vincent under "G" for "Gogh" or under "V" for "van Gogh"?
That depends on who you ask: Dutch people expect the former, Belgians the latter. Software needs to adapt, otherwise at least one of these two groups will be confused.
Using the culture-specific format for dates is another example: "10/01" means very different dates to an American and to someone from the rest of the world.

One important part of internationalization is people's names.
Many software systems out there are built with odd assumptions about people's names, such as "they are composed entirely of letters", or "family names have at least 3 letters", or simply "family names are something everyone has".
In general, [programmers believe many falsehoods about names](https://shinesolutions.com/2018/01/08/falsehoods-programmers-believe-about-names-with-examples/),
which leads to a lot of pain for people whose names do not comply with those falsehoods, such as people with hyphens or apostrophes in their names, people whose names are one or two letters long,
people from cultures that do not have a concept of family names, and so on.
Remember that _a person's name is never invalid_.

Like localization, internationalization is not something you can do alone. Even if you come from the region you are targeting, that region is likely to contain many people from many cultures.
Internationalization needs testing.
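To make the point about names concrete, here is a sketch contrasting a hypothetical validator that encodes two of these falsehoods with the only check that is actually safe; the sample names are illustrative:

```python
import re

# A naive validator encoding common falsehoods:
# names are ASCII letters only, and at least 3 characters long.
def naive_is_valid_name(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z]{3,}", name) is not None

# The safe check: a person's name is never invalid,
# so only require that something was entered at all.
def is_valid_name(name: str) -> bool:
    return len(name.strip()) > 0

# All of these are real kinds of names, and the naive validator rejects every one.
for name in ["O'Brien", "Jean-Pierre", "Ng", "李"]:
    print(f"{name!r}: naive={naive_is_valid_name(name)}, safe={is_valid_name(name)}")
```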
_Accessibility_ is a property software has when it can be used by everyone, even people who cannot hear, cannot see, do not have hands, and so on.
Accessibility features include closed captions, text-to-speech, dictation, one-handed keyboards, and so on. User interface frameworks typically have accessibility features documented along with their other features.
These are not only useful to people who have permanent disabilities, but also to people who are temporarily unable to use some part of their body.
For instance, closed captions are convenient for people in crowded trains who forgot their headphones. Text-to-speech is convenient for people who are doing something else while using an app on their phone.
Features designed for people without hands are convenient for parents holding their baby.

Accessibility is not only a good thing to do from a moral standpoint: it is often legally required, especially in software for government agencies.
It is also good even from a selfish point of view: a more accessible app has more potential customers.


## Are all requirements ethical?

We've talked about requirements from users under the assumption that you should do whatever users need. But is that always the case?

Sometimes, requirements conflict with your ethics as an engineer, and such conflicts should not be brushed aside.
Just because "a computer is doing it" does not mean the task is acceptable or that it will be performed without any biases.
The computer might be unbiased, but it is executing code written by a human being, which replicates that human's assumptions and biases, as well as all kinds of issues in the underlying data.

For instance, there have been many variants of "algorithms to predict if someone will be a criminal", using features such as people's faces.
If someone were to go through a classroom and tell every student "you're a criminal" or "you're not a criminal", one would reasonably be upset and suspect all kinds of bad reasons for these decisions.
The same must go for a computer: there is no magical "unbiased", "objective" algorithm, and as a software engineer you should always be conscious of ethics.


## Summary

In this lecture, you learned:
- Defining and formalizing user needs with requirements, personas, and user stories
- Developing based on requirements through domain-driven design and behavior-driven development
- Understanding implicit requirements such as localization, internationalization, accessibility, and ethics

You can now check out the [exercises](exercises/)!

*CS-305: Software engineering*


# Performance

Producing correct results is not the only goal of software.
Results must be produced in reasonable time, otherwise users may not wait for a response and give up.
Providing a timely response is made harder by the interactions between software components and between software and hardware, as well as the inherent conflicts between different definitions of performance.
As a software engineer, you will need to find out what kind of performance matters for your code, measure the performance of your code, define clear and useful performance goals, and ensure the code meets these goals.
You will need to keep performance in mind when designing and writing code, and to debug the performance issues that cause your software to not meet its goals.


## Objectives

After this lecture, you should be able to:
- Contrast different metrics and scales for performance
- Create benchmarks that help you achieve performance objectives
- Profile code to find performance bottlenecks
- Optimize code by choosing the right algorithms and tradeoffs for a given scenario


## What is performance?
There are two key metrics for performance: _throughput_, the number of requests served per unit of time, and _latency_, the time taken to serve one request.

In theory, we would like latency to be constant per request and throughput constant over time, but in practice, this is rarely the case.
Some requests fundamentally require more work than others, such as ordering a pizza with 10 toppings compared to a pizza with only cheese and tomatoes.
Some requests go through different code paths because engineers made tradeoffs, such as ordering a gluten-free pizza taking more time
because instead of maintaining a separate gluten-free kitchen, the restaurant has to clean its only kitchen.
Some requests compete with other requests for resources, such as ordering a pizza right after a large table has placed an order in a restaurant with only one pizza oven.

Latency thus has many sub-metrics in common use: mean, median, 99th percentile, and even latency just for the first request, i.e., the time the system takes to start.
Which of these metrics you target depends on your use case, and has to be defined in collaboration with the customer.
High percentiles such as the 99th percentile make sense for large systems in which many components are involved in any request, as [Google describes](https://research.google/pubs/pub40801/) in their "Tail at Scale" paper.
Some systems actually care about the "100th percentile", i.e., the _worst-case execution time_, because performance can be a safety goal.
For instance, when an airplane pilot issues a command, the plane must reply quickly.
It would be catastrophic for the plane to execute the command minutes after the pilot has issued it, even if the execution is semantically correct.
For such systems, worst-case execution time is typically over-estimated through manual or automated analysis.
Some systems also need no variability at all, i.e., _constant-time_ execution.
This is typically used for cryptographic operations, whose timing must not reveal any information about secret keys or passwords.

In a system that serves one request at a time, throughput is the inverse of mean latency.
In a system that serves multiple requests at a time, there is no such correlation.
For instance, one core of a dual-core CPU might process a request in 10 seconds while another core serves 3 requests in that time: the throughput was 4 requests per 10 seconds, but the latency varied per request.

We used "time" as an abstract concept above, but in practice you will use concrete units such as minutes, seconds, or even as low as nanoseconds.
Typically, you will care about the _bottleneck_ of the system: the part that takes the most time.
There is little point in discussing other parts of the system, since their performance is not key.
For instance, if it takes 90 seconds to cook a pizza, it is not particularly useful to save 0.5 seconds when adding cheese to the pizza, or to save 10 nanoseconds in the software that sends the order to the kitchen.

If you're not sure how long a nanosecond is, check out [Grace Hopper's thought-provoking illustration](https://www.youtube.com/watch?v=gYqF6-h9Cvg) (the video has manually-transcribed subtitles you can enable).
Grace Hopper was a pioneer in programming languages. With her team, she designed COBOL, one of the first languages to use English-like commands.
As she [recalled](https://www.amazon.com/Grace-Hopper-Admiral-Computer-Contemporary/dp/089490194X) it,
"_I had a running compiler and nobody would touch it. They told me computers could only do arithmetic_". Thankfully, she ignored the naysayers, or we wouldn't be programming in high-level languages today!

_Amdahl's law_ states that the overall speedup achieved by optimizing one component is limited by that component's share of overall execution time.
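Stated as a formula, in one standard form of the law, with $p$ the component's share of total execution time and $s$ the speedup applied to that component:

$$\text{speedup}_{\text{overall}} = \frac{1}{(1 - p) + \frac{p}{s}}$$

Even an infinitely fast component ($s \to \infty$) caps the overall speedup at $\frac{1}{1 - p}$.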
For instance, if a function takes 2% of total execution time, it is not possible to speed up the system by more than 2% by optimizing that function, even if the function itself is made orders of magnitude faster.

As you can tell, the word "performance" hides a lot of details, and "improving performance" is usually too abstract a goal.
Optimizing a system requires clear goals defined in terms of specific workloads and metrics, and an understanding of which parts of the system currently take the most time.
Performance objectives for services used by customers are typically defined in terms of _Service Level Indicators_, which are measures such as "median request latency",
_Service Level Objectives_, which define goals for these measures such as "under 50ms", and _Service Level Agreements_, which add consequences to objectives such as "or we reimburse 20% of the customer's spending that month".

Picking useful objectives is a key first step. "As fast as possible" is usually not useful since it requires too many tradeoffs,
as [pizza chain Domino's found out](https://www.nytimes.com/1993/12/22/business/domino-s-ends-fast-pizza-pledge-after-big-award-to-crash-victim.html) after their promise of reimbursing late pizzas led to
an increase in car crashes for their drivers.
Systems should be fast enough, not perfect. What "enough" is depends on the context.
Some software systems have specific performance requirements.
Other times the performance requirements are implicit, making them harder to define.
For instance, in 2006 Google [found](http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html)
that traffic dropped 20% if their search results took 0.9s to show instead of 0.4s.
In 2017, Akamai [found](https://www.akamai.com/uk/en/about/news/press/2017-press/akamai-releases-spring-2017-state-of-online-retail-performance-report.jsp)
that even a 100ms increase in page load time could lead to a 7% drop in sales.
In general, if users feel that an interface is ""too slow"", they won't like it, and if they have another choice they may use it instead.


## How can one measure performance?

The process of measuring performance is called _benchmarking_, and it is a form of testing for performance.
Like tests, benchmarks are usually automated, and come with issues such as how to isolate a specific component rather than always benchmarking the entire system.
Benchmarks for individual functions are typically called _microbenchmarks_, while benchmarks for entire systems are ""end to end"".

In theory, benchmarking is a simple process: start a timer, perform an action, stop the timer.
But in practice, there are many tricky parts. How precise is the timer? What's the timer's resolution? How variable is the operation? How many measurements should be taken for the results to be valid?
Should outliers be discarded? What's the best API on each OS to time actions? How to avoid ""warm up"" effects? And caching effects? Are compiler optimizations invalidating the benchmark by changing the code?

You should never write your own benchmarking code. (Unless you become an expert in benchmarking, at which point you will know you can break this rule.)
Instead, use an existing benchmarking framework, such as JMH for Java, BenchmarkDotNet for .NET, or pytest-benchmark for Python.

In practice, to benchmark some code, you first need to define the workload you will use, i.e., the inputs, and the metric(s) you care about.
Then you need a _baseline_ for your benchmark. Absolute performance numbers are rarely useful; instead, one usually compares against some other numbers, such as the performance of the previous version of the software.
For low-level code or when benchmarking operations that take very little time, you may also need to set up your system in specific ways, such as configuring the CPU to not vary its frequency.
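As a rough sketch of these steps, here is a tiny comparison using Python's built-in `timeit` module rather than a full benchmarking framework, with a hypothetical membership-check workload:

```python
import timeit

# Hypothetical workload: membership checks over 10,000 items.
# We compare a candidate implementation (a set) against a baseline (a list),
# using the same inputs and the same metric for both.
items_list = list(range(10_000))
items_set = set(items_list)

def baseline():
    return 9_999 in items_list  # linear scan

def candidate():
    return 9_999 in items_set   # hash lookup

# timeit handles timer choice and repetition; taking the minimum of several
# repeats reduces the impact of noise from other processes on the machine.
baseline_time = min(timeit.repeat(baseline, number=1_000, repeat=5))
candidate_time = min(timeit.repeat(candidate, number=1_000, repeat=5))
print(f'speedup vs baseline: {baseline_time / candidate_time:.1f}x')
```

Note how even this tiny example has to make the choices discussed above: a fixed workload, a metric (minimum over five repeats), and a baseline to compare against.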

It is very easy to accidentally write benchmark code that is meaningless in a non-obvious way.
Always ask your colleagues to review your benchmark code, and provide the code whenever you provide the results.
Try small variations of the code to see if you get small variations in the benchmark results.
If you have any previous results, reproduce them to ensure that your overall setup works.

Beware of the ""small input problem"": it is easy to miss poorly-performing algorithms by only benchmarking with small inputs.
For instance, if an operation on strings is in `O(n^3)` of the string length and you only use a string of size 10, that is 1000 operations, which is instantaneous on any modern CPU.
And yet the algorithm is likely not great, since `O(n^3)` is usually not an optimal solution.

Make sure you are actually measuring what you want to measure, rather than intermediate measurements that may or may not be representative of the full thing.
For instance, [researchers found](https://dl.acm.org/doi/abs/10.1145/3519939.3523440) that some garbage collectors for Java had optimized for
the time spent in the garbage collector at the expense of the time taken to run an app overall. By trying to minimize GC time at all costs, some GCs actually increased the overall running time.

Finally, remember that it is not useful to benchmark and improve code that is already fast enough.
While microbenchmarking can be addictive, you should spend your time on tasks that have an impact on end users.
Speeding up an operation from 100ms to 80ms can feel great, but if users only care about the task taking less than 200ms, they would probably have preferred you spend time on adding features or fixing bugs.


## How can one debug performance?

Just like benchmarking is the performance equivalent of testing, _profiling_ is the performance equivalent of debugging.
The goal of profiling is to find out how much time each operation in a system takes.
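As an illustration with hypothetical function names, Python's built-in `cProfile` module reports how much time was spent in each function:

```python
import cProfile
import io
import pstats

def slow_part():
    # Stands in for the bottleneck: some expensive computation.
    return sum(i * i for i in range(200_000))

def fast_part():
    return sum(range(100))

def handle_request():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Sort the report by cumulative time: the bottleneck shows up near the top.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats('cumulative').print_stats(5)
report = buffer.getvalue()
print(report)
```

The report makes it obvious that `slow_part` is where the time goes, and that optimizing `fast_part` would be pointless.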

There are two main kinds of profilers: _instrumenting_ profilers that add code to the system to record the time before and after each operation,
and _sampling_ profilers that periodically stop the program to take a sample of what operation is running at that time.
A sampling profiler is less precise, as it could miss some operations entirely if the program happens to never be running them when the profiler samples,
but it also has far less overhead than an instrumenting profiler.

It's important to keep in mind that poor performance is usually due to a handful of big bottlenecks in the system that can be found and fixed with a profiler.
The average function in any given codebase is fast enough compared to those bottlenecks, and thus it is important to profile before optimizing to ensure an engineer's time is well spent.
For instance, a hobbyist once managed to [cut loading times by 70%](https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times-by-70/) in an online game by finding the bottlenecks and fixing them.
Both of them were `O(n^2)` algorithms that could be fixed to be `O(n)` instead.

There are plenty of profilers for each platform, and some IDEs come with their own. For instance, Java has VisualVM.


## How can one improve performance?

The typical workflow is as follows:
1. Double-check that the code is actually correct and that it is slower than what users need (otherwise there is no point in continuing)
2. Identify the bottleneck by profiling
3. Remove any unnecessary operations
4. Make some tradeoffs, if necessary
5. Go to step 1

Identifying the problem always requires measurements. Intuition is often wrong when it comes to performance.
See online questions such as [""Why is printing ""B"" dramatically slower than printing ""#""?""](https://stackoverflow.com/questions/21947452/why-is-printing-b-dramatically-slower-than-printing)
or [""Why does changing 0.1f to 0 slow down performance by 10x?""](https://stackoverflow.com/questions/9314534/why-does-changing-0-1f-to-0-slow-down-performance-by-10x).

Once a problem is found, the first task is to remove any unnecessary operations the code is doing, such as algorithms that can be replaced by more efficient ones without losing anything other than development time.
For instance, a piece of code may use a `List` when a `Set` would make more sense because the code mostly checks for membership.
Or a piece of code may be copying data when it could use a reference to the original data instead, especially if the data can be made immutable.
This is also the time to look for existing faster libraries or runtimes, such as [PyPy](https://www.pypy.org/) for Python, which are usually less flexible but often good enough.

After fixing the obvious culprits that do not require complex tradeoffs, one may then have to make serious tradeoffs.
Unfortunately, in practice, performance tradeoffs occur on many axes: latency, throughput, code readability and maintainability, and sub-axes such as specific kinds of latency.

### Separating common cases

One common tradeoff is to increase code size to reduce latency by adding specialized code for common requests.
Think of how public transport is more efficient than individual cars, but only works for specific routes.
Public transport cannot entirely replace cars, but for many trips it can make a huge difference.
If common requests can be handled more efficiently than in the general case, it is often worthwhile to do so.

### Caching

It is usually straightforward to reduce latency by increasing memory consumption through _caching_, i.e., remembering past results.
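A minimal sketch of this idea, memoizing a hypothetical expensive function with the Python standard library's `functools.lru_cache`:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)  # trade memory for latency: remember all past results
def expensive_lookup(key):
    global call_count
    call_count += 1  # stands in for a slow computation or database query
    return key * 2

expensive_lookup(21)  # computed and remembered
expensive_lookup(21)  # answered from the cache, no recomputation
print(call_count)     # 1
```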
In fact, the reason why modern CPUs are so fast is that they have multiple levels of caches on top of main memory, which is slow.

Beware, however: caching introduces potential correctness issues, as cached entries may live too long, and the cache may return obsolete data that should have been replaced with the results of a newer query.

### Speculation and lazy loading

You go to an online shop, find a product, and scroll down. ""Customers who viewed this item also viewed..."" and the shop shows you similar items you might want to buy, even before you make further searches.
This is _speculation_: potentially lowering latency and increasing throughput if the predictions are successful, at the cost of increasing resource use and possibly doing useless work if the predictions are wrong.

The opposite of speculation is _lazy loading_: only loading data when it is absolutely needed, and delaying any loading until then.
For instance, search engines do not give you a page with every single result on the entire Internet for your query, because you will most likely find what you were looking for among the first results.
Instead, the next results are only loaded if you need them. Lazy loading reduces the resource use for requests that end up not being necessary, but requires more work for requests that are actually necessary
yet were not executed early.

### Streaming and batching

Instead of downloading an entire movie before watching it, you can _stream_ it: your device will load the movie in small chunks, and while you're watching the first few chunks the next ones can be downloaded in the background.
This massively decreases latency, but it also decreases throughput since there are more requests and responses for the same amount of overall data.

The opposite of streaming is _batching_: instead of making many requests, make a few big ones.
+For instance, the postal service does not come pick up your letters every few minutes, since most trips would be a waste; instead, they come once or twice a day to pick up all of the letters at once. +This increases throughput, at the expense of increasing latency. + + +## Summary + +In this lecture, you learned: +- Defining performance: latency, throughput, variability, and objectives +- Benchmarking and profiling: measuring and understanding performance and bottlenecks +- Tradeoffs to improve performance: common cases, caching, speculation vs lazy loading, and streaming vs batching + +You can now check out the [exercises](exercises/)! +",CS-305: Software engineering +"# Debugging + +> **Prerequisite**: You are _strongly_ encouraged, though not strictly required, to use an IDE for the exercises about debugging in this lecture. +> Alternatively, you can use Java's built-in command-line debugger, `jdb`, but it is far less convenient than a graphical user interface. + +Writing code is only one part of software engineering; you will often spend more time _reading_ code than writing code. +This may be to understand a piece of code so you can extend it, or _debug_ existing code by finding and fixing bugs. +These tasks are a lot easier if you write code with readability and debuggability in mind, and if you know how to use debugging tools. + + +## Objectives + +After this lecture, you should be able to: + +- Develop _readable_ code +- Use a _debugger_ to understand and debug code +- Isolate the _root cause_ of a bug +- Develop _debuggable_ code + + +## What makes code readable? + +Take a look at [planets.py](exercises/lecture/planets.py) in the in-lecture exercise folder. +Do you find it easy to read and understand? Probably not. +Did you spot the fact that sorting doesn't work because the code uses a function that returns the sorted list, but does not use its return value, rather than an in-place sort? 
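The bug in question, reduced to a minimal sketch (in Python, not the actual `planets.py` code):

```python
planets = ['Mercury', 'Venus', 'Earth']

sorted(planets)  # returns a new sorted list... which is silently discarded
print(planets)   # ['Mercury', 'Venus', 'Earth']: nothing was sorted

planets.sort()   # the in-place version actually modifies the list
print(planets)   # ['Earth', 'Mercury', 'Venus']
```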
It's a lot harder to spot that bug when you have to use so much brain power just to read the code.

Unfortunately, hard-to-read code doesn't only happen in exercises within course lecture notes.
Consider the following snippet from the ScalaTest framework:

```scala
package org.scalatest.tools

object Runner {
  def doRunRunRunDaDoRunRun(...): Unit
}
```

What does this method do? The method name isn't going to help you. It's a reference to [a song from 1963](https://en.wikipedia.org/wiki/Da_Doo_Ron_Ron).
Sure, it's a funny reference, but wouldn't you rather be reading a name that told you what the method does?
Worse, [the method has 21 parameters](https://github.com/scalatest/scalatest/blob/282da2aff8f439906fe5e7d9c3112c84019545e7/jvm/core/src/main/scala/org/scalatest/tools/Runner.scala#L1044-L1066).
You can of course understand what the method does if you spend enough time on it, but should you have to?

Let's talk about five components of code readability: naming, documentation, comments, formatting, and consistency.

### Names

First, an example of _naming_ without even using code: measurement errors.
If you measure the presence of something that's absent, that's a `type I error`. If you measure the absence of something that's present, that's a `type II error`.
These kinds of errors happen all the time in, for instance, tests to detect viruses.
However, it's easy to forget which is type `I` and which is type `II`.
And even if you remember, a typo duplicating a character can completely change the meaning of a sentence.
Instead, you can use `false positive` and `false negative`.
These names are easier to understand, easier to remember, and more resilient to spelling mistakes.

Names in code can also make the difference between code that's easy to understand and code that's hard to even mentally parse.
A variable named `isUserOnline` in Java is fine... as long as it really is for an ""is the user online?"" boolean variable, that is.
If it's an integer, it's a lot more confusing.
And if you're writing Python instead, it's also a problem since Python uses underscores to separate words, a.k.a. ""snake case"", so it should be `is_user_online`.
A variable named `can_overwrite_the_data` is quite a long name, which isn't a problem by itself, but parts of it are useless: ""the"" adds nothing, and ""data"" is too vague to tell us what can actually be overwritten.

Names are not only about variable, method, or class names. Consider the following method call:

```java
split(""abc"", ""a"", true, true)
```

What does this method do? Good question. What if the call looked like this instead, with constants or enums?

```java
split(""abc"", ""a"",
      SplitOptions.REMOVE_EMPTY_ENTRIES,
      CaseOptions.IGNORE_CASE)
```

This is the same method call, but we've given names instead of values, and now the meaning is clearer.
These constants are only a workaround for Java's lack of named parameters, though. In Scala, and other languages like C#, you could call the method like this:

```scala
split(""abc"",
      separator = ""a"",
      removeEmptyEntries = true,
      ignoreCase = true)
```

The code is now much more readable thanks to explicit names, without having to write extra code.

A cautionary tale of good intentions in naming is _Hungarian notation_.
Charles Simonyi, a Microsoft engineer from Hungary, had the good idea to start his variable names with the kind of data the variables contained, such as `xPosition` for a position on the X axis,
or `cmDistance` for a distance in centimeters. This meant anyone could easily spot the mistake in a line such as `distanceTotal += speedLeft / timeLeft`, since dividing speed by time does not make a distance.
This became known as ""Hungarian notation"", because in Simonyi's native Hungary, names are spelled with the family name first, e.g., ""Simonyi Károly"".
Unfortunately, another group within Microsoft did not quite understand what Simonyi's goal was, and instead thought it was about the variable type.
So they wrote variable names such as `cValue` for a `char` variable, `lIndex` for a `long` index, and so on, which makes the names harder to read without adding any more information than is already in the type.
This became known as ""Systems Hungarian"", because the group was in the operating systems division, and unfortunately Systems Hungarian made its way throughout the ""Win32"" Windows APIs,
which were the main APIs for Windows until recently. Lots of Windows APIs got hard-to-read names because of a misunderstanding!
Once again, naming is only one way to solve this kind of issue. In the F# programming language, you can declare variables with units of measure, such as `let distance = 30<meter>` after declaring `meter` as a unit,
and the compiler will check that comparisons and computations make sense given the units.

---
#### Exercise
The following names all look somewhat reasonable; why are they poor?
- `pickle` (in Python)
- `binhex` (in Python)
- `LinkedList` (in Java's `java.util`)
- `vector` (in C++'s `std`)
- `SortedList` (in C#'s `System.Collections`)
<details>
<summary>Answers (click to expand)</summary>

`pickling` is a rather odd metaphor for serializing and deserializing data, as if it were about ""preserving"" the data.

`binhex` sounds like a name for some binary and hexadecimal utilities, but it's actually for a module that handles an old Mac format.

A linked list and a doubly linked list are not the same thing, yet Java names the latter as if it were the former.

A vector has a specific meaning in mathematics; C++'s `vector` is really a resizable array.

`SortedList` is an acceptable name for a sorted list class. But the class with that name is a sorted map!

</details>

---

Overall, names are a tool to make code clear and succinct, as well as consistent with other code so that readers don't have to explicitly think about names.

### Documentation

_Documentation_ is a tool to explain _what_ a piece of code does.

Documentation is the main way developers learn about the code they use. When writing code, developers consult its documentation comments, typically within an IDE as tooltips.
Documentation comments should thus succinctly describe what a class or method does, including information developers are likely to need such as whether it throws exceptions,
or whether it requires its inputs to be in some specific format.

### Comments

_Comments_ are a tool to explain _why_ a piece of code does what it does.
Importantly, comments should not say _how_ a piece of code does what it does, as this information already exists in the code itself.

Unfortunately, not all code is ""self-documenting"".
Comments are a way to explain tricky code.
Sometimes, code has to be written in a way that looks overly complicated or wrong because the code is working around some problem in its environment, such as a bug in a library,
or a compiler that only produces fast assembly code in specific conditions.

Consider the following good example of a comment, taken from an old version of the Java development kit's `libjsound`:

```c
/* Workaround: 32bit app on 64bit linux gets assertion failure trying to open ports.
   Until the issue is fixed in ALSA (bug 4807) report no midi in devices in the configuration. */
if (jre32onlinux64) { return 0; }
```

This is a great example of an inline comment: it explains what the external problem is, what the chosen solution is, and refers to an identifier for the external problem.
This way, a future developer can look up that bug in ALSA, a Linux audio system, and check whether it was fixed in the meantime so that the code working around the bug can be deleted.
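To make the distinction between ""how"" and ""why"" comments concrete, here is a sketch with a hypothetical CSV-parsing context:

```python
rows = ['header', 'alice', 'bob']

# Bad comment: it restates *how*, which the code already says.
start = 1  # set start to 1

# Better comment: it explains *why*, which the code cannot say.
start = 1  # skip the first row, which is a header rather than data

print(rows[start:])  # ['alice', 'bob']
```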

Inline comments are a way to explain code beyond what code itself can do, which is often necessary even if it ideally should not be.
This explanation can be for the people who will review your code before accepting your proposed changes, or for colleagues who will read your code when working on the codebase months later.
Don't forget that one of these ""colleagues"" is likely ""future you"". No matter how clear you think a piece of code is right now, future you will be grateful for comments that explain the non-obvious parts.

### Formatting

Formatting is all about making code easier to read. You don't notice good formatting, but you do notice bad formatting, and it makes it harder to focus.

Here is a real world example of bad formatting:

```c
if (!update(&context, &params))
    goto fail;
    goto fail;
```

Did you spot the problem? The code looks like the second `goto` is redundant, because it's formatted that way.
But this is C. The second `goto` is actually outside of the scope of the `if`, and is thus always executed.
This was a real bug that triggered [a vulnerability](https://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-goto-fail-apples-ssl-bug-explained-plus-an-unofficial-patch/) in Apple products.

Some languages enforce at least some formatting consistency, such as Python and F#. But as you saw with the `planets.py` exercise earlier, that does not mean it's impossible to format one's code poorly.

### Consistency

Should you use `camelCase` or `snake_case` for your names? 4 or 8 spaces for indentation? Or perhaps tabs? So many questions.

This is what _conventions_ are for. The entire team decides, once, what to do.
Then every member of the team accepts the decisions, and benefits from a consistent code style without having to explicitly think about it.

Beware of a common problem called _bikeshedding_ when deciding on conventions.
The name comes from the story illustrating it: a city council meeting has two items on the agenda, the maintenance of a nuclear power plant and the construction of a bike shed.
The council quickly approves the nuclear maintenance, which is very expensive, because they all agree that this maintenance is necessary to continue to provide electricity to the city.
Then the council spends an hour discussing the plans for the bike shed, which is very cheap. Isn't it still too expensive? Surely the cost can be reduced a bit; a bike shed should be even cheaper.
Should it be blue or red? Or perhaps gray? How many bikes does it need to contain?
It's easy to spend lots of time on small decisions that ultimately don't matter much, because it's easy to focus on them.
But that time should be spent on bigger decisions that are more impactful, even if they are harder to discuss.

Once you have agreed on a convention, you should use tools to enforce it, not manual efforts.
Command-line tools exist, such as `clang-format` for C code, as well as tools built into IDEs, which can be configured to run whenever you save a file.
That way, you no longer have to think about what the team's preferences are; tools do it for you.


## How can one efficiently debug a program?

Your program just got a bug report from a user: something doesn't work as expected. Now what?
The presence of a bug implies that the code's behavior doesn't match the intent of the person who wrote it.

The goal of _debugging_ is to find and fix the _root cause_ of the bug, i.e., the problem in the code that is the source of the bug.
This is different from the _symptoms_ of the bug, in the same way that symptoms of a disease such as loss of smell are different from a root cause such as a virus.

You will find the root cause by repeatedly asking ""_why?_"" when you see symptoms until you get to the root cause.
For instance, let's say you have a problem: you arrived late to class.
Why?
Because you woke up late. +Why? Because your alarm didn't ring. +Why? Because your phone had no battery. +Why? Because you forgot to plug your phone before going to sleep. +Aha! That's the cause of the bug. If you had stopped at, say, ""your alarm didn't ring"", and tried to fix it by adding a second phone with an alarm, you would simply have two alarms that didn't ring, +because you would forget to plug in the second phone as well. +But now that you know you forgot to plug in your phone, you can attack the root cause, such as by putting a post-it above your bed reminding you to charge your phone. +You could in theory continue asking ""why?"", but it stops being helpful after a few times. +In this case, perhaps the ""real"" root cause is that you forget things often, but you cannot easily fix that. + +At a high level, there are three steps to debugging: reproduce the bug, isolate the root cause, and debug. + +Reproducing the bug means finding the conditions under which it appears: +- What environment? + Is it on specific operating systems? At specific times of the day? In specific system languages? +- What steps need to be taken to uncover the bug? + These could be as simple as ""open the program, click on the 'login' button, the program crashes"", or more complex, such as creating + multiple users with specific properties and then performing a sequence of tasks that trigger a bug. +- What's the expected outcome? That is, what do you expect to happen if there is no bug? +- What's the actual outcome? This could be simply ""a crash"", or it could be something more complex, such as ""the search results are empty even though there is one item matching the search in the database"" +- Can you reproduce the bug with an automated test? This makes it easier and less error-prone to check if you have fixed the bug or not. + +Isolating the bug means finding roughly where the bug comes from. 
For instance, if you disable some of your program's modules by commenting out the code that uses them, does the bug still appear? Can you find which modules are necessary to trigger the bug?
You can also isolate using version control: does the bug exist in a previous commit? If so, what about an even older commit? Can you find the one commit that introduced the bug?
If you can find which commit introduced the bug, and the commit is small enough, you have drastically reduced the amount of code you need to look at.

Finally, once you've reproduced and isolated the bug, it's time to debug: see what happens and figure out why.
You could do this with print statements:
```c
printf(""size: %u, capacity: %u\n"", size, capacity);
```
However, print statements are not convenient. You potentially have to write a lot of statements, especially if you want to see the values within a data structure or an object.
You may not even be able to see the values within an object if they are private members, at which point you need to add a method to the object just to print some of its private members.
You also need to manually remove prints after you've fixed the bug.
Furthermore, if you realize while executing the program that you forgot to print a specific value, using prints forces you to stop the program, add a print, and run it again, which is slow.

Instead of print statements, use a tool designed for debugging: a debugger!

### Debuggers

A debugger is a tool, typically built into an IDE, that lets you pause the execution of a program wherever you want, inspect what the program state is and even modify the state,
and generally look at what a piece of code is actually doing without having to modify that piece of code.

One remark about debuggers, and tools in general: some people think that not using a tool, and doing it the ""hard"" way instead, somehow makes them better engineers.
This is completely wrong; it's the other way around: knowing what tools are available and using them properly is key to being a good engineer.
Just like you would ignore people telling you to not take a flashlight and a bottle of water when going hiking in a cave, ignore people who tell you to not use a debugger or any other tool you think is useful.

Debuggers also work for software that runs on other machines, such as a server: as long as you can launch a debugging tool there, you can run the graphical debugger on your own machine to debug the program on the remote machine.
There are also command-line debuggers, such as `jdb` for Java or `pdb` for Python, though these are not as convenient since you must manually input commands to, e.g., see what values variables have.

The one prerequisite debuggers have is a file with _debug symbols_: the link between the source code and the compiled code.
That is, when the program executes a specific line of assembly code, what line is it in the source code? What variables exist at that point, and in which CPU registers are they?
This is of course not necessary for interpreted languages such as Python, for which you have the source code anyway.
It is technically possible to run a debugger without debug symbols, but you then have to figure out how the low-level details map to high-level concepts yourself, which is tedious.

Let's talk about six key questions you might ask yourself while debugging, and how you can answer them with a debugger.

_Does the program reach this line?_
Perhaps you wonder if the bug triggers when a particular line of code executes.
To answer this question, use a _breakpoint_, which you can usually do by right-clicking on a line and selecting ""add a breakpoint"" from the context menu.
Once you have added a breakpoint, run the program in debug mode, and execution will pause once that line is reached.
Debuggers typically allow more advanced breakpoints as well, such as ""break only if some condition holds"", or ""break once every N times this line is executed"".

You can even use breakpoints to print things instead of pausing execution.
Wait, didn't we just say prints weren't a good idea? The reason why printing breakpoints are better is that you don't need to edit the code, and thus don't need to revert those edits later,
and you can change what is printed where while the program is running.

_What's the value of this variable?_
You've added a breakpoint, the program ran to it and paused execution, and now you want to see what's going on.
In an IDE, you can typically hover your mouse over a variable in the source code to see its value while the program is paused, as well as view a list of all local variables.
This includes the values within data structures such as arrays, and the private members in classes.
You can also execute code to ask questions, such as `n % 12 == 0` to see if `n` is currently a multiple of 12, or `IsPrime(n)` if you have an `IsPrime` method and want to see what it returns for `n`.

_What if instead...?_
You see that a variable has a value you didn't expect, and wonder if the bug would disappear if it had a different value.
Good news: you can try exactly that. Debuggers typically have some kind of ""command"" window where you can write lines such as `n = 0` to change the value of `n`, or `lst.add(""x"")` to add `""x""` to the list `lst`.

_What will happen next?_
The program state looks fine, but maybe the next line is what causes the problem?
""Step"" commands let you execute the program step-by-step, so that you can look at the program state after executing each step to see when something goes awry.
""Step into"" will let you enter any method that is called, ""step over"" will go to the next line instead of entering a method, and ""step out"" will finish executing the current method.
Some debuggers have additional tools, such as a ""smart"" step that only goes inside methods with more than a few lines.
Depending on the programming language and the debugger, you might even be able to change the instruction pointer to whichever line of code you want, and edit some code on the fly without having to stop program execution.

Note that you don't have to use the mouse to run the program and step through it: debuggers typically have keyboard shortcuts, such as using F5 to run, F9 to step into, and so on, and you can usually customize those.
Thus, your workflow will be pressing a key to run, looking at the program state after the breakpoint is hit, then pressing a key to step, looking at the program state, stepping again, and so on.

_How did we get here?_
You put a breakpoint in a method, the program reached it and paused execution, but how did the program reach this line?
The _call stack_ is there to answer this: you can see which method called you, which method called that one, and so on.
Not only that, but you can view the program state at the point of that call. For instance, you can see what values were given as arguments to the method that called the method that called the method you are currently in.

_What happened to cause a crash?_
Wouldn't it be nice if you could see the program state at the point at which there was a crash on a user's machine?
Well, you can! The operating system can generate a _crash dump_ that contains the state of the program when it crashes, and you can load that crash dump into a debugger,
along with the source code of the program and the debugging symbols, to see what the program state was. This is what happens when you click on ""Report the problem to Microsoft"" after your Word document crashes.
Note that this only works with crashes, not with bugs such as ""the behavior is not what I expect"" since there is no automated way to detect such unexpected behavior.
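The call stack that answers ""how did we get here?"" is maintained by the language runtime, so you can even read it from code; as a rough illustration (with hypothetical function names), Python's standard `inspect` module exposes the same chain of calls a debugger's call-stack view shows:

```python
import inspect

def helper():
    # The chain of calls that led to the current line, innermost first,
    # just like a debugger's call-stack view.
    return [frame.function for frame in inspect.stack()]

def process_order(order_id):
    return helper()

def handle_request():
    return process_order(42)

calls = handle_request()
print(calls[:3])  # ['helper', 'process_order', 'handle_request']
```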

### Debugging in practice

When using a debugger to find the root cause of a bug, you will add a breakpoint, run the program until execution pauses to inspect the state, optionally make some edits to the program given your observations,
and repeat the cycle until you have found the root cause.

However, sometimes you cannot figure out the problem on your own, and you need help.
This is perfectly normal, especially if you are debugging code written by someone else.
In this case, you can ask a colleague for help, or even post on an Internet forum such as [StackOverflow](https://stackoverflow.com).
Come prepared, so that you can help others help you. What are the symptoms of the bug? Do you have an easy way to reproduce the bug? What have you tried?

Sometimes, you start explaining your problem to a colleague, and during your explanation a light bulb goes off in your head: there's the problem!
Your colleague then stares at you, happy that you figured it out, but a bit annoyed to be interrupted.
To avoid this situation, start by _rubber ducking_: explaining your problem to a rubber duck, or to any other inanimate object.
Talk to the object as if it were a person, explaining what your problem is.
The reason this works is that when we explain a problem to someone else, we typically explain what is actually happening, rather than what we wish was happening.
If you don't find the problem while explaining it to the duck, at least you have rehearsed how you will explain the bug, and you will be able to better explain it to a human.

There is only one way to get better at debugging: practice doing it. Next time you encounter a bug, use a debugger.
The first few times, you may be slower than you would be without one, but after that your productivity will skyrocket.

---
#### Exercise
Run the code in the [binary-tree](exercises/lecture/binary-tree) folder.
First, run it. It crashes!
Use a debugger to add breakpoints and inspect what happens until you figure out why, and fix the bugs.
Note that the crash is not the only bug.
<details>
<summary>Solution (click to expand)</summary>

First, there is no base case to the recursive method that builds a tree, so you should add one to handle the `list.size() == 0` case.

Second, the bounds for sub-lists are off: they should be `0..mid` and `mid+1..list.size()`.

Third, there is a correctness bug: the constructor uses `l` twice, when it should set `right` to `r`. This would not have happened if the code used better names!

We provide a [corrected version](exercises/solutions/lecture/BinaryTree.java).

</details>

## What makes code debuggable?

The inventor [Charles Babbage](https://en.wikipedia.org/wiki/Charles_Babbage) once said about a machine of his:
"_On two occasions, I have been asked 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?'_
_I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question._"

It should be clear that a wrong input cannot lead to a correct output.
Unfortunately, often a wrong input leads to a wrong output _silently_: there is no indication that anything wrong happened.
This makes it hard to find the root cause of a bug.
If you notice something is wrong, is it because the previous operation did something wrong?
Or because the operation 200 lines of code earlier produced garbage that then continued unnoticed until it finally caused a problem?

[Margaret Hamilton](https://en.wikipedia.org/wiki/Margaret_Hamilton_\(software_engineer\)),
who coined the term "software engineering" to give software legitimacy based on her experience developing software for early NASA missions,
[said](https://www.youtube.com/watch?v=ZbVOF0Uk5lU) of her early work "we learned to spend more time up front [...] so that we wouldn't waste all that time debugging".

We will see three methods to make code debuggable: defensive programming, logging, and debug-only code.

### Defensive programming

Bugs are attacking you, and you must defend your code and data!
That's the idea behind _defensive programming_: make sure that problems that happen outside of your code cannot corrupt your state or cause you to return garbage.
These problems could for instance be software bugs, humans entering invalid inputs, humans deliberately trying to attack the software, or even hardware corruption.

It may seem odd to worry about hardware corruption, but it happens more often than one thinks;
for instance, a single bit flip can turn `microsoft.com` into `microsmft.com`, since `o` is usually encoded as `01101111` in binary, which can flip to `01101101`, the encoding for `m`.
Software that intends to talk to `microsoft.com` could thus end up receiving data from `microsmft.com` instead, which may be unrelated, specifically set up for attacks, or
[an experiment to see how much this happens](http://dinaburg.org/bitsquatting.html).

Instead of silently producing garbage, code should fail as early as possible.
The closer a failure is to its root cause, the easier it is to debug.

The key tool to fail early is _assertions_.
An assertion is a way to check whether the code actually behaves in the way the engineer who wrote it believes it should.

For instance, if a piece of code finds the `index` of a `value` in an `array`, an engineer could write the following immediately after finding `index`:
```java
if (array[index] != value) {
    throw new AssertionError(...);
}
```
If this check fails, there must be a bug in the code that finds `index`. The "isolate the bug" step of debugging is already done by the assertion.

What should be done if the check fails, though? Crash the program?
This depends on the program. Typically, if there is a way to cut off whatever caused the problem from the rest of the program, that's a better idea.
For instance, the current _request_ could fail.
This could be a request made to a server, for instance, or an operation triggered by a user pressing a button.
The software can then display a message stating an error occurred, enabling the user to either retry or do something else.
However, some failures are so bad that it is not reasonable to continue.
For instance, if the code loading the software's configuration fails an assertion, there is no point in continuing without the configuration.
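The idea of failing the current request rather than the whole program can be sketched as follows; the `process` and `handleRequest` names are hypothetical, and a real server would log the failure rather than silently swallowing it:

```java
import java.util.List;

public class RequestLoop {
    // Hypothetical request processing; an empty request trips an assertion,
    // signalling a bug in whatever produced the request.
    static String process(String request) {
        if (request.isEmpty()) {
            throw new AssertionError("empty request should have been rejected earlier");
        }
        return "ok: " + request;
    }

    // The request boundary is where failures are contained:
    // one bad request fails, but the program keeps serving the others.
    static String handleRequest(String request) {
        try {
            return process(request);
        } catch (AssertionError e) {
            return "error: please retry";
        }
    }

    public static void main(String[] args) {
        for (String request : List.of("hello", "", "world")) {
            System.out.println(handleRequest(request));
        }
    }
}
```

The second request fails, yet the third one is still served: the assertion stopped the bad request before it could corrupt anything, and the boundary kept the failure contained.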

An assertion that must hold when calling a method is a _precondition_ of that method.
For instance, an `int pickOne(int[] array)` method that returns one of the array's elements likely has as precondition "the array isn't `null` and has at least `1` element".
The beginning of the method might look like this:
```java
int pickOne(int[] array) {
    if (array == null || array.length == 0) {
        throw new IllegalArgumentException(...);
    }
    // ...
}
```
If a piece of code calls `pickOne` with a null or empty array, the method will throw an `IllegalArgumentException`.

Why bother checking this explicitly when the method would fail early anyway if the array was null or empty, since the method will dereference the array and index its contents?
The type of exception thrown indicates _whose fault it is_.
If you call `pickOne` and get a `NullPointerException`, it is reasonable to assume that `pickOne` has a bug, because this exception indicates
the code of `pickOne` believes a given reference is non-null, since it dereferences it, yet in practice the reference is null.
However, if you call `pickOne` and get an `IllegalArgumentException`, it is reasonable to assume that your code has a bug,
because this exception indicates you passed an argument whose value is illegal.
Thus, the type of exception helps you find where the bug is.

An assertion that must hold when a method returns is a _postcondition_ of that method.
In our example, the postcondition is "the returned value is some value within the array", which is exactly what you call `pickOne` to get.
If `pickOne` returns a value not in the array, code that calls it will yield garbage, because the code expected `pickOne` to satisfy its contract yet this did not happen.
It isn't reasonable to insert assertions every time one calls a method to check that the returned value is acceptable; instead, it's up to the method to check that it honors its postcondition.
For instance, the end of `pickOne` might look like this:
```java
int result = ...
if (!contains(array, result)) {
    throw new AssertionError(...);
}
return result;
```
This way, if `result` was computed incorrectly, the code will fail before corrupting the rest of the program with an invalid value.

Some assertions are both pre- and postconditions for the methods of an object: _object invariants_.
An invariant is a condition that always holds from the outside.
It may be broken during an operation, as long as this is not visible to the outside world because it is restored before the end of the operation.
For instance, consider the following fields for a stack:
```java
class Stack {
    private int[] values;
    private int top; // top of the stack within `values`
}
```
An object invariant for this class is `-1 <= top < values.length`, i.e., either `top == -1`, which means the stack is empty, or `top` points to the top value of the stack within the array.
One way to check invariants is to write an `assertInvariants` method that asserts them and call it at the end of the constructor and at the beginning and end of each method.
All methods of the class must preserve the invariant so that they can also rely on it holding when they get called.
This is one reason encapsulation is so useful: if anyone could modify `values` or `top` without going through the methods of `Stack`,
there would be no way to enforce this invariant.

Consider the following Java method:
```java
void setWords(List<String> words) {
    this.words = words;
}
```
It seems trivially correct, and yet, it can be used in the following way:
```java
setWords(badWords);
badWords.add("Bad!");
```
Oops! Now the state of the object that holds `words` has been modified from outside of the object, which could break any invariants the object is supposed to have.

To avoid this and protect one's state, _data copies_ are necessary when dealing with mutable values:
```java
void setWords(List<String> words) {
    this.words = new ArrayList<>(words);
}
```
This way, no changes can occur to the object's state without its knowledge.
The same goes for reads, with `return this.words` being problematic and `return new ArrayList<>(this.words)` avoiding the problem.

Even better, if possible the object could use an _immutable_ list for `words`, such as Scala's default `List[A]`.
This fixes the problem without requiring data copies, which slow down the code.

---
#### Exercise
Check out the code in the [stack](exercises/lecture/stack) folder, which contains an `IntStack` class and a usage example.
Add code to `IntStack` to catch problems early, and fix any bugs you find in the process.
First, look at what the constructor should do. Once you've done that, add an invariant and use it, and a precondition for `push`.
Then fix any bugs you find.
<details>
<summary>Solution (click to expand)</summary>

First, the constructor needs to throw if `maxSize < 0`, since that is invalid.

Second, the stack should have the invariant `-1 <= top < values.length`, as discussed above.

After adding this invariant, note that `top--` in `pop` can break the invariant since it is used unconditionally. The same goes for `top++` in `push`.
These need to be changed to only modify `top` if necessary.

To enable the users of `IntStack` to safely call `push`, one can expose an `isFull()` method, and use it as a precondition of `push`.

We provide a [corrected version](exercises/solutions/lecture/Stack.java).

</details>
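To make the invariant-checking pattern concrete before moving on, here is a minimal sketch of an integer stack with an `assertInvariants` method called from the constructor and each operation. It is an illustrative simplification, not the exercise's solution:

```java
public class IntStack {
    private int[] values;
    private int top; // index of the top element within `values`, or -1 if empty

    IntStack(int maxSize) {
        if (maxSize < 0) {
            throw new IllegalArgumentException("maxSize must be non-negative");
        }
        values = new int[maxSize];
        top = -1;
        assertInvariants(); // the constructor must establish the invariant
    }

    void push(int value) {
        assertInvariants();
        if (top == values.length - 1) {
            throw new IllegalStateException("stack is full");
        }
        values[++top] = value;
        assertInvariants();
    }

    int pop() {
        assertInvariants();
        if (top == -1) {
            throw new IllegalStateException("stack is empty");
        }
        int result = values[top--];
        assertInvariants();
        return result;
    }

    private void assertInvariants() {
        // The invariant discussed above: -1 <= top < values.length
        if (top < -1 || top >= values.length) {
            throw new AssertionError("invariant violated: top = " + top);
        }
    }

    public static void main(String[] args) {
        IntStack stack = new IntStack(2);
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // prints 2
        System.out.println(stack.pop()); // prints 1
    }
}
```

Note how `push` and `pop` check their preconditions first and throw `IllegalStateException` to blame the caller, while `assertInvariants` throws `AssertionError` to blame the class itself.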

### Logging

_What happened in production?_
If there was a crash, then you can get a crash dump and open it in a debugger.
But if it's a more subtle bug where the outcome "looks wrong" to a human, how can you know what happened to produce this outcome?

This is where _logging_ comes in: recording what happens during execution so that the log can be read in case something went wrong.
One simple way to log is print statements:
```python
print("Request: " + request)
print("State: " + state)
```
This works, but is not ideal for multiple reasons.
First, the developer must choose what to log at the time of writing the program.
For instance, if logging every function call is considered too much for normal operation, then the log of function calls will never be recorded, even though in some cases it could be useful.
Second, the developer must choose how to log at the time of writing the program.
Should the log be printed to the console? Saved to a file? Both? Perhaps particular events should send an email to the developers?
Third, using the same print function for every log makes it hard to see what's important and what's not so important.

Instead of using a specific print function, logging frameworks provide the abstraction of a log with multiple levels of importance, such as Python's logging module:
```python
logging.debug("Detail")
logging.info("Information")
logging.warning("Warning")
logging.error("Error")
logging.critical("What a Terrible Failure")
```
The number of log levels and their names change in each logging framework, but the point is that there are multiple ones and they do not imply anything about where the log goes.

Engineers can write logging calls for everything they believe might be useful, using the appropriate log level, and decide _later_ what to log and where to log.
For instance, by default, "debug" and "info" logs might not even be stored, as they are too detailed and not important enough.
But if there is currently a subtle bug in production, one can enable them to see exactly what is going on, without having to restart the program.
It may make sense to log errors with an email to the developers, but if there are lots of errors the developers are already aware of,
they might decide to temporarily log errors to a file instead, again without having to restart the program.

It is important to think about privacy when writing logging code.
Logging the full contents of every request, for instance, might lead to logging plain text passwords for a "user creation" function.
If this log is kept in a file, and the server is hacked, the attackers will have a nice log of every password ever set.

### Debug-only code

What about defensive programming checks and logs that are too slow to reasonably enable in production?
For instance, if a graph has as invariant "no node has more than 2 edges", but the graph typically has millions of nodes, what to do?

This is where _debug-only_ code comes in.
Programming languages, their runtimes, and frameworks typically offer ways to run code only in debug mode, such as when running automated tests.

For instance, in Python, one can write an `if __debug__:` block, which will execute only when the code is not optimized.

It's important to double-check what "debug" and "optimized" mean in any specific context.
For instance, in Python `__debug__` is `True` by default, unless the interpreter is run with the `-O` switch, for "optimize".
In Java, assertions are debug-only code, but they are disabled by default and can be enabled with the `-ea` switch.
Scala has multiple levels of debug-only code that are all on by default but can be selectively disabled with the `-Xelide-below` switch.

Even more importantly, before writing debug-only code, think hard about what is "reasonable" to enable in production given the workloads you have.
Spending half a second checking an invariant is fine in a piece of code that will take seconds to run because it makes many web requests, for instance,
even though half a second is a lot in CPU time.

Keep in mind what [Tony Hoare](https://en.wikipedia.org/wiki/Tony_Hoare), one of the pioneers of computer science and especially programming languages and verification,
once said in his ["Hints on Programming Language Design"](https://dl.acm.org/doi/abs/10.5555/63445.C1104368):
"_It is absurd to make elaborate security checks on debugging runs, when no trust is put in the results,_
_and then remove them in production runs, when an erroneous result could be expensive or disastrous._
_What would we think of a sailing enthusiast who wears his life-jacket when training on dry land_
_but takes it off as soon as he goes to sea?_"


## Summary

In this lecture, you learned:
- How to write readable code: naming, formatting, comments, conventions
- How to debug code: reproducing bugs, using a debugger
- How to write debuggable code: defensive programming, logging, debug-only code

You can now check out the [exercises](exercises/)!

_Source: CS-305, Software engineering_

# Asynchrony

You're writing an app and want it to download data and process it, so you write some code:
```java
data = download();
process(data);
```
This works fine if both of these operations finish in the blink of an eye... but if they take more than that, since these operations are all your code is doing,
your user interface will be frozen, and soon your users will see the dreaded "application is not responding" dialog from the operating system.

Even if your app has no user interface and works in the background, consider the following:
```java
firstResult = computeFirst();
secondResult = computeSecond();
compare(firstResult, secondResult);
```
If `computeFirst` and `computeSecond` are independent, the code as written is likely a waste of resources, since modern machines have multiple
CPU cores and thus your app could have computed them in parallel before executing the comparison on both results.

Even if you have a single CPU core, there are other resources, such as the disk:
```java
for (int i = 0; i < 100; i++) {
    chunk = readFromDisk();
    process(chunk);
}
```
As written, this code will spend some of its time reading from the disk, and some of its time processing the data that was read,
but never both at the same time, which is a waste of resources and time.

These examples illustrate the need for _asynchronous code_.
For instance, an app can start a download in the background, show a progress bar while the download is ongoing, and process the data once the download has finished.
Multiple computations can be executed concurrently.
A disk-heavy program can read data from the disk while it is processing data already read.


## Objectives

After this lecture, you should be able to:
- Understand _asynchrony_ in practice
- Build async code with _futures_
- Write _tests_ for async code
- _Design_ async software components


## What are async operations?

Asynchronous code is all about starting an operation without waiting for it to complete. Instead, one schedules what to do after the completion.
Asynchronous code is _concurrent_, and if resources are available it can also be _parallel_, though this is not required.

There are many low-level primitives you could use to implement asynchrony: threads, processes, fibers, and so on.
You could then use low-level interactions between those primitives, such as shared memory, locks, mutexes, and so on.
You could do all of that, but low-level concurrency is _very hard_.
Even experts who have spent years understanding hardware details regularly make mistakes.
This gets even worse with multiple kinds of hardware that provide different guarantees, such as whether a variable write is immediately visible to other threads or not.

Instead, we want a high-level goal: do stuff concurrently.
An asynchronous function is similar to a normal function, except that it has two points of return: synchronously, to let you know the operation has started,
and asynchronously, to let you know the operation has finished.
Just like a synchronous function can fail, an asynchronous function can fail, but again at two points: synchronously, to let you know the operation cannot even start,
perhaps because you gave an invalid argument, and asynchronously, to let you know the operation has failed, perhaps because there was no Internet connection even after retrying in the background.

We would like our asynchronous operations to satisfy three goals: they should be composable, they should be completion-based, and they should minimize shared state.
Let's see each of these three.

### Composition

When baking a cake, the recipe might tell you "when the melted butter is cold _and_ the yeast has proofed, _then_ mix them to form the dough".
This is a kind of composition: the recipe isn't telling you to stand in front of the melted butter and watch it cool, rather it lets you know what operations must finish for the next step and what that next step is.
You are free to implement the two operations however you like; you could stand in front of the butter, or you could go do something else and come back later.

Composition is recursive: once you have a dough, the next step might be "when the dough has risen _and_ the filling is cold, _then_ fill the dough".
Again, this is a composition of asynchronous operations into one big operation representing making a filled dough.

The two previous examples were _and_, but _or_ is also a kind of composition. The recipe might tell you "bake for 30 minutes, _or_ until golden brown".
If the cake looks golden brown after 25 minutes, you take it out of the oven, and forget about the 30 minutes.

One important property of composition is that errors must implicitly compose, too.
If the ripe strawberries you wanted to use for the filling splash on the floor, your overall "bake a cake" operation has failed.
The recipe doesn't explicitly tell you that, because it's a baking convention that if any step fails, you should stop and declare the whole operation a failure, unless you can re-do the failed operation from scratch.
Maybe you have other strawberries you can use instead.

### Completion

If you have a cake you'd like to bake, you could ask a baker "please let me know when you've finished baking the cake", which is _completion-based_.

Alternatively, you could ask them "please let me know when the oven is free" and then bake it yourself, which is a bit lower level and gives you more control.
This is _readiness-based_, and some older APIs, such as those found on Unix, were designed that way.

The problem with readiness-based operations is that they lead to inefficiencies due to concurrent requests.
The baker just told you the oven was ready, but unbeknownst to you, another person has put their cake in the oven before you've had time to get there.
Now your "put the cake in the oven" operation will fail, and you have to ask the baker to let you know when the oven is free again, at which point the same problem could occur.

### Minimizing shared state

Multiple asynchronous operations might need to read and write to the same shared state.
For instance, the task of counting all elements in a large collection that satisfy some property could be parallelized by dividing the collection into chunks and processing multiple chunks in parallel.
Each sub-task could then increment a global counter after every matching element it sees.

However, shared state introduces an exponential explosion in the number of paths through a piece of software.
There is one path in which the code of sub-task 1 accesses the shared state first, and one in which sub-task 2 accesses it first, and so on for each sub-task and for each shared state access.
Even with a simple counter, one already needs to worry about atomic updates.
With more complex shared state, one needs to worry about potential bugs that only happen with specific "interleavings" of sub-task executions.
For instance, there may be a crash only if sub-task 3 accessed shared state first, followed by 2, followed by 3 again, followed by 1.
This may only happen to a small fraction of users.
Then one day, a developer speeds up the code of sub-task 3, and now the bug happens more frequently because the problematic interleaving is more common.
The next day, a user gets an operating system update which slightly changes the scheduling policy of threads in the kernel. Now the bug happens all the time for this user,
because the operating system happens to choose an order of execution that exposes the bug on that user's machine.

Asynchronous operations must minimize shared state, for instance by computing a local result, then merging all local results into a single global result at the end.

One extreme form of minimizing state is _message passing_: operations send each other messages rather than directly accessing the same state.
For instance, operations handling parts of a large collection could do their work and then send their local result as a message to another "monitor" operation which is responsible for merging these results.
Message passing is the standard way to communicate for operations across machines already, so it can make sense to do it even on a single machine.
One interesting use of this is the [Multikernel](https://www.sigops.org/s/conferences/sosp/2009/papers/baumann-sosp09.pdf), an operating system that can run on multiple local CPUs each of a different architecture,
because it uses only message passing and never shared memory, thus what each local thread runs on is irrelevant.


## How can we write maintainable async code?

A simple and naïve way to write asynchronous code is with _callbacks_.
Instead of returning a value, functions take an additional callback argument that is called with the value once it's ready:
```java
void send(String text, Consumer<String> callback);

send("hello", reply -> {
    System.out.println(reply);
});
```

One can call an asynchronous function within a callback, leading to nested callbacks:
```java
send("hello", reply -> {
    send("login", reply2 -> {
        // ...
    });
});
```

But this becomes rather messy.
And we haven't even discussed errors yet, which need another callback:
```java
void send(
    String text,
    Consumer<String> callback,
    Consumer<Throwable> errorCallback
);
```

Code that uses callbacks frequently ends in "callback hell":
```java
send("hello", reply -> {
    send("login", reply2 -> {
        send("join", reply3 -> {
            send("msg", reply4 -> {
                send("msg", reply5 -> {
                    send("logout", reply6 -> {
                        // ...
                    });
                });
            });
        });
    });
});
```
This code is hard to read and to maintain, because callbacks are _too_ simple and low-level.
They do not provide easy ways to be composed, especially when dealing with errors as well.
The resulting code is poorly structured. Just because something is simple does not mean it is good!

Instead of overly-simple callbacks, modern code uses _futures_.
A future is an object that represents an operation in one of three states: in progress, finished successfully, or failed.

In Java, futures are represented with the `CompletableFuture<T>` class, where `T` is the type of the result.
Since `void` is not a type, Java has the type `Void` with a capital `V` to indicate no result, thus an asynchronous operation that would return `void` if it was synchronous instead returns a `CompletableFuture<Void>`.
`CompletableFuture`s can be composed with synchronous operations and with asynchronous operations using various methods.

### Composing `CompletableFuture`s

The `thenAccept` method creates a future that composes the current one with a synchronous operation.
If the current future fails, so does the overall future; but if it succeeds, the overall future represents applying the operation to the result.
And if the operation fails, the overall future has failed:
```java
CompletableFuture<String> future = // ...
return future.thenAccept(System.out::println);
```

Sometimes a failure can be replaced by some kind of "backup" value, such as printing the error message if there is one instead of not printing anything.
The `exceptionally` method returns a future that either returns the result of the current future if there is one, or transforms the future's exception into a result:
```java
return future.exceptionally(e -> e.getMessage())
             .thenAccept(System.out::println);
```

One may wish to not potentially wait forever, such as when making a network request if the server is very slow
or the network connectivity is just good enough to connect but not good enough to transmit data at a reasonable rate.
Implementing timeouts properly is difficult, but thankfully the Java developers provided a method to do it easily:
```java
return future.orTimeout(5, TimeUnit.SECONDS)
             .exceptionally(e -> e.getMessage())
             .thenAccept(System.out::println);
```
The returned combined future represents:
- Printing the result of the original future, if it completes successfully within 5s
- Printing the failure of the original future, if it fails within 5s
- Printing the message from the timeout error, if the original future takes more than 5s

Futures can be composed with asynchronous operations too, such as with `thenCompose`:
```java
CompletableFuture<String> getMessage();
CompletableFuture<Void> log(String text);

return getMessage().thenCompose(msg -> log(msg));
```
The returned future represents first getting the message, then logging it, or failing if either of these operations fail.

Composing futures in parallel and doing something with both results can be done with the `thenCombine` method:
```java
CompletableFuture<String> computeFirst();
CompletableFuture<String> computeSecond();

return computeFirst().thenCombine(computeSecond(), (a, b) -> a + b);
```
The resulting future in this example represents doing both operations concurrently and returning the concatenation of their results, or failing if either future fails or if the combination operation fails.

One can compose more than two futures into one that executes them all concurrently with a method such as `allOf`:
```java
CompletableFuture<Void>[] uploads = ...;
for (int n = 0; n < uploads.length; n++) {
    uploads[n] = ...;
}
return CompletableFuture.allOf(uploads);
```
In the loop, each upload will start, and by the time the loop has ended, some of the uploads may have finished while others may still be in progress.
The resulting future represents the combined operation.
One can also use `anyOf` to represent the operation of waiting for any one of the futures to finish and ignoring the others.
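As a self-contained illustration of these combinators, the following sketch uses `CompletableFuture.completedFuture` to build already-finished futures, so no background threads are needed; `join()` is acceptable here only because this is a `main` method:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    public static void main(String[] args) {
        // Two futures that are already complete.
        CompletableFuture<String> first = CompletableFuture.completedFuture("Hello, ");
        CompletableFuture<String> second = CompletableFuture.completedFuture("world!");

        // thenCombine: wait for both, then combine the two results.
        CompletableFuture<String> combined = first.thenCombine(second, (a, b) -> a + b);
        System.out.println(combined.join()); // prints "Hello, world!"

        // allOf: a future that completes once every given future has completed.
        CompletableFuture<Void> all = CompletableFuture.allOf(first, second);
        all.join();
        System.out.println("all done");
    }
}
```

With real asynchronous work, the only change would be how `first` and `second` are created; the composition code would stay exactly the same.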
Refer to the [Javadoc](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/CompletableFuture.html) for the exact semantics of these operations on failures.

_A quick warning about rate limits._
We have just seen code that allows one to easily upload a ton of data at the same time.
Similar code can be written with an operation to download data, or to call some API that triggers an operation.
However, in practice, many websites will rate limit users to avoid one person using too many resources.
For instance, at the time of this writing, GitHub allows 5000 requests per hour.
Doing too many operations in a short time may get you banned from an API, so look at the documentation of any API you're using carefully before starting hundreds of futures representing API calls.

### Creating `CompletableFuture`s

One can create a future and complete it after some work in a background thread like so:
```java
var future = new CompletableFuture<String>();

new Thread(() -> {
    // ... do some work ...
    future.complete("hello");
}).start();

return future;
```
Instead of `complete`, one can use `completeExceptionally` to fail the future instead, giving the exception that caused the failure as an argument instead of the result.
This code forks execution into two logical threads: one that creates `future`, creates and starts the thread, and returns `future`; and one that contains the code in the `Thread` constructor.
By the time `return future` executes, the future may already have been completed, or it may not.
The beauty of futures is that the code that obtains the returned future doesn't have to care.
Instead, it will use composition operations, which will do the right thing whether the future is already completed, still in progress, or will fail.

Creating a future representing a background operation is common enough that there is a helper method for it:
```java
return CompletableFuture.supplyAsync(() -> {
    // ... do some work ...
    return "hello"; // or throw an exception to fail the future
});
```

Sometimes one wants to create a future representing an operation that is already finished, perhaps because the result was cached. There is a method for that too:
```java
return CompletableFuture.completedFuture("hello");
```

However, creating futures is a low-level operation.
Most code composes futures created by lower-level layers.
For instance, a TCP/IP stack might create futures.
Another reason to create futures is when adapting old asynchronous code that uses other patterns, such as callbacks.

---
#### Exercise
Take a look at the four methods in [`Basics.java`](exercises/lecture/src/main/java/Basics.java), and complete them one by one, based on the knowledge you've just acquired.
You can run `App.java` to see if you got it right.
+<details>
+<summary>Suggested solution (click to expand)</summary>
+
+- Printing today's weather is done by composing `Weather.today` with `System.out.println` using `thenAccept`
+- Uploading today's weather is done by composing `Weather.today` with `Server.upload` using `thenCompose`
+- Printing either weather is done using `acceptEither` on two individual futures, both of which use `thenApply` to prefix their results, with `System.out.println` as the composition operation
+- Finally, `Weather.all` should be composed with `orTimeout` and `exceptionallyCompose` to use `Weather.today` as a backup
+
+We provide a [solution example](exercises/solutions/lecture/src/main/java/Basics.java).
+
+</details>
+---
+
+### Sync over async
+
+Doing the exercise above, you've noticed `App.java` uses `join()` to block until a future finishes and either return the result or throw the error represented by the future.
+There are other such ""sync over async"" operations, such as `isDone()` and `isCompletedExceptionally()` to check a future's status.
+
+""Sync over async"" operations are useful specifically to use asynchronous operations in a context that must be synchronous,
+typically because you are working with a framework that expects a synchronous operation.
+Java's `main()` method must be synchronous, for instance.
+JUnit also expects `@Test`s to be synchronous.
+This is not true of all frameworks, e.g., C# can have asynchronous main methods, and most C# testing frameworks support running asynchronous test methods.
+
+You might also need to use methods such as `isDone()` if you are implementing your own low-level infrastructure code for asynchronous functions, though that is a rare scenario.
+
+In general, everyday async functions must _not_ use `join()` or any other method that attempts to synchronously wait for or interact with an asynchronous operation.
+If you call any of these methods on a future outside of a top-level method such as `main` or a unit test, you are most likely doing something wrong.
+For instance, if in a button click handler you call `join()` on a `CompletableFuture` representing the download of a picture, you will freeze the entire app until the picture is downloaded.
+Instead, the button click handler should compose that future with the operation of processing the picture, and the app will continue working while the download and processing happen in the background.
+
+### Sync errors
+
+Another form of synchrony in asynchronous operations is synchronous errors, i.e., signalling that an asynchronous operation could not even be created.
+Unlike asynchronous errors contained in futures, which in Java are handled with methods such as `exceptionally(...)`, synchronous errors are handled with the good old `try { ... } catch (...) { ... }` statement.
+Thus, if you call a method that could fail synchronously or asynchronously and you want to handle both cases, you must write the error handling code twice.
+
+Only use sync errors for bugs, i.e., errors that are not recoverable and indicate something has gone wrong such as an `IllegalArgumentException`:
+```java
+CompletableFuture<Void> send(String s) {
+    if (s == null) {
+        // pre-condition violated, the calling code has a bug
+        throw new IllegalArgumentException(...);
+    }
+    ...
+}
+```
+You will thus not have to duplicate error handling logic, because you will not handle unrecoverable errors.
+
+**Never** do something like this in Java:
+```java
+CompletableFuture<Void> send(String s) {
+    if (internetUnavailable) {
+        // oh well, we already know it'll fail, we can fail synchronously...?
+        throw new IOError(...);
+    }
+    ...
+}
+```
+This forces all of your callers to handle both synchronous and asynchronous exceptions.
+Instead, return a `CompletableFuture` that has failed already:
+```java
+    if (internetUnavailable) {
+        return CompletableFuture.failedFuture(new IOError(...));
+    }
+```
+This enables your callers to handle that exception just like any other asynchronous failure: with future composition.
+
+Another thing you should **never** do is return a `null` future:
+```java
+    if (s.equals("""")) {
+        // nothing to do, might as well do nothing...?
+        return null;
+    }
+```
+While Java allows any value of a non-primitive type to be `null`, ideally `CompletableFuture` would disallow this entirely, as once again this forces all of your callers to explicitly handle the `null` case.
+Instead, return a `CompletableFuture` that is already finished:
+```java
+    if (s.equals("""")) {
+        return CompletableFuture.completedFuture(null);
+    }
+```
+
+### Cancellation
+
+One form of failure that is expected is _cancellation_: a future might fail because it has been explicitly canceled.
+Canceling futures is a common operation to avoid wasting resources.
+For instance, if a user has navigated away from a page which was still loading pictures, there is no point in finishing the download of these pictures,
+decoding them from raw bytes, and displaying them on a view that is no longer on screen.
+Cancellation is not only about ignoring a future's result, but actually stopping whatever operation was still ongoing.
+
+One may be tempted to use a method such as `Thread.stop()` in Java, but this method is obsolete for a good reason: forceful cancellation is dangerous.
+The thread could for instance be holding a lock when it is forcefully stopped, preventing any other thread from ever acquiring the lock.
+(`CompletableFuture` has a `cancel` method with a `mayInterruptIfRunning` parameter, but this parameter only exists because the method implements the more general interface `Future`;
+as the documentation explains, the parameter is ignored because forcefully interrupting a computation is not a good idea in general.)
+
+Instead, cancellation should be _cooperative_: the operation you want to cancel must be aware that cancellation is a possibility, and provide a way to cancel at well-defined points in the operation.
+
+Some frameworks have built-in support for cancellation, but in Java you have to do it yourself.
+One way to do it is to pass around an `AtomicBoolean` value, which serves as a ""box"" to pass by reference a flag indicating whether a future should be canceled.
+Because this is shared state, you must be disciplined in its use: only write to it outside of the future, and only read from it inside of the future.
+The computation in the future should periodically check whether it should be canceled, and if so throw a cancellation exception instead of continuing:
+```java
+for (int step = 0; step < 100; step++) {
+    if (cancelFlag.get()) {
+        throw new CancellationException();
+    }
+    ...
+}
+```
+
+### Progress
+
+If you've implemented cancellation and let your users cancel ongoing tasks, they might ask themselves ""should I cancel this or not?"" after a task has been running for a while.
+Indeed, if a task has been running for 10 minutes, one may want to cancel it if it's only 10% done, whereas if it's 97% done it's probably better to wait.
+
+This is where _progress_ indication comes in.
+You can indicate progress in units that make sense for a given operation, such as ""files copied"", ""bytes downloaded"", or ""emails sent"".
+
+Some frameworks have built-in support for progress, but in Java you have to do it yourself.
+One way to do it is to pass around an `AtomicInteger` value, which serves as a ""box"" to pass by reference a counter indicating the progress of the operation relative to some maximum.
+Again, because this is shared state, you must be disciplined in its use, this time the other way around: only read from it outside of the future, and only write to it from inside of the future.
+The computation in the future should periodically update the progress:
+```java
+for (int step = 0; step < 100; step++) {
+    progress.set(step);
+    ...
+}
+progress.set(100);
+```
+
+It is tempting to define progress in terms of time, such as ""2 minutes remaining"".
+However, in practice, the duration of most operations cannot be easily predicted.
+The first 10 parts of a 100-part operation might take 10 seconds, and then the 11th takes 20 seconds on its own; how long will the remaining 89 take?
+Nobody knows. This leads to poor estimates.
+ +One way to show something to users without committing to an actual estimate is to use an ""indeterminate"" progress bar, such as a bar that continuously moves from left to right. +This lets users know something is happening, meaning the app has not crashed, and is typically enough if the operation is expected to be short so users won't get frustrated waiting. + +--- +#### Exercise +Take a look at the three methods in [`Advanced.java`](exercises/lecture/src/main/java/Advanced.java), and complete them one by one, based on the knowledge you've just acquired. +Again, you can run `App.java` to see if you got it right. + +
+<details>
+<summary>Suggested solution (click to expand)</summary>
+
+- `Server.uploadBatch` should be modified to support cancellation as indicated above, and then used in a way that sets the cancel flag if the operation times out using `orTimeout`
+- Converting `OldServer.download` can be done by creating a `CompletableFuture` and passing its `complete` and `completeExceptionally` methods as callbacks
+- Reliable downloads can be implemented in the equivalent of a recursive function: `download` composed with `reliableDownload` if it fails, using `exceptionallyCompose`
+
+We provide a [solution example](exercises/solutions/lecture/src/main/java/Advanced.java).
+
+</details>
+---
+
+### In other languages
+
+We have seen futures in Java as `CompletableFuture`, but other languages have roughly the same concept with different names such as `Promise` in JavaScript,
+`Task` in C#, and `Future` in Rust. Importantly, operations on these types don't always have the exact same semantics, so be sure to read their documentation before using them.
+
+Some languages have built-in support for asynchronous operations.
+Consider the following C# code:
+```csharp
+Task<string> GetAsync(string url);
+
+async Task WriteAsync()
+{
+    var text = await GetAsync(""epfl.ch"");
+    Console.WriteLine(text);
+}
+```
+`Task` is C#'s future equivalent, so `GetAsync` is an asynchronous operation that takes in a string and returns a string.
+`WriteAsync` is marked `async`, which means it is an asynchronous operation.
+When it calls `GetAsync`, it does so with the `await` operator, which is a way to compose futures while writing code that looks and feels synchronous.
+To be clear, `WriteAsync` is an asynchronous operation and does not block. It is equivalent to explicitly returning `GetAsync(""epfl.ch"")` composed with `Console.WriteLine`, C#'s equivalent of `System.out.println`.
+But because the language supports it, the compiler does all of the future composition, so an engineer can write straightforward asynchronous code without having to explicitly reason about it.
+It is still possible to explicitly compose futures, such as with `Task.WhenAll` to compose many futures into one.
+Handling failures is also easier:
+```csharp
+try
+{
+    var text = await GetAsync(""epfl.ch"");
+    Console.WriteLine(text);
+}
+catch (Exception) { ... }
+```
+Whether `GetAsync` fails synchronously or asynchronously, the handling is in the `catch` block, and the compiler takes care of transforming this code into code that composes futures in the expected way.
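For comparison, here is a sketch of roughly what that `await` corresponds to in Java with explicit composition; `getAsync` is a hypothetical stand-in that completes immediately so the example is self-contained:

```java
import java.util.concurrent.CompletableFuture;

public class AwaitEquivalent {
    // Hypothetical stand-in for C#'s GetAsync, completing immediately
    // so the sketch is self-contained
    static CompletableFuture<String> getAsync(String url) {
        return CompletableFuture.completedFuture("contents of " + url);
    }

    // Conceptually what `await` does in WriteAsync: compose the future
    // with the code that follows the await
    static CompletableFuture<Void> writeAsync() {
        return getAsync("epfl.ch")
                .thenAccept(System.out::println);
    }

    public static void main(String[] args) {
        writeAsync().join(); // joining is fine at the very edge of a program
    }
}
```

The `async`/`await` version and this version describe the same composition; the language support merely lets the compiler write the `thenAccept` for you.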
+
+Similarly, Kotlin uses ""coroutines"", a form of lightweight threads, which can be suspended at calls to `suspend` functions:
+```kotlin
+suspend fun getAsync(url: String): String
+
+suspend fun writeAsync() {
+    val content = getAsync(""epfl.ch"")
+    println(content)
+}
+```
+Once again, this code is fully asynchronous: when `writeAsync` calls `getAsync`, it can get _suspended_, and the language runtime will then schedule another coroutine that is ready to run,
+such as one that updates the user interface with a progress bar.
+Kotlin can also handle both kinds of failures with a `catch` block.
+
+### Futures as functions
+
+Now that we've seen different kinds of futures, let's take a step back and think of what a future is at a high level.
+There are two fundamental operations for futures: (1) turning a value, or an error, into a future; and
+(2) combining a future with a function handling its result into a new future.
+
+The first operation, creation, is two separate methods in Java's `CompletableFuture`: `completedFuture` and `failedFuture`.
+The second operation, transformation, is the `thenCompose` method in Java, which takes a future and a function from the result to another future and returns the aggregate future.
+
+You may have already seen this pattern before.
+Consider Java's `Optional`. One can create a full or empty optional with the `of` and `empty` methods, and transform it with the `flatMap` method.
+
+In the general case, there are two operations on a ""value of T"" type: create a `Value<T>` from a `T`, and map a `Value<T>` with a `T -> Value<U>` function to a `Value<U>`.
+Mathematically-inclined folks use different names for these. ""Create"" is called ""Return"", ""Map"" is called ""Bind"", and the type is called a _monad_.
+
+What's the use of knowing about monads?
+You can think of monads as a kind of design pattern, a ""shape"" for abstractions.
+If you need to represent some kind of ""box"" that can contain values, such as an optional, a future, or a list, you already know many useful methods you should provide:
+the ones you can find on optionals, futures, and lists.
+
+Consider the following example from the exercises:
+```java
+return Weather.today()
+    .thenApply(w -> ""Today: "" + w)
+    .acceptEither(
+        Weather.yesterday()
+            .thenApply(w -> ""Yesterday: "" + w),
+        System.out::println
+    );
+```
+You know this code operates on futures because you've worked with the codebase and you know what `Weather.today()` and `Weather.yesterday()` return.
+But what if this were a different codebase, one where these methods return optionals instead?
+The code still makes sense: if there is a weather report for today, transform it and print it, otherwise get the weather for yesterday, transform it and print it.
+Or maybe they return lists?
+Transform all elements of today's prediction, and if there are none, use the elements of yesterday's predictions instead, transformed; then print all these elements.
+
+
+## How should we test async code?
+
+Consider the following method:
+```java
+void printWeather() {
+    getString(...)
+        .thenApply(parseWeather)
+        .thenAccept(System.out::println);
+}
+```
+How would you test it?
+
+The problem is that immediately after calling the method, the `getString` future may not have finished yet, so one cannot test its side effect yet.
+It might even never finish if there is an infinite loop in the code, or if it expects a response from a server that is down.
+
+One commonly used option is to sleep via, e.g., `Thread.sleep`, and then assume that the operation must have finished.
+**Never, ever, ever, ever sleep in tests for async code**.
+It slows tests down immensely since one must sleep the whole duration regardless of how long the operation actually takes,
+it is brittle since a future version of the code may be slower,
+and most importantly it's unreliable since the code could sometimes take more time due to factors out of your control,
+such as continuous integration having fewer resources because many pull requests are open and being tested.
+
+The fundamental problem with the method above is its return value, or rather the lack of a return value.
+Tests cannot access the future, they cannot even name it, because the method does not expose it.
+To be testable, the method must be refactored to expose the future:
+```java
+CompletableFuture<Void> printWeather() {
+    return getString(...)
+        .thenApply(parseWeather)
+        .thenAccept(System.out::println);
+}
+```
+It is now possible to test the method using the future it returns.
+One option to do so is this:
+```java
+printWeather().thenApply(r -> {
+    // ... test? ...
+});
+```
+However, this is a bad idea because the lambda will not run if the future fails or if it never finishes, so the test will pass even though the future has not succeeded.
+In fact, the test does not even wait for the future to finish, so the test may pass even if the result is wrong because the lambda will execute too late!
+To avoid these problems, one could write this instead:
+```java
+printWeather().join();
+// ... test ...
+```
+But this leaves one problem: if the code is buggy and the future never completes, `join()` will block forever and the tests will never complete.
+To avoid this problem, one can use the timeout method we saw earlier:
+```java
+printWeather()
+    .orTimeout(5, TimeUnit.SECONDS)
+    .join();
+// ... test ...
+```
+There is of course still the issue, unrelated to asynchrony, that the method has an implicit dependency on the standard output stream, which makes testing difficult.
+You can remove this dependency by returning the weather `String` from the future instead of printing it, which makes tests much simpler:
+```java
+String weather = getWeather()
+    .orTimeout(5, TimeUnit.SECONDS)
+    .join();
+// ... test weather ...
+```
+
+Exposing the underlying future is the ideal way to test asynchronous operations.
+However, this is not always feasible. Consider an app for which we want to test a button click ""end-to-end"", i.e., by making a fake button click and testing the resulting changes in the user interface.
+The button click handler must typically have a specific interface, including a `void` return type:
+```java
+@Override
+void onClick(View v) { ... }
+```
+One cannot return the future here, since the `void` return type is required to comply with the button click handler interface.
+Instead, one can add an explicit callback for tests:
+```java
+@Override
+void onClick(View v) {
+    // ...
+    callback.run();
+}
+
+public void setCallback(...) { ... }
+```
+Tests can then set a callback and proceed as usual:
+```java
+setCallback(() -> {
+    // ... can we test here?
+});
+// ... click the button ...
+```
+However, this now raises another problem: how to properly test callback-based methods?
+One way would be to use low-level classes such as latches or barriers to wait for the callback to fire.
+But this is a reimplementation of the low-level code the authors of `CompletableFuture` have already written for us, so we should be reusing that instead!
+```java
+var future = new CompletableFuture<Void>();
+setCallback(() -> future.complete(null));
+// ... click the button ...
+future.orTimeout(5, TimeUnit.SECONDS)
+    .join();
+```
+We have reduced the problem of testing callbacks to a known one, testing futures, which can then be handled as usual.
+
+---
+#### Exercise
+Take a look at the three tests in [`WeatherTests.java`](exercises/lecture/src/test/java/WeatherTests.java), and complete them one by one, based on the knowledge you've just acquired.
+
+<details>
+<summary>Suggested solution (click to expand)</summary>
+
+- As we did above, call `orTimeout(...)` then `join()` on `Weather.today()`, then assert that its result is `""Sunny""`
+- Create a `CompletableFuture`, call its `complete` method in the callback of `WeatherView`, then wait for it with a timeout after executing `WeatherView.clickButton()`, and test the value of `WeatherView.weather()`
+- There are multiple ways to do the last exercise; one way is to return the combination of the two futures with `allOf` in `printWeather`, and replace `System.out::println` with a `Consumer<String>` parameter,
+  then test it with a consumer that adds values to a string; another would be a more thorough refactoring of `printWeather` that directly returns a `CompletableFuture<List<String>>`
+
+We provide a [solution example](exercises/solutions/lecture/src/test/java/WeatherTests.java).
+
+</details>
+
+---
+
+## How does asynchrony interact with software design?
+
+Which operations should be sync and which should be async, and why?
+This is a question you will encounter over and over again when engineering software.
+
+One naïve option is to make everything asynchronous. Even `1 + 1` could be `addAsync(1, 1)`! This is clearly going too far.
+
+It's important to remember that _asynchrony is viral_: if a method is async, then the methods that call it must also be async, though it can itself call sync methods.
+A sync method that calls an async method is poor design outside of the very edges of your system, typically tests and the main method,
+because it will need to block until the async method is done, which can introduce all kinds of issues such as UI freezes or deadlocks.
+
+This also applies to inheritance: if a method might be implemented in an asynchronous way, but also has other synchronous implementations, it must be async.
+For instance, while fetching the picture for an album in a music player may be done from a local cache in a synchronous way, it will also sometimes be done by downloading the image from the Internet.
+Thus, you should make operations async if they are expected to be implemented in an asynchronous way, typically I/O operations.
+
+One policy example is the one in the [Windows Runtime APIs](https://learn.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/concurrency#asynchronous-operations-and-windows-runtime-async-functions),
+which Microsoft introduced with Windows 8.
+Any operation that has the potential to take more than 50 milliseconds to complete returns a future instead of a synchronous result.
+50ms may not be the optimal threshold for everything, but it is a reasonable and clear position to take that enables engineers to decide whether any given operation should be asynchronous.
+
+Remember the ""YAGNI"" principle: ""_You Aren't Gonna Need It_"".
+Don't make operations async because they might perhaps one day possibly maybe need to be async.
+Only do so if there is a clear reason to. Think of how painful, or painless, it would be to change from sync to async if you need it later.
+
+One important principle is to be consistent.
+Perhaps you are using an OS that gives you asynchronous primitives for reading and writing to files, but only a synchronous primitive to delete them.
+You could expose an interface like this:
+```java
+class File {
+    CompletableFuture<String> read();
+    CompletableFuture<Void> write(String text);
+    void delete();
+}
+```
+However, this is inconsistent and surprising to developers using the interface.
+Instead, make everything async, even if deletion has to be implemented by returning an already-completed future after synchronously deleting the file:
+```java
+class File {
+    CompletableFuture<String> read();
+    CompletableFuture<Void> write(String text);
+    CompletableFuture<Void> delete();
+}
+```
+This offers a predictable and clear experience to developers using the interface.
+
+What if you need to asynchronously return a sequence of results?
+For instance, what should the return type of a `downloadImagesAsync` method be?
+It could be `CompletableFuture<List<Image>>`, if you want to batch the results.
+Or it could be `List<CompletableFuture<Image>>`, if you want to parallelize the downloads.
+But if you'd like to return ""a sequence of asynchronous operations"", what should you return?
+
+Enter ""reactive"" programming, in which you react to events such as downloads, clicks, or requests, each of which is an asynchronous event.
+For instance, if you have a ""flow"" of requests, you could `map` it to a flow of users and their data, then `filter` it to remove requests from users without the proper access right, and so on.
+In other words, it's a monad! Thus, you already mostly know what operations exist and how to use them.
+Check out [ReactiveX.io](https://reactivex.io) if you're interested, with libraries such as [RxJava](https://github.com/ReactiveX/RxJava). + + +## Summary + +In this lecture, you learned: +- Asynchrony: what it is, how to use it, and when to use it +- Maintainable async code by creating and combining futures +- Testable async code with reliable and useful tests + +You can now check out the [exercises](exercises/)! +",CS-305: Software engineering +"# Infrastructure + +> **Prerequisites**: Before following this lecture, you should: +> - Install Git. On Windows, use [WSL](https://docs.microsoft.com/en-us/windows/wsl/install) as Git is designed mostly for Linux. +> On macOS, see [the git documentation](https://git-scm.com/download/mac). On Linux, Git may already be installed, or use your distribution's package manager. +> If you have successfully installed Git, running `git --version` in the command line should show a version number. +> - Create a GitHub account (you do not have to use an existing GitHub account, you can create one just for this course if you wish) +> - Set up an SSH key for GitHub by following [their documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account) +> - Tell Git who you are by running `git config --global user.name 'your_name'` with your name and `git config --global user.email 'your_email'` with the e-mail you used for GitHub +> - Choose an editor Git will open to write a summary of your changes with `git config --global core.editor 'your_editor'`, since Git defaults to `vi` which is hard to use for newcomers. +> On Windows with WSL you can use `notepad.exe`, which will open Windows's Notepad. +> On macOS you can use `open -e -W -n` which will open a new TextEdit window. +> On Linux you can use your distribution's built-in graphical text editor, or `nano`. 
+> +> If you use Windows with WSL, note that running `explorer.exe .` from the Linux command line will open Windows's Explorer in the folder your command-line is, which is convenient. +> +> Optionally, you may want to set the Git config setting `core.autocrlf` to `true` on Windows and `input` on Linux and macOS, +> so that Git converts between Unix-style line endings (`\n`) and Windows-style line separators (`\r\n`) automatically. + + +Where do you store your code, and how do you make changes to it? +If you're writing software on your own, this is not a problem, as you can use your own machine and change whichever files you want whenever you want. +But if you're working with someone else, it starts being problematic. +You could use an online cloud service where you store files, and coordinate who changes which file and when. +You could email each other changes to sets of files. +But this does not work so well when you have more people, and it is completely unusable when you have tens or hundreds of people working on the same codebase. +This is where infrastructure comes in. + + +## Objectives + +- Contrast old and new _version control_ systems +- Organize your code with the _Git_ version control system +- Write useful descriptions of code changes +- Avoid mistakes with _continuous integration_ + + +## What is version control? + +Before we talk about how to manage your code using a version control system, we must define some terms. + +A _repository_ is a location in which you store a codebase, such as a folder on a remote server. +When you make a set of changes to a repository, you are _pushing_ changes. +When you retrieve the changes that others have made from the repository, you are _pulling_ changes. + +A set of changes is called a _commit_. +Commits have four main components: who, what, when, and why. +""Who"" is the author of the commit, the person who made the changes. +""What"" is the contents of the commit, the changes themselves. 
+""When"" is the date and time at which the commit was made. This can be earlier than when the commit was actually pushed to a repository. +""Why"" is a message associated with the commit that explains why the changes were made, such as detailing why there was a bug and why the new code fixes the bug. +The ""why"" is particularly important because you will often have to look at old changes later and understand why they were made. + +Sometimes, a commit causes problems. Perhaps a commit that was supposed to improve performance also introduces a bug. +Version control systems allow you to _revert_ this commit, which creates a new commit whose contents are the reverse of the original one. +That is, if the original commit replaced ""X"" by ""Y"", then the revert commit replaces ""Y"" with ""X"". +Importantly, the original commit is not lost or destroyed, instead a new revert commit is created. + +Commits are put together in a _history_ of changes. +Initially, a repository is empty. Then someone adds some content in a commit, then more content in another commit, and so on. +The history of a repository thus contains all the changes necessary to go from nothing to the current state. +Some of these changes could be going back and forth, such as revert commits, or commits that replace code that some previous commit added. +At any time, any developer with access to the repository can look at the entire history to see who made what changes when and why. + +_1st generation_ version control systems were essentially a layer of automation on manual versioning. +As we mentioned earlier, if you are developing with someone else, you might put your files somewhere and coordinate who is changing what and when. +A 1st generation system helps you do that with fewer mistakes, but still fundamentally uses the same model. + +With 1st generation version control, if Alice wants to work on file A, she ""checks out"" the file. 
+At that point, the file is locked: Alice can change it, but nobody else can. If Bob wants to also check out file A, the system will reject his attempt. +Bob can, however, check out file B if nobody else is using it. +Once Alice is done with her work, she creates a commit with her changes, and releases the lock. At that point, Bob can check out file A and make his changes. + +1st generation version control systems thus act as locks at the granularity of files. +They prevent developers from making parallel changes to the same file, which prevents some mistakes but isn't very convenient. +For instance, Alice might want to modify function X in file A, while Bob wants to modify function Y in file A. +These changes won't conflict, but they still cannot do them in parallel, because locks in 1st generation version control are at the file granularity. + +Developers moved on from 1st generation systems because they wanted more control over _conflicts_. +When two developers want to work on the same file at the same time, they should be able to, as long as they can then _merge_ their changes into one unified version. +Merging changes isn't always possible automatically. If two developers changed the same function in different ways, for instance, they probably need to have a chat to decide which changes should be kept. + +Another feature that makes sense if a system can handle conflicts and merges is _branches_. +Sometimes, developers want to work on multiple copies of the codebase in parallel. +For instance, you might be working on some changes that improve performance, when a customer comes in with a bug report. +You could fix the bug and commit it with your performance changes, but the resulting commit is not convenient. +If you later need to revert the performance changes, for instance, you would also revert the bugfix because it's in the same commit. 
Instead, you create a branch for your performance changes, then you switch to a branch for the bugfix, and you can work on both in parallel.
When your bugfix is ready, you can _merge_ it into the "main" branch of the repository, and the same goes for the performance changes.
One common use of branches is for versions: you can release version 1.0 of your software, for instance, and create a branch representing the state of the repository for that version.
You can then work on the future version 2.0 in the "main" branch. If a customer reports a bug in version 1.0, you can switch to the branch for version 1.0, fix the bug, release the fix,
then go back to working on version 2.0. Your changes for version 1.0 did not affect your main branch, because you made them in another branch.

The typical workflow with branches for modern software is to create a branch starting from the main branch of the repository,
add some commits to the branch to fix a bug, add a feature, or perform whatever task the branch is for,
and then ask a colleague to review it. If the colleague asks for some changes, such as adding more code comments, you can add a commit to the branch with these changes.
Once your colleague is happy, you can merge the branch's commits into the main branch.
Then you can create another branch to work on something else, and so on. Your colleagues are themselves also working on their own branches.
This workflow means everyone can push whatever commits they want on their own branch without conflicting with others, even if their work isn't quite finished yet.
Often it is a good idea to _squash_ a branch's commits into a single commit when merging into the main branch.
This combines all of the branch's changes into one clean commit in the history of the main branch, rather than a series of commits that each make a few small changes but make no sense without each other.
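To make this concrete, here is a self-contained sketch of a squash merge that you can run in a scratch directory. All branch, file, commit message, and user names here are made up for illustration:

```sh
# Sketch: squash a branch's commits into one commit on main.
# All names are made up; run this in a scratch directory.
mkdir squash-demo && cd squash-demo
git init -q -b main
git config user.name 'Example' && git config user.email 'example@example.com'
echo 'Hello' > hello.txt
git add -A && git commit -q -m 'Initial commit'

git switch -q -c feature/greeting        # work happens on a branch...
echo 'Hello there' > hello.txt
git add -A && git commit -q -m 'WIP: tweak the greeting'
echo 'Hello, everyone' > hello.txt
git add -A && git commit -q -m 'Finish the greeting'

git switch -q main
git merge --squash feature/greeting      # ...then its combined changes are staged...
git commit -q -m 'Improve the greeting'  # ...as one clean commit on main
git log --oneline                        # just two commits: initial + squashed
```

The two work-in-progress commits never appear in the main branch's history; only the single squashed commit does.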
In the case of branches that represent versions, one sometimes needs to apply the same changes to multiple branches.
For instance, while developing version 2.0 in the main branch, you may find a bug, and realize that the bug also exists in version 1.0.
You can make a commit fixing the bug in version 2.0, and then _cherry pick_ that commit into the branch for version 1.0.
As long as the change does not conflict with other changes made in the version 1.0 branch, the version control system can copy your bugfix commit into a commit on the other branch.

_2nd generation_ version control systems were all about enabling developers to handle conflicts.
Alice can work on file A without needing to lock it, and Bob can also work on file A at the same time.
If Alice pushes her changes first, the system will accept them, and when Bob then wants to push his changes, two things can happen.
One possibility is that the changes can be merged automatically, for instance because they affect two different parts of the file.
The other possibility is that the changes conflict and must be merged manually. Bob then has to choose what to do, perhaps by asking Alice, and produce one "merged" version of the file that can be pushed.

The main remaining disadvantage of 2nd generation version control is its centralization.
There is one repository that developers work with, hosted on one server.
Committing changes requires an Internet connection to that server.
This is a problem if the server is down, a developer is in a place without Internet access, or any other issue prevents a developer from reaching the server.

_3rd generation_ version control systems are all about decentralization.
Every machine has its own repository. It is not a "backup" or a "replica" of some "main" repository, but just another clone of the repository.
Developers can make commits locally on their own repository, then push these commits to other clones of the repository, such as one hosted on a server.
Developers can also have multiple branches locally, with different commits in each, and push some or all of these branches to other clones of the repository.
This all works as long as the repositories have compatible histories. That is, one cannot push a change to a repository that isn't based on the same history as one's local repository.

In practice, teams typically agree on one "main" repository that they all push commits to, and work locally on their own clone of that repository.
While from the version control system's point of view all repository clones are equal, it is convenient for developers to agree on one place where everyone puts their changes.

The main version control system in use today is _Git_.
Git was invented by Linus Torvalds, the creator of Linux, because he was tired of the problems with the previous version control system he used for Linux development.
There are other 3rd generation version control systems, such as Mercurial and Bazaar, but Git is by far the most used.

Many developers use public websites to host the "main" repository clone of their projects.
The most famous these days is GitHub, which uses Git but is not technically related to it.
GitHub not only stores a repository clone, but can also host a list of "issues" for the repository, such as bugs and feature requests, as well as other data such as a wiki for documentation.
There are other websites with similar features, such as GitLab and Bitbucket, though they are not as popular.

An example of a project developed on GitHub is [the .NET Runtime](https://github.com/dotnet/runtime), which is developed mainly by Microsoft employees and entirely using GitHub.
Conversations about bugs, feature requests, and code reviews happen in the open, on GitHub.


## How does one use Git?
Now that we've seen the theory, let's do some practice!
You will create a repository, make some changes, and publish it online. Then we'll see how to contribute to an existing online repository.

Git has a few basic everyday commands, which we will see now, and many advanced commands we won't discuss here.
You can always look up commands on the Internet, both basic and advanced ones.
You will eventually remember the basics after using them enough, but there is no shame at all in looking things up.

We will use Git on the command line for this tutorial, since it works the same everywhere.
However, for everyday tasks you may prefer a graphical user interface such as [GitKraken](https://www.gitkraken.com/), [GitHub Desktop](https://desktop.github.com/), or the Git support in your favorite IDE.

Start by creating a folder and _initializing_ a repository in that folder:

```sh
~$ mkdir example
~$ cd example
~/example$ git init
```

Git will tell you that you have initialized an empty Git repository in `~/example/.git/`.
This `.git/` folder is a special folder Git uses to store metadata. It is not part of the repository's contents, even though it is in the repository folder.

Let's create a file:

```sh
$ echo 'Hello' > hello.txt
```

We can now ask Git what it thinks is going on:

```sh
$ git status
...
Untracked files:
  hello.txt
```

Git tells us that it sees we added `hello.txt`, but the file isn't tracked yet.
That is, Git won't include it in a commit unless we explicitly ask it to. So let's do exactly that:

```sh
$ git add -A
```

This command asks Git to include all current changes in the next commit.
If we make more changes later, we will have to ask for those new changes to be tracked as well.
But for now, let's ask Git what it thinks:

```sh
$ git status
...
Changes to be committed:
  new file:   hello.txt
```

Now Git knows we want to commit that file.
So let's commit it:

```sh
$ git commit
```

This will open a text editor for you to type the commit message in. As we saw earlier, the commit message should describe _why_ the changes were made.
Often the very first commit in a repository sets up the basic file structure, so you could write `Initial commit setting up the file` or something similar.
You will then see output like this:

```sh
[...] Initial commit.
 1 file changed, 1 insertion(+)
 create mode 100644 hello.txt
```

Git repeats the commit message you wrote, here `Initial commit.`, and then tells you what changes happened. Don't worry about that `mode 100644`, it's an implementation detail.

Let's now make a change by adding one line:

```sh
$ echo 'Goodbye' >> hello.txt
```

We can ask Git for the details of the changes we made:

```sh
$ git diff
```

This will show a detailed list of the differences between the state of the repository as of the latest commit and its current state: we added one line saying `Goodbye`.

Let's track the changes we just made:

```sh
$ git add -A
```

What happens if we ask for a list of differences again?

```sh
$ git diff
```

...Nothing! Why? Because `diff` by default only shows changes that are not yet staged for the next commit.
There are three states for changes to files in Git: modified, staged, and committed.
Changes start out as modified, then with `git add -A` they become staged, and with `git commit` they are committed.
We have been using `-A` with `git add` to mean "all changes", but we could instead stage only specific changes, such as specific files or even parts of files.

In order to see staged changes, we have to ask for them explicitly:

```sh
$ git diff --staged
```

We can now commit our changes.
Because this is a small commit that does not need much explanation, we can use `-m` to write the commit message directly in the command:

```sh
$ git commit -m 'Say goodbye'
```

Let's now try out branches, by creating a branch and switching to it:

```sh
$ git switch -c feature/today
```

The slash in the branch name means nothing special to Git; it's only a common naming convention to distinguish the purpose of different branches.
For instance, you might have branches named `feature/delete-favorites` or `bugfix/long-user-names`.
But you could also name your branch `delete-favorites` or `bugfix/long/user/names` if you'd like, as long as everybody using the repository agrees on a convention for names.

Now make a change to the single line in the file, such as changing "Hello" to "Hello today".
Then, track the changes and commit them:

```sh
$ git add -A && git commit -m 'Add time'
```

You will notice that Git tells you there is `1 insertion(+), 1 deletion(-)`.
This is a bit odd: we changed one line, so why are there two changes?
The reason is that Git tracks changes at the granularity of lines.
When you edit a line, Git sees this as "you deleted the line that was there, and you added a new line". The fact that the "deleted" and the "added" lines are similar is not relevant.

If you've already used Git before, you may have heard of the `-a` flag to `git commit`, which could replace the explicit `git add -A` in our case.
The reason we aren't using it here, and the reason why you should be careful if you do use it, is that `-a` only stages changes to files Git already tracks, including deletions, but not new files.
This makes it very easy to accidentally forget to include some new files in a commit, and to then have to make another commit with just those files, which is annoying.

Anyway, we've made a commit on our `feature/today` branch.
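As an aside, this is also a good moment to see the _revert_ operation from the theory section in action. The following self-contained sketch uses its own scratch repository (all names are made up), so it leaves our `example` repository and its `feature/today` branch untouched:

```sh
# Sketch: revert creates a new commit that undoes an earlier one.
# This uses a separate scratch repository; names are made up.
mkdir revert-demo && cd revert-demo
git init -q -b main
git config user.name 'Example' && git config user.email 'example@example.com'
echo 'Hello' > hello.txt
git add -A && git commit -q -m 'Initial commit'
echo 'Oops' >> hello.txt
git add -A && git commit -q -m 'A change we will regret'

git revert --no-edit HEAD  # new commit undoing the latest one
cat hello.txt              # back to just 'Hello'
git log --oneline          # three commits: the regretted one is still there
```

Note that the bad commit is not erased; the history records both the mistake and its undoing.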
In case we want to make sure that we are indeed on this branch, we can ask Git:

```sh
$ git branch
```

This will output a list of branches, with an asterisk `*` next to the one we are on.

Let's now switch back to our main branch.
Depending on your Git version, this branch might have a different name, so look at the output of the previous command and use the right one, such as `master` or `main`:

```sh
$ git switch main
```

To see what happens when two commits conflict, let's make a change to our `hello.txt` file that conflicts with the one in the branch we just made.
For instance, replace "Hello" with "Hello everyone".
Then, track the change and commit it as before.

At this point, we have two branches, our main branch and `feature/today`, that have diverged: each has one commit that is not in the other.
Let's ask Git to merge the branches, that is, bring the commits from the branch we specify into the current branch:

```sh
$ git merge feature/today
```

Git will optimistically start with `Auto-merging hello.txt`, but this will soon fail with a `Merge conflict in hello.txt`.
Git will ask us to fix the conflicts and commit the result manually.

What does `hello.txt` look like now?

```sh
$ cat hello.txt
<<<<<<< HEAD
Hello everyone
=======
Hello today
>>>>>>> feature/today
Goodbye
```

Let's take a moment to understand this. The last line hasn't changed, because it's not a part of the conflict.
The first line has been expanded to include both versions: between the `<<<` and `===` is the version in `HEAD`, that is, the "head", the latest commit, of the current branch.
Indeed, on our main branch the first line was `Hello everyone`.
Between the `===` and the `>>>` is the version in `feature/today`.
What we need to do is manually merge the changes, i.e., edit the file to replace the conflict, including the `<<<`, `===`, and `>>>` lines, with the merged content we want.
For instance, we could end up with a file containing the following:

```sh
$ cat hello.txt
Hello everyone today
Goodbye
```

This is one way to merge the file. We could also have chosen `Hello today everyone`, or perhaps we would rather discard one of the two changes and keep `Hello everyone` or `Hello today`.
Or perhaps we want yet another change, such as `Hello hello`. Git does not care; it only wants us to decide what the merged version should be.

Once we have made our merge changes, we should add the changes and commit as before:

```sh
$ git add -A && git commit -m 'Merge'
```

Great. Wait, no, actually, not so great. That's a pretty terrible commit message. It's way too short and not descriptive.
Thankfully, _because we have not pushed our changes to another clone of the repository yet_, we can still modify our commits!
This is just like how a falling tree makes no sound if there's no one around to hear it: if nobody can tell, it did not happen.
We can change our commit now, and when we push it to another clone, the clone will only ever see the modified commit.
However, if we had already pushed our commit to a clone, we could not change it any more, as the clone would get confused by a changing commit, since commits are supposed to be immutable.

To change our commit, which again should only be done if the commit hasn't been pushed yet, we "amend" it:

```sh
$ git commit --amend -m 'Merge the feature/today branch'
```

We have only modified the commit message here, but we could also have modified the commit contents, i.e., the changes themselves.

Sometimes we make changes we don't actually want, for instance temporary changes while debugging some code.
Let's make a "bad" change:

```sh
$ echo 'asdf' >> hello.txt
```

We can restore the file to its state as of the latest commit to cancel this change:

```sh
$ git restore hello.txt
```

Done!
Our temporary changes have disappeared.
You can also use `.` to restore all files in the current directory, or pass any other path.
However, keep in mind that "disappeared" really means disappeared. It's as if we never changed the file, as the file is now in the state it was in after the latest commit.
Do not use `git restore` unless you actually want to lose your changes.

Sometimes we accidentally add files we don't want. Perhaps a script went haywire, or perhaps we copied some files by accident.
Let's make a "bad" new file:

```sh
$ echo 'asdf' > mistake.txt
```

We can ask Git to "clean" the repository, which means removing all untracked files and directories.
However, because this will delete files, we'd better first run it in "dry run" mode using `-n`:

```sh
$ git clean -fdn
```

This will show a list of files that _would_ be deleted if we didn't include `-n`.
If we're okay with the proposed deletion, let's do it for real:

```sh
$ git clean -fd
```

Now our `mistake.txt` is gone.

Finally, before we move on to GitHub, one more thing: keep in mind that Git only tracks _files_, not _folders_.
Git will only keep track of folders if they are a part of a file's path.

So if we create a folder and ask Git what it sees, it will tell us there is nothing, because the folder is empty:

```sh
$ mkdir folder
$ git status
```

If you need to include an "empty" folder in a Git repository for some reason, you should add some empty file to it, so that Git can track the folder as part of that file's path.

Let's now publish our repository. Go to [GitHub](https://github.com) and create a repository using the "New" button on the home page.
You can make it public or private, but do not create files such as "read me" files or anything else, just an empty repository.

Then, follow the GitHub instructions for pushing an existing repository from the command line. Copy and paste the commands GitHub gives you.
These commands add the newly-created GitHub repository as a "remote" of your local repository, which is to say, another clone of the repository that Git knows about.
Since it is the only remote, it will also be the default one. The default remote is traditionally named `origin`.
The commands GitHub provides also push your commits to this remote.
Once you've executed the commands, you can refresh the page of your GitHub repository and see your files.

Now make a change to your `hello.txt`, track the change, and commit it.
You can then sync the commit with the GitHub repository clone:

```sh
$ git push
```

You can also get commits from GitHub:

```sh
$ git pull
```

Pulling will do nothing in this case, since nobody else is using the repository.
In a real-world scenario, other developers would also have a clone of the repository on their machine and use GitHub as their default remote.
They would push their changes, and you would pull them.

Importantly, `git pull` only synchronizes the current branch. If you would like to sync commits from another branch, you must `git switch` to that branch first.

Similarly, `git push` only synchronizes the current branch, and if you create a new branch you must tell it where to push with `-u`, passing both the remote name and the branch name:

```sh
$ git switch -c example
$ git push -u origin example
```

Publishing your repository online is great, but sometimes there are files you don't want to publish.
For instance, the binary files compiled from source code in the repository probably should not be in the repository, since they can be recreated easily and would only take up space.
Files that contain sensitive data such as passwords should also not be in the repository, especially if it's public.
Let's simulate a sensitive file:

```sh
$ echo '1234' > password.txt
```

We can tell Git to pretend this file doesn't exist by adding a line with its name to a special file called `.gitignore`:

```sh
$ echo 'password.txt' >> .gitignore
```

Now, if you try `git status`, it will tell you that `.gitignore` was created, but it will not mention `password.txt`, since you told Git to ignore it.

You can also ignore entire directories, by adding a line such as `bin/` to `.gitignore`.
Note that ignoring only works for files that haven't been committed to the repository yet.
If you had already made a commit in which `password.txt` exists, adding its name to `.gitignore` would only ignore future changes, not past ones.
If you accidentally push to a public repository a commit with a file that contains a password, you should assume that the password is compromised and immediately change it.
There are bots that scan GitHub looking for passwords that have been accidentally committed, and they will find your password if you leave it out there, even for a few seconds.

Now that you have seen the basics of Git, it's time to contribute to an existing project!
You will do this through a _pull request_, which is a request that the maintainers of an existing project pull your changes into their project.
This is a GitHub concept; from Git's perspective, it's merely syncing changes between clones of a repository.

Go to [`sweng-example/hello`](https://github.com/sweng-example/hello) and click on the "Fork" button.
A _fork_ is a clone of the repository under your own GitHub username, which you need here because you do not have write access to `sweng-example/hello` and thus cannot push changes to it.
Instead, you will push changes to your fork, to which you do have write access, and then ask the maintainers of `sweng-example/hello` to accept the change.
You can create branches within a fork as well, as a fork is just another clone of the repository.
Typically, if you are a collaborator on a project, you will use a branch in the project's main repository, while if you are an outsider wanting to propose a change, you will create a fork first.

Now that you have a forked version of the project on GitHub, click on the "Code" button and copy the SSH URL, which should start with `git@github.com:`.
Then, ask Git to make a local clone of your fork, though you should go back to your home directory first, since creating a repository within a repository causes issues:

```sh
$ cd ~
$ git clone git@github.com:...
```

Git will clone your fork locally, at which point you can make a change, commit, and push to your fork.
Once that's done, if you go to your fork on GitHub, there should be a banner above the code telling you that the branch in your fork is 1 commit ahead of the main branch in the original repository.
Click on the "Contribute" button and the "Open pull request" button that shows up, then confirm that you want to open a pull request, and write a description for it.

Congratulations, you've made your first contribution to an open source project!

The best way to get used to Git is to use it a lot. Use Git even for your own projects, even if you do not plan on using branches.
You can use private repositories on GitHub as backups, so that even if your laptop crashes you will not lose your code.

There are many advanced features in Git that can be useful in some cases, such as `bisect`, `blame`, `cherry-pick`, `stash`, and many more.
Read the [official documentation](https://git-scm.com/docs/) or find advanced tutorials online if you're curious!


## How does one write good commit messages?

Imagine being an archaeologist and having to figure out what happened in the past purely based on some half-erased drawings, some fossils, and some tracks.
You will eventually figure out something that could've happened to cause all this, but it will take time and you won't know if your guess is correct.
Wouldn't it be nice if there was instead a journal that someone kept, describing everything important they did and why they did it?

This is what commit messages are for: keeping track of what you did and why, so that other people will know even after you're done.
Commit messages are useful to people who will review your code before approving it for merging into the main branch, and to your colleagues who will investigate bugs months after the code was written.
Your colleagues in this context include "future you". No matter how "obvious" or "clear" the changes seem to you when you make them, a few months later you won't remember why you did something the way you did it.

The typical format of a commit message is a one-line summary, followed by a blank line, and then as many lines as needed for details. For instance, this is a good commit message:

```
Fix adding favorites on small phones

The favorites screen had too many buttons stacked on the same row.
On phones with small screens, there wasn't enough space to show them all,
and the "add" button was out of view.

This change adds logic to use multiple rows of buttons if necessary.
```

As we saw earlier, "squashing" commits is an option when merging your code into the main branch, so not all commits on a branch need such detailed messages.
Sometimes a commit is just "Fix a typo" or "Add a comment per the review feedback". These commits aren't important to understanding the changes,
so their messages will be dropped when the branch is squashed into a single commit while being merged.

The one-line summary is useful to get an overview of the history without having to see every detail.
You can see it on online repositories such as GitHub, but also locally.
Git has a `log` command to show the history, and `git log --oneline` will show only the one-line summary of each commit.

A good summary is short and in the imperative mood.
For instance:
- "Fix bug #145"
- "Add an HD version of the wallpaper"
- "Support Unicode 14.0"

The details should describe _what_ the changes do and _why_ you made them, but not _how_.
There is no point in describing how, because the commit message is associated with the commit contents, and those already describe how you changed the code.


## How can we avoid merging buggy code?

Merging buggy code into the main branch of a repository is an annoyance for all contributors to that repository.
They will have to fix the code before doing the work they actually want to do, and they may not all fix it in the same way, leading to conflicts.

Ideally, we would only accept pull requests if the resulting code compiles, is "clean" according to the team's standards, and has been tested.
Different teams have different ideas of what "clean" code is, as well as what "testing" means, since it could be manual, automated, performed on one or multiple machines, and so on.

When working in an IDE, there will typically be menu options to analyze the code for cleanliness, to compile the code, to run the code, and to run automated tests if the developers wrote some.
However, not everyone uses the same IDE, which means different developers may have different definitions of what these operations mean.

The main issue with using IDE operations to check properties of the code is that humans make mistakes.
On a large enough project, human mistakes happen all the time. For instance, it's unreasonable to expect hundreds of developers to never forget even once to check that the code compiles and runs.
Checking for basic mistakes is also a poor use of people's time. Reviewing code should be about the logic of the code, not whether every line is syntactically valid; that's the compiler's job.
We would instead like to _automate_ the steps we need to check code.
This is done using a _build system_, such as CMake for C++, MSBuild for C#, or Gradle for Java.
There are many build systems, some of which support multiple languages, but they all fundamentally provide the same feature: build automation.
A build system can invoke the compiler on the right files with the right flags to compile the code, invoke the resulting binary to run the code,
and even perform more complex operations such as downloading dependencies based on their name if they have not been downloaded already.

Build systems are configured with code. They typically have a custom declarative language embedded in some other language such as XML.
Here is an example of build code for MSBuild:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <PackageReference Include="Microsoft.Z3" Version="4.10.2" />
  </ItemGroup>
</Project>
```

This code tells MSBuild that (1) this is a .NET project, which is the runtime typically associated with C#, and (2) it depends on the library `Microsoft.Z3`, specifically its version `4.10.2`.
One can then run MSBuild with this file from the command line, and MSBuild will compile the project, after first downloading the library it depends on if it hasn't been downloaded already.
In this case, the library name is resolved to an actual library by looking it up on [NuGet](https://www.nuget.org/), the package catalog associated with MSBuild.

Build systems remove the dependency on an IDE to build and run code, which means everyone can use the editor they want as long as they use the same build system.
Most IDEs can use build system code as the basis for their own configuration. For instance, the file above can be used as-is by Visual Studio to configure a project.

Build systems enable developers to build, run, and check their code anywhere. But "anywhere" still has to be somewhere, so which machine or machines should they use?
Once again, using a specific developer's machine is not a good idea, because developers customize their machines according to their personal preferences.
The machines developers use may not be representative of the machines the software will actually run on when used by customers.

Just as we defined builds using code through a build system, we can define environments using code!
Here is an example of environment definition code for the Docker container system, which you do not need to understand in detail:

```
FROM node:12-alpine
RUN apk add python g++ make
COPY . .
RUN yarn install
CMD ["node", "src/index.js"]
EXPOSE 3000
```

This code tells Docker to use the `node:12-alpine` base environment, which has Node.js preinstalled on an Alpine Linux environment.
Then, Docker should run `apk add` to install specific packages, including `make`, a build system.
Docker should then copy the current directory inside of the container, and run `yarn install` to invoke Node.js's `yarn` build system to pre-install dependencies.
The file also tells Docker the command to run when starting this environment and the HTTP port to expose to the outside world.

Defining an environment using code enables developers to run and test their code in specific environments that can be customized to match customers' environments.
Developers can also define multiple environments, for instance to ensure their software can run on different operating systems, or on operating systems in different languages.

We have been using the term "machine" to refer to the environment code runs in, but in practice it's unlikely to be a physical machine, as that would be inefficient and costly.
Pull requests and pushes happen fairly rarely, given that modern computers can do billions of operations per second. Provisioning one machine exclusively for one project would be a waste.

Instead, automated builds use _virtual machines_ or _containers_.
A virtual machine is a program that emulates an entire machine inside it. For instance, one can run an Ubuntu virtual machine on Windows.
From Windows's perspective, the virtual machine is just another program. But for programs running within the virtual machine, it looks like they are running on real hardware.
This enables partitioning resources: a single physical machine can run many virtual machines, especially if the virtual machines are not all busy at the same time.
It also isolates the programs running inside the virtual machine, meaning that even if they attempt to break the operating system, the world outside of the virtual machine is not affected.
However, virtual machines have overhead, especially when running many of them.
Even if 100 virtual machines all run the exact same version of Windows, for instance, they must all run an entire separate instance of Windows, including the Windows kernel.
This is where _containers_ come in. Containers are a lightweight form of virtual machines that share the host operating system's kernel instead of including their own.
Thus, there is less duplication of resources, at the cost of less isolation.
Typically, services that allow anyone to upload code will use virtual machines to isolate it as much as possible, whereas private services can use containers since they trust the code they run.

Using build systems and virtual machines to automatically compile, run, and check code whenever a developer pushes commits is called _continuous integration_,
and it is a key technique in modern software development.
When a developer opens a pull request, continuous integration can run whatever checks have been configured, such as testing that the code compiles and passes some static analysis.
Merging can then be blocked unless continuous integration succeeds.
Thus, nobody can accidentally merge broken code into the main branch, and developers who review pull requests don't need to manually check that the code works.
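Concretely, the checks a continuous integration job runs often boil down to a short script that invokes the build system step by step and fails the whole run as soon as any step fails. Here is a hypothetical sketch; the steps only write log lines, standing in for real build system invocations such as compiling or running tests:

```sh
#!/bin/sh
# Hypothetical CI check script: each step must succeed or the whole run fails.
# The steps are stand-ins; a real script would invoke the build system instead.
set -e                                # abort at the first failing command

echo 'Compiling...'       >> ci.log   # e.g. invoke MSBuild or Gradle here
echo 'Running tests...'   >> ci.log   # e.g. run the automated test suite
echo 'All checks passed'  >> ci.log   # only reached if every step succeeded
```

If any step exits with a non-zero status, `set -e` stops the script right there, the final line is never reached, and the CI system marks the run as failed, which blocks the merge.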
+
+Importantly, the result of a continuous integration run is a statement about a concrete machine: a failing run means that there exists a machine on which the code fails, and a successful run means that there exists one on which it succeeds.
+It is possible that code works fine on the machine of the developer who wrote it, yet fails in continuous integration.
+A common response to this is ""but it works on my machine!"", but that is irrelevant. The goal of software is not to work on the developer's machine but to work for users.
+
+Problems with continuous integration typically stem from differences between developers' machines and the virtual machines configured for continuous integration.
+For instance, a developer may be testing a phone app on their own phone, with a test case of ""open the 'create item' page and click the 'no' button"", which they can do fine.
+But their continuous integration environment may be set up with a phone emulator that has a small screen with few pixels, and the way the app is written means
+the 'no' button is not visible:
+

+
+The code thus does not work in the continuous integration environment, not because of a problem with continuous integration, but because the code does not work on some phones.
+The developer should fix the code so that the ""No"" button is always visible, perhaps below the ""Yes"" button with a scroll bar if necessary.
+
+---
+
+#### Exercise: Add continuous integration
+Go back to the GitHub repository you created, and add some continuous integration!
+GitHub includes a continuous integration service called GitHub Actions, which is free for basic use.
+Here is a basic file you can use, which should be named `.github/workflows/example.yml`:
+```yaml
+on: push
+jobs:
+  example:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - run: echo ""Hello!""
+```
+After pushing this file to the GitHub repository and waiting for a few seconds, you should see a yellow circle next to the commit indicating GitHub Actions is running,
+which you can also see in the ""Actions"" tab of the repository.
+This is a very basic action that only clones the repository and prints text. In a real-world scenario, you would at least invoke a build system.
+GitHub Actions is quite powerful, as you can read on [the GitHub Actions documentation](https://docs.github.com/en/actions).
+
+---
+
+Running version control servers, continuous integration, and other such infrastructure tasks were traditionally called ""operations"", and were handled by a separate team from the ""development"" team.
+Nowadays, however, the two have merged into ""DevOps"", in which the same team does both, making it easier for developers to configure exactly the operations they want.
+
+
+## Summary
+
+In this lecture, you learned:
+- Version control systems, and the differences between 1st, 2nd, and 3rd generation
+- Git: how to use it for basic scenarios, and how to write good commit messages
+- Continuous integration: build systems, virtual machines, and containers
+
+You can now check out the [exercises](exercises/)!
+",CS-305: Software engineering +"# Design + +Imagine if, in order to display ""Hello, World!"" on a screen, you had to learn how everything worked. +You'd need to learn all about LED lights and their physics. +And then you'd need to read the thousands of pages in CPU data sheets to know what assembly code to write. + +Instead, you can write `print(""Hello, World!"")` in a language like Scala or Python, and that's it. +The language's runtime does all the work for you, with the help of your operating system, which itself contains drivers for hardware. +Even those drivers don't know about LED lights, as the screen itself exposes an interface to display data that the drivers use. +Python itself isn't a monolith either: it contains sub-modules such as a tokenizer, a parser, and an interpreter. + +Unfortunately, it's not easy to write large codebases in a clean way, and this is where _design_ comes in. + + +## Objectives + +After this lecture, you should be able to: +- Apply _modularity_ and _abstraction_ in practice +- Compare ways to handle _failures_ +- Organize code with _design patterns_ +- Use common patterns for modern applications + + +## How can one design large software systems? + +[Barbara Liskov](https://en.wikipedia.org/wiki/Barbara_Liskov), one of the pioneers of computer science and programming language design in particular, +once [remarked](https://infinite.mit.edu/video/barbara-liskov) that ""_the basic technique we have for managing the complexity of software is modularity_"". + +Modularity is all about dividing and sub-dividing software into independent units that can be separately maintained and be reused in other systems: _modules_. +Each module has an _interface_, which is what the module exposes to the rest of the system. The module _abstracts_ some concept and presents this abstraction to the world. +Code that uses the module does not need to know or care about how the abstraction is implemented, only that it exists. 
+ +For instance, one does not need to know woodworking or textiles to understand how to use a sofa. +Some sofas can even be customized by customers, such as choosing whether to have a storage compartment or a convertible bed, because the sofas are made of sub-modules. + +In programming, a module's interface is typically called an ""API"", which is short for ""Application Programming Interface"". +APIs contain objects, functions, errors, constants, and so on. +This is _not_ the same thing as the concept of an `interface` in programming languages such as Java. +In this lecture, we will talk about the high-level notion of an interface, not the specific implementation of this concept in any specific language. + +Consider the following Java method: +```java +static int compute(String expr) { + // ... +} +``` +This can be viewed as a module, whose interface is the method signature: `int compute(String expr)`. +Users of this module do not need to know how the module computes expressions, such as returning `4` for `2 + 2`. They only need to understand its interface. + +A similar interface can be written in a different technology, such as Microsoft's COM: +```cpp +[uuid(a03d1424-b1ec-11d0-8c3a-00c04fc31d2f)] +interface ICalculator : IDispatch { + HRESULT Compute([in] BSTR expr, + [out] long* result); +}; +``` +This is the interface of a COM component, which is designed to be usable and implementable in different languages. +It fundamentally defines the same concept as the Java one, except it has a different way of defining errors (exceptions vs. `HRESULT` codes) +and of returning data (return values vs. `[out]` parameters). +Anyone can use this COM module given its interface, without having to know or care about how it is implemented and in which language. + +Another kind of cross-program interface is HTTP, which can be used through server frameworks: +```java +@Get(""/api/v1/calc"") +String compute(@Body String expr) { + // ... 
+}
+```
+The interface of this HTTP server is `HTTP GET /api/v1/calc` with the knowledge that the lone parameter should be passed in the body, and the return value will be a string.
+The Java method name is _not_ part of the interface, because it is not exposed to the outside world. Similarly, the name of the parameter `expr` is not exposed either.
+
+These three interfaces could be used in a single system that combines three modules: an HTTP server that internally calls a COM component that internally calls a Java method.
+The HTTP server doesn't even have to know the Java method exists if it goes through the COM component, which simplifies its development.
+
+However, this requires some discipline in enforcing boundaries. If the Java method creates a file on the local disk, and the HTTP server decides to use that file, then the modularity is broken.
+This problem also exists at the function level. Consider the same Java method as above, but this time with an extra method:
+```java
+static int compute(String expr);
+
+static void useReversePolishNotation();
+```
+The second method is intended to configure the behavior of the first.
+However, it creates a dependency between any two modules that use `compute`, since they have to agree on whether to call `useReversePolishNotation` or not, as the methods are `static`.
+If a module tries to use `compute` assuming the default infix notation, but another module in the system has chosen to use reverse Polish notation, the former will fail.
+
+Another common issue with modules is voluntarily exposing excess information, usually because it makes implementation simpler in the short term.
+For instance, should a `User` class with a `name` and a `favoriteFood` also have a Boolean property `fetchedFromDatabase`?
+It may make sense in a specific implementation, but the concept of tracking users' favorite foods is completely unrelated to a database.
+The maintainers of code using such a `User` class would need to know about databases any time they deal with users, +and the maintainers of the `User` class itself could no longer change the implementation of `User` to be independent of databases, since the interface mandates a link between the two concepts. +Similarly, at the package level, a ""calc"" package for a calculator app with a `User` class and a `Calculator` class probably should not have a `UserInterfaceCheckbox` class, as it is a much lower level concept. + +--- +#### Exercise +What would a `Student` class's interface look like... +- ... for a ""campus companion"" app? +- ... for a course management system? +- ... for an authentication system? +How will they differ and why? +

+Proposed solution:

+ +A campus companion app could view students as having a name and preferences such as whether to display vegetarian menus first or in what order to display the user's courses. + +The campus companion does not care about whether the student has paid their fee for the current semester, which is something a course management system might care about, +along with what major the student is in. + +Neither the campus companion nor the course management should know what the user's password is, or even the concept of passwords since the user might log in using biometric data or two-factor authentication. +Those concepts are what the authentication system cares about for students. + +
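As a sketch of what this could mean in code — all names here are illustrative, not from the course material — the three systems could define three deliberately unrelated `Student` types, each exposing only what its system needs:

```java
import java.util.List;

public class StudentViews {
    // Campus companion app: only display preferences.
    record CampusStudent(String name, boolean vegetarianMenusFirst, List<String> courseDisplayOrder) {}

    // Course management system: academic and administrative state.
    record ManagedStudent(String name, String major, boolean semesterFeePaid) {}

    // Authentication system: credentials only; no preferences, no academic data.
    record AuthStudent(String username, byte[] credentialHash) {}

    public static void main(String[] args) {
        var campus = new CampusStudent("Ada", true, List.of("CS-305"));
        var managed = new ManagedStudent("Ada", "Computer Science", true);
        // The three types share nothing: none of them leaks the others' concerns.
        System.out.println(campus.name() + " studies " + managed.major());
    }
}
```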

+

+
+
+## What does modularity require in practice?
+
+It is all too easy to write software systems in which each ""module"" is a mishmash of concepts and modules depend on each other without any clear pattern.
+Maintaining such a system requires reading and understanding most of the system's code, which doesn't scale to large systems.
+We have seen the theoretical benefits and pitfalls of modularity; now let's see how to design modular systems in practice.
+
+We will talk about the _regularity_ of interfaces, _grouping_ and _layering_ modules, and organizing modules by _abstraction level_.
+
+### Regularity
+
+Consider fractals such as this one:
+

+ +This image may look complex, but because it is a fractal, it is very regular. +It can be [formally defined](https://en.wikipedia.org/wiki/Mandelbrot_set#Formal_definition) with a short mathematical equation and a short sentence. +Contrast it to this image: + +

+ +This is random noise. It has no regularity whatsoever. The only way to describe it is to describe each pixel in turn, which takes a long time. + +The idea that things should be regular and have short descriptions applies to code as well. +Consider the following extract from Java's `java.util` package: +```java +class Stack { + /** Returns the 1-based position where an object is on this stack. */ + int search(Object o); +} +``` +For some reason, `search` returns a 1-based position, even though every other index in Java is 0-based. +Thus, any description of `search` must include this fact, and a cursory glance at code that uses `search` may not spot a bug if the index is accidentally used as if it was 0-based. + +One should follow the ""principle of least surprise"", i.e., things should behave in the way most people will expect them to, and thus not have exceptions to common rules. +Another example from Java is the `URL` class's `equals` method. +One would expect that, like any other equality check in Java, `URL::equals` checks the fields of both objects, or perhaps some subset of them. +However, what it [actually does](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/net/URL.html#equals(java.lang.Object)) is to check whether the two URLs resolve to the same IP address. +This means the result depends on whether the two URLs happen to point to the same IP at that particular point in time, and even whether the machine the code is running on has an Internet connection. +It also takes time to resolve IP addresses, which is orders of magnitude slower than usual `equals` methods that check for field equality. + +A more formal way to view regularity is [Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity): how many words do you need to describe something? +For instance, the fractal above has low Kolmogorov complexity because it can be described in very few words. One can write a short computer program to produce it. 
+In comparison, the random noise above has high Kolmogorov complexity because it can only be described with many words. A program to produce it has to produce each pixel individually.
+Any module whose description must include ""and..."" or ""except..."" has higher Kolmogorov complexity than it likely needs to.
+
+### Grouping
+
+What do the following classes have in common? `Map, Base64, Calendar, Formatter, Optional, Scanner, Timer, Date`.
+
+Not much, right? Yet they are all in the same `java.util` package in Java's standard library.
+This is not a good module interface: it contains a bunch of unrelated things!
+If you see that a Java program depends on `java.util`, you don't gain much information, because it is such a broad module.
+
+Now what do the following classes have in common? `Container, KeysView, Iterable, Sequence, Collection, MutableSet, Set, AsyncIterator`.
+
+This is much more straightforward: they are all collections, and indeed they are in the Python collections module.
+Unfortunately, that module is named `collections.abc`, because it's a fun acronym for ""abstract base classes"", which is not a great name for a module.
+But at least if you see that a Python program depends on `collections.abc`, after looking up the name, you now know that it uses data structures.
+
+The importance of _grouping_ related things together explains why global variables are such a problem.
+If multiple modules all access the same global variable, then they all effectively form one module since a programmer needs to understand how each of them uses the global variable to use any of them.
+The grouping done by global variables is accidental, and thus unlikely to produce useful groups.
+
+### Layering
+
+You may already know the networking stack's layers: the application layer uses the transport layer, which uses the network layer, and so on until the physical layer at the bottom-most level.
+The application layer does not use the network layer directly, nor does it even know there is a network layer. The network layer doesn't know there is an application or a transport layer, either.
+
+Layering is a way to define the dependencies between modules in a minimal and manageable way, so that maintaining a module can be done without knowledge of most other modules.
+
+There can be more than one module at a given layer: for instance, an app could use mobile and server modules, which form the layer below the app module.
+The server module itself may depend on an authentication module and a database module, which form the layer below, and so on.
+
+Thus, layer `N` depends _only_ on layer `N-1`, and the context for layer `N`'s implementation is the interface of layer `N-1`.
+By building layers in a tall stack, one minimizes the context of each layer's implementation.
+
+Sometimes, however, it is necessary to have one layer take decisions according to some higher-level logic, such as what comparison to use in a sorting function.
+Hardcoding knowledge about higher-level items in the sorting function would break layering and make it harder to maintain the sorting function.
+Instead, one can inject a ""comparison"" function as a parameter of the sort function:
+```scala
+def sort[A](items: Array[A], less_than: (A, A) => Boolean): Unit = {
+  // ...
+  if (less_than(items(i), items(j)))
+  // ...
+}
+```
+The higher-level layers can thus pass a higher-level comparison function, and the sorting function does not need to explicitly depend on any layer above it, solving the problem.
+This can also be done with objects by passing other objects as constructor parameters, and even with packages in languages that permit it such as [Ada](https://en.wikipedia.org/wiki/Ada_(programming_language)).
+
+Layering also explains the difference between _inheritance_, such as `class MyApp extends MobileApp`, and _composition_, such as `class MyApp { private MobileApp app; }`.
+The former requires `MyApp` to expose all of the interface of `MobileApp` in addition to its own, whereas the latter lets `MyApp` choose what to expose and optionally use `MobileApp` to implement its interface: + +


+
+In most cases, composition is the appropriate choice to avoid exposing irrelevant details to higher-level layers.
+However, inheritance can be useful if the modules are logically in the same layer, such as `LaserPrinter` and `InkjetPrinter` both inheriting from `Printer`.
+
+### Abstraction levels
+
+An optical fiber cable provides a very low-level abstraction, which deals with light to transmit individual bits.
+The Ethernet protocol provides a higher-level abstraction, which deals with MAC addresses and transmits bytes.
+A mobile app provides a high-level abstraction, which deals with requests and responses to transmit information such as the daily menus in cafeterias.
+
+If you had to implement a mobile app, and all you had available was an optical fiber cable, you would spend most of your time re-implementing intermediate abstractions,
+since defining a request for today's menu in terms of individual bits is hard.
+
+On the other hand, if you had to implement an optical fiber cable extension, and all you had available was the high-level abstraction of daily cafeteria menus, you would not be able to do your job.
+The high-level abstraction is convenient for high-level operations, but voluntarily hides low-level details.
+
+When designing a module, think about its _abstraction level_: where does it stand in the spectrum from low-level to high-level abstractions?
+If you provide an abstraction of a level higher than what is needed, others won't be able to do their work because they cannot access the low-level information they need.
+If you provide an abstraction of a level lower than what is needed, others will have to spend lots of time reinventing the high-level wheel on top of your low-level abstraction.
+
+You do not always have to choose a single abstraction level: you can expose multiple ones.
+For instance, a library might expose a module to transmit bits on optical fiber, a module to transmit Ethernet packets, and a module to make high-level requests.
+Internally, the high-level request module might use the Ethernet module, which might use the optical fiber module. Or not; there's no way for your customers to know, and there's no reason for them to care,
+as long as your modules provide working implementations of the abstractions they expose.
+
+A real-world example of differing abstraction levels is displaying a triangle using a GPU, which is the graphics equivalent of printing the text ""Hello, World!"".
+Using a high-level API such as [GDI](https://en.wikipedia.org/wiki/Graphics_Device_Interface), displaying a triangle requires around 10 lines of code.
+You can create a window object, create a triangle object, configure these objects' colors, and show them.
+Using a lower-level API such as [OpenGL](https://en.wikipedia.org/wiki/OpenGL), displaying a triangle requires around 100 lines of code, because you must explicitly deal with vertices and shaders.
+Using an even lower-level API such as [Vulkan](https://en.wikipedia.org/wiki/Vulkan), displaying a triangle requires around 1000 lines of code,
+because you must explicitly deal with all of the low-level GPU concepts that even OpenGL abstracts away. Every part of the graphics pipeline must be configured in Vulkan.
+But this does not make Vulkan a ""bad"" API, only one that is not adapted to high-level tasks such as displaying triangles.
+Instead, Vulkan and similar APIs such as [Direct3D 12](https://en.wikipedia.org/wiki/Direct3D#Direct3D_12) are intended to be used for game engines and other ""intermediate"" abstractions that
+themselves provide higher-level abstractions. For instance, OpenGL can be implemented as a layer on top of Vulkan.
+Without such low-level abstractions, it would be impossible to implement high-level abstractions efficiently, and indeed performance was the main motivation for the creation of APIs such as Vulkan.
+
+When implementing an abstraction on top of a lower-level abstraction, be careful to avoid _abstraction leaks_.
+An abstraction leak is when a low-level detail ""pops out"" of a high-level abstraction, forcing users of the abstraction to understand and deal with lower-level details than they care about.
+For instance, if the function to show today's menu has the signature `def showMenu(date: LocalDate, useIPv4: Boolean)`, anyone who wants to write an application that shows menus must
+explicitly think about whether they want to use IPv4, a lower-level detail that should not be relevant at all in this context.
+Note that the terminology ""abstraction leak"" is not related to the security concept of ""information leak"", despite both being leaks.
+
+One infamous abstraction leak is provided by the former C standard library function `char* gets(char* str)`.
+It is a ""former"" function because it was considered so bad that it became the only function ever removed from the C standard library, breaking compatibility with previous versions.
+What `gets` does is to read a line of input on the console and store it in the memory pointed to by `str`.
+However, there's a mismatch in abstraction levels: `gets` tries to provide the abstraction of ""a line of text"", yet it uses the C concept of ""a pointer to characters"".
+Because the latter has no associated length, this abstraction leak is a security flaw.
+No matter how large the buffer pointed to by `str` is, the user could write more characters than that, at which point `gets` would overwrite whatever is next in memory with whatever data the user typed.
+
+### Recap
+
+Design systems such that individual modules have a regular API that provides one coherent abstraction.
+Layer your modules so that each module only depends on modules in the layer immediately below, and the layers are ordered by abstraction level. +For instance, here is a design in which the green module provides a high-level abstraction and depends on the yellow modules, which provide abstractions of a lower level, +and themselves depend on the red modules and their lowest-level abstractions: + +


+
+One way to do this at the level of individual functions is to write the high-level module first, with a high-level interface,
+and an implementation that uses functions you haven't written yet.
+For instance, for a method that predicts the weather:
+
+```java
+int predictWeather(LocalDate date) {
+    var past = getPastWeather(date);
+    var temps = extractTemperatures(past);
+    return predict(temps);
+}
+```
+
+After doing this, you can implement `getPastWeather` and others, themselves in terms of lower-level interfaces, until you either implement the lowest level yourself or reuse existing code for it.
+For instance, `getPastWeather` will likely be implemented with some HTTP library, while `extractTemperatures` will likely be custom-made for the format of `past`.
+
+---
+#### Exercise
+Look at `App.java` in the [`calc`](exercises/lecture/calc) project. It mixes all kinds of concepts together. Modularize it!
+Think about what modules you need, and how you should design the overall system.
+First off, what will the new `main` method look like?

+Suggested solution:


+
+Create one function for obtaining user input, one for parsing it into tokens, one for evaluating these tokens, and one for printing the output or lack thereof.
+The evaluation function can internally use another function to execute each individual operator, so that all operators are in one place and independent of input parsing.
+See the [solution file](exercises/solutions/lecture/Calc.java) for an example, which has the following structure:
+
+```mermaid
+graph TD
+    A[main]
+    B[getInput]
+    C[parseInput]
+    D[compute]
+    E[execute]
+    F[display]
+    A --> B
+    A --> C
+    A --> D
+    A --> F
+    D --> E
+```
+

+

+
+---
+
+At this point, you may be wondering: how far should you go with modularization? Should your programs consist of thousands of tiny modules stacked hundreds of layers high?
+Probably not, as this would cause maintainability issues just like having a single big module for everything does. But where to stop?
+
+There is no single objective metric to tell you how big or small a module should be, but here are some heuristics.
+You can estimate _size_ using the number of logical paths in a module. How many different things can the module do?
+If you get above a dozen or so, the module is probably too big.
+You can estimate _complexity_ using the number of inputs for a module. How many things does the module need to do its job?
+If it's more than four or five, the module is probably too big.
+
+Remember the acronym _YAGNI_, for ""You Aren't Gonna Need It"".
+You could split that module into three even smaller parts that hypothetically could be individually reused, but will you need this? No? You Aren't Gonna Need It, so don't do it.
+You could provide ten different parameters for one module to configure every detail of what it does, but will you need this? No? You Aren't Gonna Need It, so don't do it.
+
+One way to discuss designs with colleagues is through the use of diagrams such as [UML class diagrams](https://en.wikipedia.org/wiki/Class_diagram),
+in which you draw modules with their data and operations and link them to indicate composition and inheritance relationships.
+Keep in mind that the goal is to discuss system design, not to adhere to specific conventions.
+As long as everyone agrees on what each diagram element means, whether or not you adhere to a specific convention such as UML is irrelevant.
+
+Beware of the phenomenon known as ""[cargo cult programming](https://en.wikipedia.org/wiki/Cargo_cult_programming)"".
+The idea of a ""cargo cult"" originated on remote islands used by American soldiers as bases during wars.
+These islands were home to native populations who had no idea what planes were, but who realized that when Americans did specific gestures involving military equipment,
+cargo planes full of supplies landed on the islands. They naturally hypothesized that if they, the natives, could replicate these same gestures, more planes might land!
+Of course, from our point of view we know this was useless because they got the correlation backwards: Americans were doing landing gestures because they knew planes were coming, not the other way around.
+But the natives did not know that, and tried to get cargo planes to land.
+Some of these cults lasted for longer than they should have, and their modern-day equivalent in programming is engineers who design their system
+""because some large company, like Google or Microsoft, does it that way"" without knowing or understanding why the large company does it that way.
+Typically, big systems in big companies have constraints that do not apply to the vast majority of systems, such as dealing with thousands of requests per second or having to provide extreme availability guarantees.
+
+
+## How can one mitigate the impact of failures?
+
+What should happen when one part of a system has a problem?
+
+[Margaret Hamilton](https://en.wikipedia.org/wiki/Margaret_Hamilton_(software_engineer)), who along with her team wrote the software that put spaceships in orbit and people on the Moon,
+recalled [in a lecture](https://www.youtube.com/watch?v=ZbVOF0Uk5lU) how she tried persuading managers to add a safety feature to a spaceship.
+She had brought her young daughter to work one day, and her daughter tried the spaceship simulator. Surprisingly, the daughter managed to crash the software running in the simulator.
+It turned out that the software was not resilient to starting one operation while the spaceship was supposed to be in a completely different phase of flight.
+Hamilton tried to persuade her managers that the software should be made resilient to such errors, but as she recalls it:
+""_[the managers] said 'this won't ever happen, astronauts are well-trained, they don't make mistakes... the very next mission, Apollo 8, this very thing happened [...] it took hours to get [data] back_"".
+
+The lack of a check for this condition was an _error_, i.e., the team who programmed the software chose not to consider a problem that might happen in practice.
+Other kinds of errors involve forgetting to handle a failure case, or writing code that does not do what the programmer thinks it does.
+
+Errors cause _defects_ in the system, which can be triggered by external inputs, such as an astronaut pressing the wrong button.
+If defects are not handled, they cause _failures_, which we want to avoid.
+
+Errors are inevitable in any large system, because systems involve humans and humans are fallible. ""Just don't make errors"" is not a realistic solution.
+Even thinking about all possible failure cases is hard; consider the ""[Cat causes login screen to hang](https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/1538615)"" bug in Ubuntu.
+Who would have thought that thousands of characters in a username input field was a realistic possibility from a non-malicious user?
+
+Preventing failures thus requires preventing defects from propagating through the system, i.e., _mitigating_ the impact of defects.
+We will see four ways to do it, all based on modules: isolating, repairing, retrying, and replacing.
+
+How much effort you should put into tolerating defects depends on what is at stake.
+A small script you wrote yourself to fetch cartoons should be tolerant to temporary network errors, but does not need advanced recovery techniques.
+On the other hand, the [barrier over the Thames river](https://www.youtube.com/watch?v=eY-XHAoVEeU) that prevents mass floods needs to be resilient against lots of possible defects.
+ +### Isolating + +Instead of crashing an entire piece of software, it is desirable to _isolate_ the defect and crash only one module, as small and close to the source of the defect as possible. +For instance, modern Web browsers isolate each tab into its own module, and if the website inside the tab causes a problem, only that tab needs to crash, not the entire browser. +Similarly, operating systems isolate each program such that only the program crashes if it has a defect, not the entire operating system. + +However, only isolate if the rest of the program can reasonably function without the failed module. +For instance, if the module responsible for drawing the overall browser interface crashes, the rest of the browser cannot function. +On the other hand, crashing only a browser tab is acceptable, as the user can still use other tabs. + +### Repairing + +Sometimes a module can go into unexpected states due to defects, at which point it can be _repaired_ by switching to a well-known state. +This does not mean moving from the unexpected state in some direction, since the module does not even know where it is, but replacing the entirety of the module's state with a specific ""backup"" state that is known to work. +An interesting example of this is [the ""top secret"" room](https://zelda-archive.fandom.com/wiki/Top_Secret_Room) in the video game _The Legend of Zelda: A Link to the Past_. +If the player manages to get the game into an unknown state, for instance by switching between map areas too quickly for the game to catch up, +the game recognizes that it is confused and drops the player into a special room, and pretends that this is intentional and the player has found a secret area. + +One coarse form of repair is to ""turn it off and on again"", such as restarting a program, or rebooting the operating system. +The state immediately after starting is known to work, but this cannot be hidden from users and is more of a way to work around a failure. 
+
+However, only repair if the entirety of a module's state can be repaired to a state known to work.
+Repairing only part of a module risks creating a Frankenstein abomination that only makes the problem worse.
+
+### Retrying
+
+Not all failures are forever. Some failures come from external causes that can fix themselves without your intervention, and thus _retrying_ is often a good idea.
+For instance, if a user's Internet connection fails, any Web request your app made will fail. But it's likely that the connection will be restored quickly, for instance
+because the user was temporarily in a place with low cellular connectivity such as a tunnel.
+Thus, retrying some number of times before giving up avoids showing unnecessary failures to the user.
+How many times to retry, and how much to wait before retries, is up to you, and depends on the system and its context.
+
+However, only retry if a request is _idempotent_, meaning that doing it more than once has the same effect as doing it once.
+For instance, withdrawing cash from a bank account is not an idempotent request. If you retry it because you didn't get a response, but the request had actually reached the server, the cash will be withdrawn twice.
+
+You should also only retry when encountering problems that are _recoverable_, i.e., for which retrying has a chance to succeed because they come from circumstances beyond your control that could fix themselves.
+For instance, ""no internet"" is recoverable, and so is ""printer starting and not ready yet"". This is what Java tried to model as ""checked"" exceptions: if the exception is recoverable,
+the language should force the developer to deal with it.
+On the other hand, problems such as ""the desired username is already taken"" or ""the code has a bug that divides by zero"" are not recoverable, because retrying will hit the same issue again and again. 
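A retry loop with a bounded number of attempts can be sketched in a few lines. The helper below is a hypothetical example (the `Retrier` name and the fixed-wait policy are made up for illustration); it treats a `null` result as a recoverable failure and assumes the operation is idempotent:

```java
import java.util.function.Supplier;

// Hypothetical sketch: retry an *idempotent* operation up to maxAttempts times,
// waiting between attempts; returns null if every attempt fails.
class Retrier {
    static <T> T retry(Supplier<T> operation, int maxAttempts, long waitMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            T result = operation.get(); // must be idempotent!
            if (result != null) {
                return result;
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(waitMillis); // wait before the next attempt
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return null; // give up if interrupted
                }
            }
        }
        return null; // all attempts failed
    }
}
```

A real system might wait longer after each failed attempt (exponential backoff) instead of a fixed delay, and would distinguish recoverable from non-recoverable failures instead of retrying on any `null`.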
+ +### Replacing + +Sometimes there is more than one way to perform a task, and some of these ways can serve as backups, _replacing_ the main module if there is a problem. +For instance, if a fingerprint reader cannot recognize a user's finger because the finger is too wet, an authentication system could ask for a password instead. + +However, only replace if you have an alternative that is as robust and tested as the original one. +The ""backup"" module should not be old code that hasn't been run in years, but should be treated with the same care and quality bar as the main module. + + +## How can one reuse concepts across software systems? + +When designing a system, the context is often the same as in previous systems, and so are the user requirements. +For instance, ""cross over a body of water"" is a common requirement and context that leads to the natural solution ""build a bridge"". +If every engineer designed the concept of a bridge from scratch every time someone needed to cross a body of water, +each bridge would not be very good, as it would not benefit from the knowledge accumulated by building previous bridges. +Instead, engineers have blueprints for various kinds of bridges, select them based on the specifics of the problem, and propose improvements when they think of any. + +In software engineering, these kinds of blueprints are named _design patterns_, and are so common that one sometimes forgets they even exist. +For instance, consider the following Java loop: +```java +for (int item : items) { + // ... +} +``` +This `for` construct looks perfectly normal and standard Java, but it did not always exist. +It was introduced in Java 1.5, alongside the `Iterable` interface, instead of having every collection provide its own way to iterate. +This used to be known as ""the Iterator design pattern"", but nowadays it’s such a standard part of modern programming languages +that we do not explicitly think of it as a design pattern any more. 
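To see what the compiler relies on, here is a hypothetical `IntRange` collection (the class name and API are made up for illustration) that supports the `for` construct above purely by implementing `Iterable`, i.e., the standardized Iterator pattern:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical example: a half-open range of integers [start, end) usable in a
// for-each loop, simply because it implements Iterable.
class IntRange implements Iterable<Integer> {
    final int start;
    final int end;

    IntRange(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            int current = start;

            @Override public boolean hasNext() { return current < end; }

            @Override public Integer next() {
                if (!hasNext()) { throw new NoSuchElementException(); }
                return current++;
            }
        };
    }
}
```

Writing `for (int i : new IntRange(0, 3))` then iterates over 0, 1, and 2: the compiler desugars the loop into calls to `iterator()`, `hasNext()`, and `next()`.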
+ +Design patterns are blueprints, not algorithms. +A design pattern is not a piece of code you can copy-paste, but an overall description of what the solution to a common problem can look like. +You can think of it as providing the name of a dish rather than the recipe for it. +Have some fish? You could make fish with vegetables and rice, which is a healthy combo. Soy sauce is also a good idea as part of the sauce. +How exactly you cook the fish, or which vegetables you choose, is up to you. + +There are many patterns, and even more descriptions of them online. We provide a [short summary](DesignPatterns.md) of common ones. + +In this lecture, we will see patterns to separate the user interface of a program, the business logic that is core to the program, and the reusable strategies the program needs +such as retrying when a request fails. + +The problem solved by design patterns for user interfaces is a common one: software engineers must write code for applications that will run on different kinds of systems, such as a desktop app and a mobile app. +However, writing the code once per platform would not be maintainable: most of the code would be copy-pasted. +Any modification would have to be replicated on all platforms’ code, which would inevitably lead to one copy falling out of sync. + +Instead, software engineers should be able to write the core logic of the application once, and only write different code per platform for the user interface. +This also means tests can be written against the logic without being tied to a specific user interface. +This is a requirement in practice for any large application. +For instance, Microsoft Office is tens of millions of lines of code; it would be entirely infeasible to have this code duplicated in Office for Windows, Mac, Android, the web, and so on. + +The business logic is typically called the _model_, and the user interface is called the _view_. 
+We want to avoid coupling them, thus we naturally need something in the middle that will talk to both of them, but what? + +### Model-View-Controller (MVC) + +In the MVC pattern, the view and model are mediated by a controller, with which users interact. +A user submits a request to the controller, which interacts with the model and returns a view to the user: + +

+ +For instance, in a website, the user's browser sends an HTTP request to the controller, which eventually creates a view using data from the model, and the view renders as HTML. +The view and model are decoupled, which is good, but there are also disadvantages. +First, users don’t typically talk directly to controllers, outside of the web. +Second, creating a new view from scratch every time is not very efficient. + +### Model-View-Presenter (MVP) + +In the MVP pattern, the view and model are mediated by a presenter, but the view handles user input directly. +This matches the architecture of many user interfaces: users interact directly with the view, such as by touching a button on a smartphone screen. +The view then informs the presenter of the interaction, which talks to the model as needed and then tells the view what to update: + +

+ +This fixes two of MVC's problems: users don’t need to know about the intermediary module, they can interact with the view instead, and the view can be changed incrementally. + +--- +#### Exercise +Transform the code of `App.java` in the [`weather`](exercises/lecture/weather) project to use the MVP pattern. +As a first step, what will the interface of your model and view look like? +Once that's set, implement them by moving the existing code around, and think about what the presenter should look like. +
+Suggested solution (click to expand) +

+ +The model should provide a method to get the forecast, and the view should provide a method to show text and one to run the application. +Then, move the existing code into implementations of the model and the view, and write a presenter that binds them together. +See the [solution file](exercises/solutions/lecture/Weather.java) for an example. + +

+
+ +--- + +MVP does have disadvantages. +First, the view now holds state, as it is updated incrementally. This pushes more code into the view, despite one of our original goals being to have as little code in the view as possible. +Second, the interface between the view and the presenter often becomes tied to specific actions that the view can do given the context, such as a console app, and it's hard to make the view generic over many form factors. + +### Model-View-ViewModel (MVVM) + +Let's take a step back before describing the next pattern. +What is a user interface anyway? +- Data to display +- Commands to execute + +...and that's it! At a high-level, at least. + +The key idea behind MVVM is that the view should observe data changes using the Observer pattern, +and thus the intermediary module, the viewmodel, only needs to be a platform-independent user interface that exposes data, commands, +and an Observer pattern implementation to let views observe changes. + +The result is a cleanly layered system, in which the view has little code and is layered on top of the viewmodel, which holds state and itself uses the model to update its state when commands are executed: + +

+
+The view observes changes and updates itself. It can choose to display the data in any way it wants, as the viewmodel does not tell it how to update, only what to display.
+
+The view is conceptually a function of the viewmodel: it could be entirely computed from the viewmodel every time, or it could incrementally change itself as an optimization.
+This is useful for platforms such as smartphones, on which applications running in the background need to use less memory: the view can simply be destroyed, as it can be entirely re-created from the viewmodel whenever needed.
+MVVM also enables the code reuse we set out to achieve, as different platforms need different views but the same model and viewmodel, and the viewmodel contains the code that keeps track of state, thus the views are small.
+
+### Middleware
+
+You've written an app using a UI design pattern to separate your business logic and your user interface, but now you get a customer request:
+can the data be cached so that an Internet connection isn't necessary? Also, when there isn't cached data, can the app retry if it cannot connect immediately?
+
+You could put this logic in your controller, presenter, or viewmodel, but that would tie it to a specific part of your app.
+You could put it in a model, but at the cost of making that module messier as it would contain multiple orthogonal concepts.
+
+Instead, this is where the _middleware_ pattern comes in, also known as _decorator_.
+A middleware provides a layer that exposes the same interface as the layer below but adds functionality:
+

+ +A middleware can ""short circuit"" a request if it wants to answer directly instead of using the layers below. +For instance, if a cache has recent data, it could return that data without asking the layer below for the very latest data. + +One real-world example of middlewares is in [Windows file system minifilters](https://learn.microsoft.com/en-us/windows-hardware/drivers/ifs/filter-manager-concepts), +which are middlewares for storage that perform tasks such as virus detection, logging, or replication to the cloud. +This design allows programs to add their own filter in the Windows I/O stack without interfering with others. +Programs such as Google Drive do not need to know about other programs such as antiviruses. + +--- +#### Exercise +You've transformed the [`weather`](exercises/lecture/weather) project to use the MVP pattern already, now add a retrying middleware that retries until the weather is known and not `???`. +As a first step, write a middleware that wraps a model and provides the same interface as the model, without adding functionality. +Then add the retrying logic to your middleware. +
+Suggested solution (click to expand) +

+ +Your middleware needs to use the wrapped model in a loop, possibly with a limit on retries. +See the [solution file](exercises/solutions/lecture/RetryingWeather.java) for an example. + +

+
+
+
+---
+
+Beware: just because you _can_ use all kinds of patterns does not mean you _should_.
+Remember to avoid cargo cults! If ""You Aren't Gonna Need It"", don't do it.
+Otherwise you might end up with an ""implementation of AspectInstanceFactory that locates the aspect from the BeanFactory using a configured bean name"",
+just in case somebody _could_ want this flexibility.
+[Really](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/aop/config/SimpleBeanFactoryAwareAspectInstanceFactory.html)!
+
+## Summary
+
+In this lecture, you learned:
+- Abstraction and modularity, and how to use them in practice: regularity, grouping, layering, abstraction levels, and abstraction leaks
+- Tolerating defects: isolating, repairing, retrying, and replacing
+- Design patterns, and specifically common ones to decouple user interfaces, business logic, and reusable strategies: MVC, MVP, MVVM, and Middleware
+
+You can now check out the [exercises](exercises/)!
+",CS-305: Software engineering
+"# Design Patterns
+
+This document contains a curated list of common design patterns, including context and examples.
+
+_These examples are there to concisely illustrate patterns._
+_Real code would also include visibility annotations (`public`, `private`, etc.), make fields `final` if possible,_
+_provide documentation, and other improvements that would detract from the point being made by each example._
+
+
+## Adapter
+
+An _adapter_ converts an object of type `X` when it must be used with an interface that accepts only objects of type `X'`, similar but not the same as `X`.
+For instance, an app may use two libraries that both represent color with four channels, one B/G/R/A and one A/R/G/B.
+It is not possible to directly use an object from one library with the other library, and that's where you can use an _adapter_:
+an object that wraps another object and provides a different interface. 
+ +A real-world example is an electrical adapter, for instance to use a Swiss device in the United States: both plugs fundamentally do the same job, +but one needs a passive adapter to convert from one plug type to the other. + +Example: +```java +interface BgraColor { + // 0 = B, 1 = G, 2 = R, 3 = A + float getChannel(int index); +} + +interface ArgbColor { + float getA(); + float getR(); + float getG(); + float getB(); +} + +class BgraToArgbAdapter implements ArgbColor { + BgraColor wrapped; + + BgraToArgbAdapter(BgraColor wrapped) { + this.wrapped = wrapped; + } + + @Override public float getA() { return wrapped.getChannel(3); } + @Override public float getR() { return wrapped.getChannel(2); } + @Override public float getG() { return wrapped.getChannel(1); } + @Override public float getB() { return wrapped.getChannel(0); } +} +``` + + +## Builder + +A _builder_ works around the limitations of constructors when an object must be immutable yet creating it all at once is not desirable. +For instance, a `Rectangle` with arguments `width, height, borderThickness, borderColor, isBorderDotted, backgroundColor` is complex, +and a constructor with all of these arguments would make code creating a `Rectangle` hard to read. +Furthermore, some arguments logically form groups: it is not useful to specify both `borderColor` and `isBorderDotted` if one does not want a border. +Creating many rectangles that share all but a few properties is also verbose if one must re-specify all of the common properties every time. +Instead, one can create a `RectangleBuilder` object that defines property groups, uses default values for unspecified properties, +and has a `build()` method to create a `Rectangle`. +Each method defining properties returns `this` so that the builder is easier to use. + +A special case of builders is when creating an object incrementally is otherwise too expensive, as in immutable strings in many languages. 
+If one wants to create a string by appending many chunks, using the `+` operator will copy the string data over and over again, creating many intermediate strings.
+For instance, appending `[""a"", ""b"", ""c"", ""d""]` without a builder will create the intermediate strings `""ab""` and `""abc""` which will not be used later.
+In contrast, a `StringBuilder` can internally maintain a list of appended strings, and copy their data only once when building the final string.
+
+Example:
+```java
+class Rectangle {
+    public Rectangle(int width, int height, int borderThickness, Color borderColor, boolean isBorderDotted, Color backgroundColor, ...) {
+        ...
+    }
+}
+
+class RectangleBuilder {
+    // width, height are required
+    RectangleBuilder(int width, int height) { ... }
+    // optional, no border by default
+    RectangleBuilder withBorder(int thickness, Color color, boolean isDotted) { ... ; return this; }
+    // optional, no background by default
+    RectangleBuilder withBackgroundColor(Color color) { ... ; return this; }
+    // to create the rectangle
+    Rectangle build() { ... }
+}
+
+// Usage example:
+new RectangleBuilder(100, 200)
+    .withBorder(10, Colors.BLACK, true)
+    .build();
+```
+
+
+## Composite
+
+A _composite_ handles a group of objects of the same kind as a single object, through an object that exposes the same interface as each individual object.
+For instance, a building with many apartments can expose an interface similar to that of a single apartment, with operations such as ""list residents"" that compose the results
+of calling the operation on each apartment in the building.
+
+Example:
+```java
+interface FileSystemItem {
+    String getName();
+    boolean containsText(String text);
+    ...
+}
+
+class File implements FileSystemItem {
+    // implementation for a file
+}
+
+class Folder implements FileSystemItem {
+    Folder(String name, List<FileSystemItem> children) { ... 
}
+
+    // the implementation of ""containsText"" delegates to its children
+    // the children could themselves be folders, without Folder having to know or care
+}
+```
+
+
+## Facade
+
+A _facade_ hides unnecessary details of legacy or third-party code behind a clean facade.
+It's a kind of adapter whose goal is to convert a hard-to-use interface into an easy-to-use one,
+confining the problematic code to a single class instead of letting it spill into the rest of the system.
+This can be used for legacy code that will be rewritten: if the rewritten code has the same interface as the facade, the rest of the program won't need to change after the rewrite.
+
+Example:
+```java
+// Very detailed low-level classes, useful in some contexts, but all we want is to read some XML data
+class BinaryReader {
+    BinaryReader(String path) { ... }
+}
+class StreamReader {
+    StreamReader(BinaryReader reader) { ... }
+}
+class TextReader {
+    TextReader(StreamReader reader) { ... }
+}
+class XMLOptions { ... }
+class XMLReader {
+    XMLReader(TextReader reader, XMLOptions options) { ... }
+}
+class XMLDeserializer {
+    XMLDeserializer(XMLReader reader, boolean ignoreCase, ...) { ... }
+}
+
+// So we provide a facade
+class XMLParser {
+    XMLParser(String path) {
+        // ... creates a BinaryReader, then a StreamReader, ..., and uses specific parameters for XMLOptions, ignoreCase, ...
+    }
+}
+```
+
+
+## Factory
+
+A _factory_ is a function that works around the limitations of constructors by creating an object whose exact type depends on the arguments, which is not something most languages can do in a constructor.
+Thus, one creates instead a factory function whose return type is abstract and which decides what concrete type to return based on the arguments provided to the factory.
+
+Example:
+
+```java
+interface Config { ... }
+
+class XMLConfig implements Config { ... }
+
+class JSONConfig implements Config { ... }
+
+class ConfigFactory {
+    static Config getConfig(String fileName) {
+        // depending on the file, creates an XMLConfig or a JSONConfig
+    }
+}
+```
+
+
+## Middleware (a.k.a. Decorator)
+
+A _middleware_, also known as _decorator_, is a layer that exposes the same interface as the layer directly below it and adds some functionality,
+such as caching results or retrying failed requests.
+Instead of bloating an object with code that implements a reusable strategy orthogonal to the object's purpose, one can ""decorate"" it with a middleware.
+Furthermore, if there are multiple implementations of an interface, without a middleware one would need to copy-paste the reusable logic in each implementation.
+A middleware may not always use the layer below, as it can ""short circuit"" a request by answering it directly, for instance if there is recent cached data for a given request.
+
+Example:
+```java
+interface HttpClient {
+    /** Returns null on failure */
+    String get(String url);
+}
+
+// Implements HTTP 1
+class Http1Client implements HttpClient { ... }
+// Implements HTTP 2
+class Http2Client implements HttpClient { ... }
+
+class RetryingHttpClient implements HttpClient {
+    HttpClient wrapped;
+    int maxRetries;
+
+    RetryingHttpClient(HttpClient wrapped, int maxRetries) {
+        this.wrapped = wrapped;
+        this.maxRetries = maxRetries;
+    }
+
+    @Override
+    String get(String url) {
+        for (int n = 0; n < maxRetries; n++) {
+            String result = wrapped.get(url);
+            if (result != null) {
+                return result;
+            }
+        }
+        return null;
+    }
+}
+
+class CachingHttpClient implements HttpClient { ... 
} + +// one can now decorate any HttpClient with a RetryingHttpClient or a CachingHttpClient, +// and since the interface is the same, one can decorate an already-decorated object, e.g., new CachingHttpClient(new RetryingHttpClient(new Http2Client(...), 5)) +``` + + +## Null Object + +A _null object_ is a replacement for ""a lack of object"", e.g., `null`, that behaves as a ""no-op"" for all operations, which enables the rest of the code to not have to explicitly handle it. +This is useful even in languages that do not directly have `null`, such as Scala with `Option`, since sometimes one may want to run an operation on a potentially missing object +without having to explicitly handle `None` everywhere. +It's the equivalent of returning an empty list instead of `null` to indicate a lack of results: one can handle an empty list like any other list. + +Example: +```java +interface File { + boolean contains(String text); +} + +class RealFile implements File { ... } + +class NullFile implements File { + @Override + boolean contains(String text) { + return false; + } +} + +class FileSystem { + static File getFile(String path) { + // if the path doesn't exist, instead of returning `null` (or a `None` option in languages like Scala), + // return a `NullFile`, which can be used like any other `File` + } +} +``` + + +## Observer + +The _observer_ pattern lets objects be notified of events that happen in other objects, without having to poll for changes. +For instance, it would be extremely inefficient for an operating system to constantly ask the keyboard ""did the user press a key?"", since >99% of the time this is not the case. +Instead, the keyboard lets the OS ""observe"" it by registering for change notifications. +Furthermore, the keyboard can implement a generic ""input change notifier"" interface so that the OS can handle input changes without having to depend on the specifics of any input device. 
+Similarly, the OS can implement a generic ""input change observer"" interface so that the keyboard can notify the OS of changes without having to depend on the specifics of any OS.
+
+Example:
+```java
+interface ButtonObserver {
+    // Typically the object that triggered the event is an argument, so that the observer can distinguish multiple sources
+    // if it has registered to their events
+    void clicked(Button source);
+}
+
+class Button {
+    void registerForClicks(ButtonObserver observer) {
+        // handles a list of all observers
+    }
+    // optionally, could provide a way to remove an observer
+}
+
+// One can now ask any Button to notify us when it is clicked
+// In fact, the Button may implement this by itself observing a lower-level layer, e.g., the mouse, and reporting only relevant events
+```
+
+
+## Pool
+
+A _pool_ keeps a set of reusable objects to amortize the cost of creating these objects, typically because their creation involves some expensive operation in terms of performance.
+For instance, connecting to a remote server is slow as it requires multiple round-trips to perform handshakes, cryptographic key exchanges, and so on;
+a connection pool can lower this cost by reusing an existing connection that another part of the program used but no longer needs.
+As another example, language runtimes keep a pool of free memory that can be used whenever the program allocates memory, instead of asking the operating system for memory each time,
+which is slow because it involves a system call. The pool still needs to perform a system call to get more memory once it's empty, but that should happen rarely.
+
+The pool pattern is typically only necessary for advanced performance optimizations.
+
+Example:
+```java
+class ExpensiveThing {
+    ExpensiveThing() {
+        // ... some costly operation, e.g., opening a connection to a server ...
+    }
+}
+
+class ExpensiveThingPool {
+    private Set<ExpensiveThing> objects;
+
+    ExpensiveThing get() {
+        // ... return an existing instance if `objects` contains one, or create one if the pool is empty ...
+    }
+    // Warning: ""releasing"" the same thing twice is dangerous!
+    void release(ExpensiveThing thing) { ... }
+}
+
+// Instead of `new ExpensiveThing()`, one can now use a pool
+```
+
+
+## Singleton
+
+A _singleton_ is a global variable by another name, which is typically either a bad idea or a workaround for the limitations of third-party code.
+A singleton is an object of which there is only one instance, publicly accessible by any other code.
+
+There are advanced cases in which a singleton might be justified, such as in combination with a pool to share a pool across libraries, but it should generally be avoided.
+
+
+## Strategy
+
+A _strategy_ lets the callers of a function configure part of the function's logic by passing code as a parameter.
+For instance, a sorting method needs a way to compare two elements, but the same type of elements might be compared in different ways based on context, such as ascending or descending for integers.
+The sorting method can take as argument a function that compares two elements, and then call this function whenever it needs a comparison.
+Thus, the sorting method only cares about implementing an efficient sorting algorithm given a way to compare, and does not hardcode a specific kind of comparison.
+This also ensures the sorting method does not need to depend on higher-level modules to know how to sort its input, such as what kind of object is being sorted, since the strategy takes care of that.
+
+Example:
+```java
+// (De)serializes objects
+interface Serializer { ... }
+
+// Persistent cache for objects, which stores objects on disk
+// Uses a ""serializer"" strategy since depending on context one may want different serialization formats, or perhaps even encryption for sensitive data
+class PersistentCache {
+    PersistentCache(Serializer serializer) { ... 
}
+}
+```
+
+
+## MVC: Model-View-Controller
+
+A _controller_ is an object that handles user requests, using a _model_ internally, then creates a _view_ that is rendered to the user.
+MVC is appropriate when the user interacts with the controller directly, e.g., through an HTTP request.
+MVC can decouple user interface code and business logic, making them more maintainable, reusable, and testable.
+
+Example:
+```java
+// Model
+class WeatherForecast {
+    WeatherForecast(...) { ... }
+
+    int getTemperature(...) { ... }
+}
+
+// View
+// Could be HTML as in the example here, but we could also add a `WeatherView` interface and multiple types of views,
+// such as a JSON one for automated requests (using the HTTP ""Accept"" header to know what view format the user wants)
+class HtmlWeatherView {
+    HtmlWeatherView(int temperature, ...) { ... }
+
+    String toString() { ... }
+}
+
+// Controller
+class WeatherController {
+    WeatherForecast forecast;
+
+    WeatherController(...) {
+        // The controller could create its own WeatherForecast object, or have it as a dependency, likely with an interface for it to make it testable
+        forecast = ...;
+    }
+
+    HtmlWeatherView get(...) {
+        int temperature = forecast.getTemperature(...);
+        return new HtmlWeatherView(temperature, ...);
+    }
+}
+
+// In general, one would use a framework that can be configured to know which HTTP paths correspond to which method on which controller object,
+// but this can be done manually as well:
+System.out.println(new WeatherController(...).get(...).toString());
+```
+
+
+## MVP: Model-View-Presenter
+
+A _presenter_ is an object that is used by a _view_ to respond to user commands, and which uses a _model_ internally, updating the view with the results.
+MVP is appropriate when the user interacts with the view, such as in a mobile app, and has the same goal as MVC: make code more maintainable, reusable, and testable.
+
+Example:
+```java
+// Model
+class WeatherForecast {
+    WeatherForecast(...) { ... 
} + + int getTemperature(...) { ... } +} + +// View +// Could have an interface if a single Presenter can use multiple Views, e.g., for testing, or for multiple app form factors +class WeatherView { + WeatherPresenter presenter; + + WeatherView(WeatherPresenter presenter) { + this.presenter = presenter; + presenter.setView(this); + // ... configure the UI framework to call `onClick` when the user clicks + } + + void start() { /* ... display the user interface ... */ } + + void onClick(...) { presenter.showTemperature(); } + + void showTemperature(int temperature) { + // ... displays `temperature` ... + } +} + +// Presenter +class WeatherPresenter { + WeatherForecast forecast; + WeatherView view; + + WeatherPresenter(...) { + // Same remark as the MVC example concerning injection + forecast = ...; + } + + void setView(WeatherView view) { this.view = view; } + + void showTemperature() { + int temperature = forecast.getTemperature(...); + view.showTemperature(temperature); + } +} + +// Usage example: +new WeatherView(new WeatherPresenter(...)).start(); + +``` + + +## MVVM: Model-View-ViewModel + +A _viewmodel_ is a platform-independent user interface, which defines data, events for data changes using the _observer_ pattern, and commands. +The viewmodel internally uses a _model_ to implement operations, and a _view_ can be layered on top of the viewmodel to display the data and interact with the user. +MVVM is an evolution of MVP which avoids keeping state in the view, and which emphasizes the idea of a platform-independent user interface, +instead of making the presenter/view interface match a specific kind of user interface such as a console app. + +Example: +```java +// Model +class WeatherForecast { + WeatherForecast(...) { ... } + + int getTemperature(...) { ... 
}
+}
+
+// View
+// There is no need for an interface, because the viewmodel does not interact with the view,
+// thus there can be many different views for a viewmodel without any shared interface between them
+class WeatherView {
+    WeatherViewModel viewModel;
+
+    WeatherView(WeatherViewModel viewModel) {
+        this.viewModel = viewModel;
+        viewModel.registerForTemperatureChanges(this::showTemperature);
+        // ... configure the UI framework to call `onClick` when the user clicks
+    }
+
+    void start() { /* ... display the user interface ... */ }
+
+    void onClick(...) { viewModel.updateTemperature(); }
+
+    void showTemperature() {
+        // ... displays `this.viewModel.getTemperature()` ...
+        // (or the viewmodel could pass the temperature as an argument directly)
+    }
+}
+
+// ViewModel
+class WeatherViewModel {
+    // No reference to a view!
+    // Only an observer pattern enabling anyone (views, but also, e.g., unit tests) to register for changes
+
+    WeatherForecast forecast;
+    int temperature;
+    Runnable temperatureCallback;
+
+    WeatherViewModel(...) {
+        // Same remark as the MVC example concerning injection
+        forecast = ...;
+    }
+
+    // Data
+    int getTemperature() { return temperature; }
+
+    // Data change event
+    void registerForTemperatureChanges(Runnable action) { this.temperatureCallback = action; }
+
+    // Command
+    void updateTemperature() {
+        // In a real app, this operation would likely be asynchronous, and
+        // the viewmodel could perhaps have an ""isLoading"" property enabling views to show a progress bar
+        int temperature = forecast.getTemperature(...);
+        this.temperature = temperature;
+        if (temperatureCallback != null) { temperatureCallback.run(); }
+    }
+}
+
+// Usage example:
+new WeatherView(new WeatherViewModel(...)).start();
+```
+",CS-305: Software engineering
+"# Teamwork
+
+Working on your own typically means engineering a small application, such as a calculator. 
+To design bigger systems, teams are needed, including not only engineers but also designers, managers, customer representatives, and so on. +There are different kinds of tasks to do, which need to be sub-divided and assigned to people: requirements elicitation, design, implementation, verification, maintenance, and so on. + +This lecture is all about teamwork: who does what when and why? + + +## Objectives + +After this lecture, you should be able to: +- Contrast different software development methodologies +- Apply the Scrum methodology in a software development team +- Divide tasks within a development team +- Produce effective code reviews + + +## What methodologies exist to organize a project? + +We will see different methodologies, but let's start with one you may have already followed without giving it a name, _waterfall_. + +### Waterfall + +In Waterfall, the team completes each step of the development process in sequence. +Waterfall is named that way because water goes down, never back up. First requirements are written down, then the software is designed, then it is implemented, +then it is tested, then it is released and maintained. +Once a step is finished, its output is considered done and can no longer be modified, then the next step begins. +There is a clear deadline for each task, and a specific goal, such as documents containing requirements or a codebase implementing the design. + +Waterfall is useful for projects that have a well-defined goal which is unlikely to change, typically due to external factors such as legal frameworks or externally-imposed deadlines. +Projects that use Waterfall get early validation of their requirements, and the team is forced to document the project thoroughly during design. + +However, Waterfall is not a good fit if customer requirements aren't set in stone, for instance because the customers might change their mind, or because the target customers aren't even well-defined. 
+The lack of flexibility can also result in inefficiencies, since steps must be completed regardless of whether their output is actually useful.
+Waterfall projects also delay the validation of the product itself until the release step, which might lead to wasted work if the software does not match what the customers expected.
+
+Typically, a project might use Waterfall if there are clear requirements that cannot change, using mature technologies that won't cause surprises along the way,
+with a team that may not have enough experience to take decisions on its own.
+
+### Agile
+
+Agile is not a methodology by itself but a mindset born in reaction to Waterfall's rigidity and formality, which are not a good fit for many teams including startups.
+The [Agile Manifesto](https://agilemanifesto.org/) emphasizes individuals and interactions, working software, customer collaboration, and responding to change,
+over processes and tools, comprehensive documentation, contract negotiation, and following a plan.
+
+In practice, Agile methods are all about iterative development in a way that lets the team get frequent feedback from customers and adjust as needed.
+
+### Scrum
+
+Scrum is an Agile method that is all about developing projects in _increments_ during _sprints_.
+Scrum projects are a succession of fixed-length sprints that each produce a functional increment, i.e., something the customers can try and give feedback on.
+Sprints are usually a few weeks long; the team chooses at the start how long sprints will be, and that duration remains constant throughout the project.
+At the start of each sprint, the team assigns to each member tasks to complete for that sprint, and at the end the team meets with the customer to demo the increment.
+
+Scrum teams are multi-disciplinary, have no hierarchy, and should be small, i.e., 10 people or fewer.
+In addition to typical roles such as engineer and designer, there are two Scrum-specific roles: the ""Scrum Master"" and the ""Product Owner"".
+
+The Scrum Master facilitates the team's work and checks in with everyone to make sure the team is on pace to deliver the expected increment.
+The Scrum Master is _not_ a manager: they do not decide who does what. In general, a developer takes on the extra role of Scrum Master.
+
+The Product Owner is an internal representative for the customers, who formalizes and prioritizes requirements and converts them into a ""Product Backlog"" of items for the development team.
+The Product Backlog is a _sorted_ list of items, i.e., the most important are at the top. It contains both user stories and bugs.
+Because it is constantly sorted, new items might be inserted in any position depending on their priority, and the bottom-most items will likely never get done, because they are not important enough.
+For instance, the following items might be on a backlog:
+- ""As an admin, I want to add a welcome message on the main page, so that I can keep my users informed""
+- ""Bug: Impossible to connect if the user name has non-ASCII characters""
+- ""As a player, I want to chat with my team in a private chat, so that I can discuss strategies with my team""
+
+To plan a sprint, the team starts by taking the topmost item in the Product Backlog and moving it to the ""Sprint Backlog"", which is the list of items they expect to complete in a sprint.
+The team then divides the item into development tasks, such as ""add a UI to set the welcome message"", ""return the welcome message as part of the backend API"", and ""show the welcome message in the app"".
+The team assigns each task a time estimate, dependent on its complexity, a ""definition of done"", which will be used to know when the task is finished, and a developer to implement the task.
+The ""definition of done"" represents specific expectations from the team, so that the person to which the task is assigned knows what they have to do. +It can for instance represent a user scenario: ""an admin should be able to go to the settings page and write a welcome message, which must be persisted in the database"". +This avoids misunderstandings between developers, e.g., Alice thought saving the message to the database was part of Bob's task and Bob thought Alice would do it. +Tasks implicitly contain testing: whoever writes or modifies a piece of code is in charge of testing it, though the team might make specific decisions on specific kind of tests or test scenarios. + +During a sprint, the team members each work on their own tasks, and have a ""Daily Scrum Meeting"" at the start of each day. +The Daily Scrum Meeting is _short_: it should last at most 15 minutes, preferably much less, and should only be attended by the development team including the Scrum Master, +not by the Product Owner nor by any customer or other person. +This meeting is also known as a ""standup"" meeting because it should be short enough that the team does not need to formally get a room and sit down. +The Daily Scrum Meeting consists of each team member explaining what they've done in the previous day, what they plan to do this day, and whether they are blocked for any reason. +Any such ""blockers"" can then be discussed _after_ the meeting is over, with only the relevant people. +This way, all team members know each other's status, but they do not need to sit through meetings that are not relevant to them. + +Any bugs the team finds during the sprint should be either fixed on the spot if they are small enough, or reported to the Product Owner if they need more thought. +The Product Owner will then prioritize these bugs in the Product Backlog. 
+It is entirely normal that some bugs may stay for a while in the backlog, or even never be fixed, if they are not considered important enough compared to other things the team could spend time on.
+
+Importantly, the Sprint Backlog cannot be modified during a sprint.
+Once the team has committed to deliver a specific increment, it works only on that increment, in a small-scale version of Waterfall.
+If a customer has an idea for a change, they communicate it to the Product Owner, who inserts it at the appropriate position in the Product Backlog once the sprint is over.
+
+Once the sprint is over, the team demos the resulting increment to the customer and any relevant stakeholders as part of a ""Sprint Review"" to get feedback,
+which can then be used by the Product Owner to add, remove, or edit Product Backlog items.
+The team then performs a ""Sprint Retrospective"" without the customer to discuss the development process itself.
+Once the Review and Retrospective are done, the team plans the next sprint and executes it, and the process starts anew.
+
+Scrum does not require all requirements to be known upfront, unlike Waterfall, which gives it flexibility.
+The team can change direction during development, since each sprint is an occasion to get customer feedback and act on it.
+The product can thus be validated often with the customer, which helps avoid building the wrong thing.
+
+However, Scrum does not impose any specific deadlines for the final product, and requires the existence of customers,
+or at least of someone who can play the role of customer if the exact customers are not yet known.
+It also does not fit well with micro-managers or specific external deadlines, since the team is in charge of its own direction.
+
+### Other methodologies
+
+There are plenty of other software development methodologies we will not talk about in depth.
+For instance, the ""[V Model](https://en.wikipedia.org/wiki/V-Model)"" tries to represent Waterfall with more connections between design and testing, +and the ""[Spiral model](https://en.wikipedia.org/wiki/Spiral_model)"" is designed to minimize risks. +[Kanban](https://en.wikipedia.org/wiki/Kanban_(development)) is an interesting methodology that essentially takes Scrum to its logical extreme, +centered on a board with tasks in various states that start from a sorted backlog and end in a ""done"" column. + + +## How can one effectively work in a team? + +The overall workflow of an engineer in a team is straightforward: create a branch in the codebase, work on it, +then iterate on the work with feedback from the team before integrating the work in the codebase. +This raises many questions. How can one form a team? How to be a ""good"" team member? How to divide tasks among team members? + +Team formation depends on development methodologies. +In Scrum, teams are multi-disciplinary, i.e., there is no ""user interface team"" or ""database team"" but rather teams focused on an overall end-to-end product that include diverse specialists. +Scrum encourages ""two-pizza"" teams, i.e., teams that could be fed with two large pizzas, so 4-8 people. + +Being a good team member requires three skills: communication, communication, and communication. +Do you need help because you can't find a solution to a problem? Are you blocked because of factors outside of your control? Communicate! +There are no winners on a losing team. It is not useful to write ""perfect"" code in isolation if it does not integrate with the rest of the team's code, +or if there are other more important tasks to be done than the code itself, such as helping a teammate. +A bug is never due to a single team member, since it implies the people who reviewed the code also made mistakes by not spotting the bug. 
+Do not try to assign blame within the team for a problem; instead, communicate in a way that frames the situation as the team vs. the problem.
+
+Dividing tasks within a team is unfortunately more of an art than a science, and thus requires practice to get right.
+
+If the tasks are too small, the overhead of each task's fixed costs is too high.
+If the tasks are too big, planning is hard because there are too many unknowns per task.
+One heuristic for task size is to think in terms of code review: what will the code for the task roughly look like, and how easy will that be to review?
+
+If the tasks are estimated to take more time than they really need, the team will run out of work to do and will need to plan again.
+If the tasks are estimated to take too little time, the team will not honor its deadlines.
+Estimating the complexity of a task is difficult and comes with experience.
+One way to do it is with ""planning poker"", in which team members privately write down their time estimates, then everyone reveals their estimate at once and the team discusses the results,
+repeating the process until all team members independently agree.
+
+If the tasks are assigned to the wrong people, the team may not finish them on time because members have to spend too much time doing things they are not familiar with or do not enjoy doing.
+One key heuristic to divide tasks is to maximize parallelism while minimizing dependencies and interactions.
+If two people have to constantly meet in order to work, perhaps their tasks could be split differently so they need to meet less.
+Typically, a single task should be assigned to and doable by a single person.
+
+An important concept when assigning work within a team is the ""bus factor"": how many team members can be hit by a bus before the team can no longer continue working?
+This is a rather morbid way to look at it; one can also think of vacations, illnesses, or personal emergencies.
+Many teams have a bus factor of 1, because there is at least one member who is the only person with knowledge of some important task, password, or code. +If this member leaves the team, gets sick, or is in any way incapacitated, the team grinds to a halt because they can no longer perform key tasks. +Thus, tasks should be assigned such that nobody is the only person who ever does specific key tasks. + +One common mistake is to assign tasks in terms of application layers rather than end-to-end functionality. +For instance, the entirety of the database work for a sprint in Scrum could be assigned to one person, the entirety of the UI work to another, and so on. +However, if the database person cannot do their work, for instance because of illness, then no matter what other team members do, nothing will work end-to-end. +Furthermore, if the same people are continuously assigned to the same layers, the bus factor becomes 1. +Instead, team members should be assigned to features, such as one person in charge of user login, one in charge of informational messages, and one in charge of chat. + +Finally, another common mistake is to make unrealistic assumptions regarding time: everyone will finish on time, and little time is necessary to integrate the code from all tasks into the codebase. +Realistically, some tasks will always be late due to incorrect estimations, illnesses, or other external factors. +Integrating the code from all tasks also takes time and may require rework, because engineers may realize that they misunderstood each other and that they did not produce compatible code. +It is necessary to plan for more time than the ""ideal"" time per task. + + +## How can one produce a useful code review? + +Code reviews have multiple goals. +The most obvious one is to review the code for bugs, to stop bugs getting into the main branch of the codebase. 
+Reviewers can also propose alternative solutions that may be more maintainable than the one proposed by the code author. + +Code review is also a good time for team members to learn about codebase changes. +In 2013, [Bacchelli and Bird](https://dl.acm.org/doi/10.5555/2486788.2486882) found that knowledge transfer and shared code ownership were important reasons developers did code reviews in practice, +right behind finding defects, improving the code, and proposing alternative solutions. + +Code reviews are cooperative, not adversarial. The point is not to try and find possible ""backdoors"" a colleague might have inserted; +if you have a malicious colleague, code reviews are not the tool to deal with the problem. +That is, unless your ""colleague"" is not a colleague but a random person on the Internet suggesting a change to your open source software, at which point you need to be more careful. + +### Team standards and tools + +Each team must decide the standards with which it will review code, such as naming and formatting conventions and expected test coverage. +Automate as much as possible: do not ask reviewers to check compliance with a specific naming convention if a static analyzer can do it, +or to check the code coverage if a tool can run the tests and report the coverage. + +### From the author's side + +To maximize the usefulness of the reviews you get as a code author, first review the code yourself to avoid wasting people's time with issues you could've found yourself, +and then choose appropriate reviewers. +For instance, you may ask a person who has worked for a while on that part of the codebase to chime in, as well as an expert on a specific subject such as security. +Make it clear what you expect from the reviewers. Is this a ""draft"" pull request that you might heavily change? Is it a fix that must go in urgently and thus should get reviews as soon as possible? 
+
+You also want to give reviewers a reasonable amount of code to review, ideally a few hundred lines of code at most.
+It's perfectly acceptable to open multiple pull requests in parallel for independent features, or to open pull requests sequentially for self-contained chunks of a single feature.
+
+### From the reviewer's side
+
+Skim the code in its entirety first to understand what is going on, then read it in detail with specific goals in mind, adding comments as you go. Finally, make a decision:
+does the code need changes or should it be merged as-is? If you request changes, perform another review once these are done.
+Since you might do multiple reviews, don't bother pointing out small issues if you are going to ask for major changes anyway.
+Sometimes you may also want to merge the code yet create a bug report for small fixes that should be performed later, if merging the code is important to unblock someone else.
+
+Evaluate the code in terms of correctness, design, maintainability, readability, and any other bar you think is important.
+For instance, do you think another developer could easily pick up the code and evolve it? If not, you likely want to explain why and what could be done to improve this aspect.
+
+When writing a comment, categorize it: are you requesting an important change? Is it a small ""nitpick""? Is it merely a question for your own understanding?
+For instance, you could write ""Important: this bound should be N+1, not N, because..."" or ""Question: could this code use the existing function X instead of including its own logic to do Y?"".
+Be explicit: make it clear whether you are actually requesting a change, or merely doing some public brainstorming for potential future changes.
+Pick your battles: sometimes you may personally prefer one way to do it, but still accept what the author did instead of asking for small changes that don't really matter in the big picture.
+ +Remember to comment on _the code_, not _the person who wrote the code_. ""Your code is insecure"" is unnecessarily personal; ""this method is insecure"" avoids this problem. + +If you do not perform a thorough review of all of the code, specify what you did. It's perfectly fine to only check changes to code you already know, +or to not be confident in evaluating specific aspects such as security or accessibility, but you must make this clear so that the code author does not get the wrong idea. + +There are plenty of guidelines available on the Internet that you might find useful, such as [Google's](https://google.github.io/eng-practices/review/reviewer/). + + +## Summary + +In this lecture, you learned: +- Development methodologies, including Waterfall and especially Scrum +- Dividing tasks within a team to maximize productivity and minimize conflicts +- What code reviews are for and how to write one + +You can now check out the [exercises](exercises/)! +",CS-305: Software engineering +"# Testing + +> **Prerequisite**: Before following this lecture, you should make sure you can build and run the [sample project](exercises/sample-project). + +It's tempting to think that testing is not necessary if one ""just"" writes correct code from the first try and double-checks the code before running it. + +But in practice, this does not work. Humans make mistakes all the time. +Even [Ada Lovelace](https://en.wikipedia.org/wiki/Ada_Lovelace), who wrote a correct algorithm to compute Bernoulli numbers +for [Charles Babbage](https://en.wikipedia.org/wiki/Charles_Babbage)'s ""[Analytical Engine](https://en.wikipedia.org/wiki/Analytical_Engine)"", +made a typo by switching two variables in the code transcription of her algorithm. +And she had plenty of time to double-check it, since the Analytical Engine was a proposed design by Babbage that was not actually implemented! +The ""first program ever"" already contained a typo. 
+ +One modern option is computer-aided verification, but it requires lots of time. +If Ada Lovelace had lived in the 21st century, she could have written a proof and gotten a computer to check it, ensuring that the proof is correct as long as the prover program is itself correct. +This can work in practice, but currently at the cost of high developer effort. +The [seL4 operating system kernel](https://sel4.systems/), for instance, required 200,000 lines of proof for its 10,000 lines of code +(Klein et al., [""seL4: formal verification of an OS kernel""](https://dl.acm.org/doi/10.1145/1629575.1629596)). +Such a method might have worked for Ada Lovelace, an aristocrat with plenty of free time, but is not realistic yet for everyday programmers. + +Another modern option is to let users do the work, in a ""beta"" or ""early access"" release. +Users get to use a program ahead of everyone else, at the cost of encountering bugs and reporting them, effectively making them testers. +However, this only works if the program is interesting enough, such as a game, yet most programs out there are designed as internal tools for small audiences that are unlikely to want to beta test. +Furthermore, it does not eliminate bugs entirely either. Amazon's ""New World"" game, despite having an ""Open Beta"" period, +[released](https://www.denofgeek.com/games/new-world-bugs-glitches-exploits-list-cyberpunk-2077/) with many glitches including a 7-day delay before respawning. + +Do we even need tests in the first place? What's the worst that could happen with a bug? +In some scenarios, the worst is not that bad comparatively, such as a bug in an online game. +But imagine a bug in the course registration system of a university, leaving students wondering whether they are signed up to a course or not. +Worse, a bug in a bank could make money appear or disappear at random. 
+Even worse, bugs can be lethal, as in the [Therac-25 radiation therapy machine](https://en.wikipedia.org/wiki/Therac-25) which killed some patients. + + +## Objectives + +After this lecture, you should be able to: + +- Understand the basics of _automated testing_ +- Evaluate tests with _code coverage_ +- Identify _when_ to write which tests +- _Adapt_ code to enable fine-grained testing + + +## What is a test? + +Testing is, at its core, three steps: + +1. Set up the system +2. Perform an action +3. Check the outcome + +If the outcome is the one we expect, we gain confidence that the system does the right thing. +However, ""confidence"" is not a guarantee. +As [Edsger W. Dijkstra](https://en.wikipedia.org/wiki/Edsger_W._Dijkstra) once said, ""_Program testing can be used to show the presence of bugs, but never to show their absence!_"". + +The simplest way to test is manual testing. +A human manually performs the workflow above. +This has the advantage of being easy, since one only has to perform the actions that would be typically expected of users anyway. +It also allows for a degree of subjectivity: the outcome must ""look right"", but one does not need to formally define what ""looking right"" means. + +However, manual testing has many disadvantages. +It is slow: imagine manually performing a hundred tests. +It is also error-prone: the odds increase with each test that a human will forget to perform one step, or perform a step incorrectly, or not notice that something is wrong. +The subjectivity of manual testing is also often a disadvantage: two people may not agree on exactly what ""right"" or ""wrong"" is for a given test. +Finally, it also makes it hard to test edge cases. Imagine testing that a weather app correctly shows snowfalls when they occur if it's currently sunny outside your home. + +To avoid the issues of manual testing, we will focus on automated testing. +The workflow is fundamentally the same, but automated: + +1. Code sets up the system +2. 
Code performs an action
+3. Code checks the outcome
+
+These steps are commonly known as ""Arrange"", ""Act"", and ""Assert"".
+
+Automated testing can be done quickly, since computers are much faster than humans, and do not forget steps or randomly make mistakes.
+This does not mean automated tests are always correct: if the code describing the test is wrong, then the test result is meaningless.
+Automated testing is also more objective: the person writing the test knows exactly what will be tested.
+Finally, it makes testing edge cases possible by programmatically faking the environment of the system under test, such as the weather forecast server for a weather app.
+
+There are other benefits to automated testing too: tests can be written once and used forever, everywhere, even on different implementations.
+For instance, the [CommonMark specification](https://spec.commonmark.org/) for Markdown parsers includes many examples that are used as tests, allowing anyone to use these tests to check their own parser.
+If someone notices a bug in their parser that was not covered by the standard tests, they can suggest a test that covers this bug for the next version of the specification.
+This test can then be used by everyone else.
+The number of tests grows and grows with time, and can reach enormous amounts such as [the SQLite test suite](https://www.sqlite.org/testing.html), which currently has over 90 million lines of tests.
+
+On the flip side, automated testing is harder than manual testing because one needs to spend time writing the test code, which includes a formal definition of what the ""right"" behavior is.
+
+
+## How does one write automated tests?
+
+We will use Java as an example, but automated testing works the same way in most languages.
+
+The key idea is that each test is a Java method, and a test failure is indicated by the method throwing an exception.
+If the method does not throw exceptions, then the test passes.
+
+One way to do it using Java's built-in concepts is the following:
+
+```java
+void test1plus1() {
+    assert add(1, 1) == 2;
+}
+```
+
+If `add(1, 1)` returns `2`, then the assertion does nothing, the method finishes, and the test is considered to pass.
+But if it returns some other number, the assertion throws an `AssertionError`, which is a kind of exception, and the test is considered to fail.
+
+...or, at least, that's how it should be, but [Java asserts are disabled by default](https://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html), unfortunately.
+So this method does absolutely nothing unless the person running the test remembers to enable assertions.
+
+One could mimic the `assert` statement with an `if` and a `throw`:
+
+```java
+void test1plus1() {
+    if (add(1, 1) != 2) {
+        throw new AssertionError();
+    }
+}
+```
+
+This is a working implementation of a test, which we could run from a `main` method.
+However, if the test fails, there is no error message, since we did not put one when creating the `AssertionError`.
+For instance, if the test fails, what did `add(1, 1)` actually return? It would be good to know this.
+We could write code to store the result in a variable, test against that variable, and then create a message for the exception including that variable.
+Or we could use [JUnit](https://junit.org/junit5/) to do it for us:
+
+```java
+@Test
+void test1plus1() {
+    assertEquals(add(1, 1), 2);
+}
+```
+
+JUnit finds all methods annotated with `@Test` and runs them, freeing us from the need to write the code to do it ourselves,
+and throws exceptions whose message includes the ""expected"" and the ""actual"" value.
+
+...wait, did we do that right? Should we have put the ""expected"" value first? It's hard to remember.
+And even if we do that part right, it's hard to make assertion messages useful for tests such as ""this list should either be empty or contain `[1, 2, 3]`"".
+We can write code to check that, but if the test fails we will get ""expected `true`, but was `false`"", which is not useful.
+
+Instead, let's use [Hamcrest](https://hamcrest.org/JavaHamcrest/) to write our assertions on top of JUnit:
+
+```java
+@Test
+void test1plus1() {
+    assertThat(add(1, 1), is(2));
+}
+```
+
+This is much clearer! The `is` part is a Hamcrest ""matcher"", which describes what value is expected. `is` is the simplest one, matching exactly one value, but we can use fancier ones:
+
+```java
+List values = ...;
+
+assertThat(values,
+    either(empty())
+        .or(contains(1, 2, 3)));
+```
+
+If this assertion fails, Hamcrest's exception message states ""Expected: (an empty collection or iterable containing `[<1>, <2>, <3>]`) but: was `<[42]>`"".
+
+Sometimes we need to test that a piece of code _fails_ in some circumstances, such as validating arguments properly and throwing an exception if an argument has an invalid value.
+This is what `assertThrows` is for:
+
+```java
+var ex = assertThrows(
+    SomeException.class,
+    () -> someOperation(42)
+);
+
+// ... test 'ex'...
+```
+
+The first argument is the type of exception we expect, and the second is a function that should throw that type of exception.
+If the function does not throw an exception, or throws an exception of another type, `assertThrows` will throw an exception to indicate the test failed.
+If the function does throw an exception of the right type, `assertThrows` returns that exception so that we can test it further if needed, such as asserting some fact about its message.
+
+---
+#### Exercise
+It's your turn now! Open [the in-lecture exercise project](exercises/lecture) and test `Functions.java`.
+Start by testing valid values for `fibonacci`, then test that it rejects invalid values.
+For `split` and `shuffle`, remember that Hamcrest has many matchers and has documentation.
+<details>
+<summary>Example solution (click to expand)</summary>
+
+You could test `fibonacci` using the `is` matcher we discussed earlier for numbers such as 1 and 10, and test that it throws an exception for numbers below `0` using `assertThrows`.
+
+To test `split`, you could use Hamcrest's `contains` matcher, and for the shuffling function, you could use `arrayContainingInAnyOrder`.
+
+We provide some [examples](exercises/solutions/lecture/FunctionsTests.java).
+
+</details>
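If you want to try such tests without setting up JUnit and Hamcrest, here is the same idea in the plain `if`/`throw` style shown earlier in this lecture. The `fibonacci` implementation below is an invented stand-in so that the example is self-contained; the real `Functions.java` may define the function differently.

```java
public class FibonacciTests {
    // Hypothetical stand-in for Functions.fibonacci: returns the n-th Fibonacci number
    // (0, 1, 1, 2, 3, 5, ...) and rejects negative inputs, as the exercise asks.
    static int fibonacci(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be >= 0, got " + n);
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) { int next = a + b; a = b; b = next; }
        return a;
    }

    static void testSmallValues() {
        if (fibonacci(1) != 1) throw new AssertionError("fibonacci(1) should be 1");
        if (fibonacci(10) != 55) throw new AssertionError("fibonacci(10) should be 55");
    }

    static void testRejectsNegatives() {
        // Plain-Java equivalent of assertThrows: the call must throw, otherwise the test fails.
        try {
            fibonacci(-1);
            throw new AssertionError("fibonacci(-1) should have thrown");
        } catch (IllegalArgumentException expected) {
            // The exception we wanted: the test passes.
        }
    }

    public static void main(String[] args) {
        testSmallValues();
        testRejectsNegatives();
        System.out.println("All tests passed"); // prints "All tests passed"
    }
}
```

Compare this with the JUnit/Hamcrest version: the logic is identical, but the framework gives you test discovery and readable failure messages for free.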
+
+---
+
+**Should you test many things in one method, or have many small test methods?**
+Think of what the tests' output will look like if you combine many tests in one method.
+If the test method fails, you will only get one exception message about the first failure in the method, and will not know whether the rest of the test method would pass.
+Having big test methods also means the fraction of passing tests is less representative of the overall code correctness.
+In the extreme, if you wrote all assertions in a single method, a single bug in your code would lead to 0% of tests passing.
+Thus, you should prefer small test methods that each test one ""logical"" concept, which may need one or multiple assertions.
+This does not mean one should copy-paste large blocks of code between tests; instead, share code using features such as JUnit's `@BeforeAll`, `@AfterAll`, `@BeforeEach`, and `@AfterEach` annotations.
+
+**How can you test private methods?** You **don't**.
+Otherwise the tests must be rewritten every time the implementation changes.
+Think back to the SQLite example: the code would be impossible to change if any change in implementation details required modifying even a fraction of the 90 million lines of tests.
+
+**What standards should you have for test code?**
+The same as for the rest of the code.
+Test code should be in the same version control repository as other code, and should be reviewed just like other code when making changes.
+This also means tests should have proper names, not `test1` or `testFeatureWorks` but specific names that give information in an overview of tests, such as `nameCanIncludeThaiCharacters`.
+Avoid names that use vague descriptions such as ""correctly"", ""works"", or ""valid"".
+
+
+## What metric can one use to evaluate tests?
+
+What makes a good test?
+When reviewing a code change, how does one know whether the existing tests are enough, or whether there should be more or fewer tests?
When reviewing a test, how does one know if it is useful?

There are many ways to evaluate tests; we will focus here on the most common one, _coverage_.
Test coverage is defined as the fraction of code executed by tests compared to the total amount of code.
Without tests, it is 0%. With tests that execute each part of the code at least once, it is 100%.
But what is a "part of the code"? What should be the exact metric for coverage?

One naïve way to do it is _line_ coverage. Consider this example:

```java
int getFee(Package pkg) {
    if (pkg == null) throw ...;
    int fee = 10;
    if (pkg.isHeavy()) fee += 10;
    if (pkg.isInternational()) fee *= 2;
    return fee;
}
```

A single test with a non-null package that is both heavy and international will cover all lines.
This may sound great since the coverage is 100% and easy to obtain, but it is not.
If the `throw` was on a different line instead of being on the same line as the `if`, line coverage would no longer be 100%.
It is not a good idea to define a metric for coverage that depends on code formatting.

Instead, the simplest metric for test coverage is _statement_ coverage.
In our example, the `throw` statement is not covered but all others are, and this does not change based on code formatting.
Still, reaching almost 100% statement coverage based on a single test for the code above seems wrong.
There are three `if` statements, indicating the code performs different actions based on a condition, yet we ignored the implicit `else` blocks in those ifs.

A more advanced form of coverage is _branch_ coverage: the fraction of branch choices that are covered.
For each branch, such as an `if` statement, 100% branch coverage requires covering both choices.
In the code above, branch coverage for our single example test is 50%: we have covered exactly half of the choices.
+ +Reaching 100% can be done with two additional tests: one null package, and one package that is neither heavy nor international. + +But let us take a step back for a moment and think about what our example code can do: + +
*(Figure: the execution paths of the `getFee` function above.)*
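To make the paths concrete, here is a self-contained variant of the fee function. The `Pkg` record and the `IllegalArgumentException` stand in for the elided details of the original example, and plain checks play the role of tests, one per possible execution:

```java
class FeePaths {
    // Stand-in for the Package type from the example above.
    record Pkg(boolean heavy, boolean international) { }

    // Same logic as getFee above; the exception type is an assumption.
    static int getFee(Pkg pkg) {
        if (pkg == null) throw new IllegalArgumentException("pkg must not be null");
        int fee = 10;
        if (pkg.heavy()) fee += 10;
        if (pkg.international()) fee *= 2;
        return fee;
    }

    static void check(boolean cond) { if (!cond) throw new AssertionError(); }

    public static void main(String[] args) {
        // Path 1: null package -> exception
        boolean threw = false;
        try { getFee(null); } catch (IllegalArgumentException e) { threw = true; }
        check(threw);
        // Path 2: neither heavy nor international -> base fee
        check(getFee(new Pkg(false, false)) == 10);
        // Path 3: heavy only -> base fee plus surcharge
        check(getFee(new Pkg(true, false)) == 20);
        // Path 4: international only -> base fee doubled
        check(getFee(new Pkg(false, true)) == 20);
        // Path 5: heavy and international -> (10 + 10) * 2
        check(getFee(new Pkg(true, true)) == 40);
        System.out.println("All five paths exercised");
    }
}
```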
There are five paths through the code, one of which fails.
Yet, with branch coverage, we could declare victory after only three tests, leaving two paths unexplored.
This is where path coverage comes in. Path coverage is the most advanced form of coverage, counting the fraction of paths through the code that are executed.
Our three tests cover 60% of paths, i.e., 3 out of 5. We can reach 100% by adding tests for the two uncovered paths: a package that is heavy but not international, and one that is the other way around.

Path coverage sounds very nice in theory.
But in practice, it is often infeasible, as is obvious from the following example:

```java
while (true) {
    var input = getUserInput();
    if (input.length() <= 10) break;
    tellUser("No more than 10 chars");
}
```

The maximum path coverage obtainable for this code is _zero_.
That's because there is an infinite number of paths: the loop could execute once, or twice, or thrice, and so on.
Since one can only write a finite number of tests, path coverage is blocked at 0%.

Even without infinite loops, path coverage is hard to obtain in practice.
With just 5 independent `if` statements that do not return early or throw, one must write 2^5 = 32 tests.
If 1/10th of the lines of code are if statements, a 5-million-line program has more paths than there are atoms in the universe.
And 5 million lines is well below what some programs have in practice, such as browsers.

There is thus a tradeoff in coverage between feasibility and confidence.
Statement coverage is typically easy to obtain but does not give that much confidence, whereas path coverage can be impossible to obtain in practice but gives a lot of confidence.
Branch coverage is a middle ground.

It is important to note that coverage is not everything.
We could cover 100% of the paths in our "get fee" function above with 5 tests, but if those 5 tests do not actually check the value returned by the function, they are not useful.
Coverage is a metric that should help you decide whether additional tests would be useful, but it does not replace human review.

---
#### Exercise
Run your tests from the previous exercise with coverage.
You can do so either from the command line or from your favorite IDE, which should have a "run tests with coverage" command next to "run tests".
Note that using the command line will run the [JaCoCo](https://www.jacoco.org/jacoco/) tool, which is a common way to get code coverage in Java.
If you use an IDE, you may use the IDE's own code coverage tool, which could have minor differences in coverage compared to JaCoCo in some cases.


## When to test?

Up until now we have assumed tests are written after development, before the code is released.
This is convenient, since the code being tested already exists.
But it has the risk of duplicating any mistakes found in the code: if an engineer did not think of an edge case while writing the code,
they are unlikely to think about it while writing the tests immediately afterwards.
It's also too late to fix the design: if a test case reveals that the code does not work because its design needs fundamental alterations,
this will likely have to be done quickly under pressure due to a deadline, leading to a suboptimal design.

If we simplify a product lifecycle to its development and its release, there are three times at which we could test:
*(Figure: a timeline of the three moments at which to test: before development, between development and release, and after release.)*
The middle one is the one we have seen already. The two others may seem odd at first glance, but they have good reasons to exist.

Testing before development is commonly known as **test-driven development**, or _TDD_ for short, because the tests "drive" the development, specifically the design of the code.
In TDD, one first writes tests, then the code.
After writing the code, one can run the tests and fix any bugs.
This forces programmers to think before coding, instead of writing the first thing that comes to mind.
It provides instant feedback while writing the code, which can be very gratifying: write some of the code, run the tests, and some tests now pass!
This gives a kind of progress indication. It's also not too late to fix the design, since the design does not exist yet.

The main downside of TDD is that it requires a higher time investment, and may even lead to missed deadlines.
This is because the code under test must be written regardless of what tests are written.
If too much time is spent writing tests, there won't be enough time left to write the code.
When testing after development, this is not a problem because it's always possible to stop writing tests at any time, since the code already exists,
at the cost of fewer tests and thus less confidence in the code.
Another downside of TDD is that the design must be known upfront, which is fine when developing a module according to customer requirements but not when prototyping, for instance with research code.
There is no point in writing a comprehensive test suite for a program if that program's very purpose will change the next day after some thinking.

Let us now walk through a TDD example step by step.
You are a software engineer developing an application for a bank.
Your first task is to implement money withdrawal from an account.
The bank tells you that "users can withdraw money from their bank account".
This leaves you with a question, which you ask the bank: "can a bank account have a balance below zero?".
The bank answers "no", that is not possible.

You start by writing a test:

```java
@Test void canWithdrawNothing() {
    var account = new Account(100);
    assertThat(account.withdraw(0), is(0));
}
```

The `new Account` constructor and the `withdraw` method do not exist, so you create "skeleton" code that is only enough to make the tests _compile_, not pass yet:

```java
class Account {
    Account(int balance) { }
    int withdraw(int amount) { throw new UnsupportedOperationException("TODO"); }
}
```

You can now add another test for the "balance below zero" question you had:

```java
@Test void noInitWithBalanceBelow0() {
    assertThrows(IllegalArgumentException.class, () -> new Account(-1));
}
```

This test does not require more methods in `Account`, so you continue with another test:

```java
@Test void canWithdrawLessThanBalance() {
    var account = new Account(100);
    assertThat(account.withdraw(10), is(10));
    assertThat(account.balance(), is(90));
}
```

This time you need to add a `balance` method to `Account`, with the same body as `withdraw`. Again, the point is to make tests compile, not pass yet.

You then add one final test for partial withdrawals:

```java
@Test void partialWithdrawIfLowBalance() {
    var account = new Account(10);
    assertThat(account.withdraw(20), is(10));
    assertThat(account.balance(), is(0));
}
```

Now you can run the tests... and see them all fail! This is normal, since you did not actually implement anything.
You can now implement `Account` and run the tests every time you make a change until they all pass.

Finally, you go back to your customer, the bank, and ask what is next.
They give you another requirement they had forgotten about: the bank can block accounts, and withdrawing from a blocked account has no effect.
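As a sketch of where the code might end up, here is one possible `Account` implementation covering the four tests above plus the new blocked-account requirement. The `block()` method name is an assumption, and plain checks in `main` stand in for the JUnit tests:

```java
class Account {
    private int balance;
    private boolean blocked = false;

    Account(int balance) {
        if (balance < 0) throw new IllegalArgumentException("balance must be >= 0");
        this.balance = balance;
    }

    int balance() { return balance; }

    // Hypothetical API for the blocked-account requirement.
    void block() { blocked = true; }

    // Withdraws up to `amount`, returning how much was actually withdrawn.
    int withdraw(int amount) {
        if (blocked) return 0; // withdrawing from a blocked account has no effect
        int withdrawn = Math.min(amount, balance);
        balance -= withdrawn;
        return withdrawn;
    }

    public static void main(String[] args) {
        // The four tests from the walkthrough, as plain checks:
        var a = new Account(100);
        if (a.withdraw(0) != 0) throw new AssertionError();
        if (a.withdraw(10) != 10 || a.balance() != 90) throw new AssertionError();
        var b = new Account(10);
        if (b.withdraw(20) != 10 || b.balance() != 0) throw new AssertionError();
        boolean threw = false;
        try { new Account(-1); } catch (IllegalArgumentException e) { threw = true; }
        if (!threw) throw new AssertionError();
        // The new blocked-account requirement:
        var c = new Account(50);
        c.block();
        if (c.withdraw(20) != 0 || c.balance() != 50) throw new AssertionError();
        System.out.println("All checks pass");
    }
}
```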
+You can now translate this requirement into tests, adding code as needed to make the tests compile, then implement the code. +Once you finish, you will go back to asking for requirements, and so on until your application meets all the requirements. + +---- +#### Exercise +It's your turn now! In [the in-lecture exercise project](exercises/lecture) you will find `PeopleCounter.java`, which is documented but not implemented. +Write tests first then implement the code and fix your code if it doesn't pass the tests, in a TDD fashion. +First, think of what tests to write, then write them, then implement the code. + +
**Example tests:**
You could have five tests: the counter initializes to zero, the "increment" method increments the counter,
the "reset" method sets the counter to zero, the "increment" method does not increment beyond the maximum,
and the maximum cannot be below zero.

We provide [sample tests](exercises/solutions/lecture/PeopleCounterTests.java) and [a reference implementation](exercises/solutions/lecture/PeopleCounter.java).

+
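As one possible shape for the exercise, here is a self-contained sketch: the constructor and method names are assumptions based on the five tests described above, and plain checks stand in for JUnit assertions:

```java
class PeopleCounter {
    private final int max;
    private int count = 0;

    PeopleCounter(int max) {
        if (max < 0) throw new IllegalArgumentException("max must be >= 0");
        this.max = max;
    }

    int count() { return count; }

    void increment() { if (count < max) count++; } // saturates at the maximum

    void reset() { count = 0; }

    public static void main(String[] args) {
        var counter = new PeopleCounter(2);
        if (counter.count() != 0) throw new AssertionError(); // initializes to zero
        counter.increment();
        if (counter.count() != 1) throw new AssertionError(); // increments
        counter.increment();
        counter.increment();
        if (counter.count() != 2) throw new AssertionError(); // capped at the maximum
        counter.reset();
        if (counter.count() != 0) throw new AssertionError(); // reset goes back to zero
        boolean threw = false;
        try { new PeopleCounter(-1); } catch (IllegalArgumentException e) { threw = true; }
        if (!threw) throw new AssertionError(); // maximum cannot be below zero
        System.out.println("ok");
    }
}
```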
----

Testing after deployment is commonly known as **regression testing**. The goal is to ensure old bugs do not come back.

When confronted with a bug, the idea is to first write a failing test that reproduces the bug, then fix the bug, then run the test again to show that the bug is fixed.
It is crucial to run the test before fixing the bug to ensure it actually fails.
Otherwise, the test might not actually reproduce the bug, and will "pass" after the bug fix only because it was already passing before, providing no useful information.

Recall the SQLite example: all of those 90 million lines of test code show that a very long list of possible bugs will not appear again in any future release.
This does not mean there are no bugs left, but that many if not all common bugs have been removed, and that the rest are most likely unusual edge cases that nobody has encountered yet.


## How can one test entire modules?

Up until now we have seen tests for pure functions, which have no dependencies on other code.
Testing them is useful to gain confidence in their correctness, but not all code is structured as pure functions.

Consider the following function:

```java
/** Downloads the book with the given ID
  * and prints it to the console. */
void printBook(String bookId);
```

How can we test this? First off, the function returns `void`, i.e., nothing, so what can we even test?
The documentation also mentions downloading data, but from where does this function do that?

We could test this function by passing a book ID we know to be valid and checking the output.
However, that book could one day be removed, or have its contents updated, invalidating our test.

Furthermore, tests that depend on the environment, such as the book repository this function uses, cannot easily test edge cases.
How should we test what happens if the book content is malformed?
Or if the Internet connection drops after downloading the table of contents but before downloading the first chapter?

One could design _end-to-end tests_ for this function: run the function in a custom environment, such as a virtual machine whose network requests are intercepted,
and parse its output from the console, or perhaps redirect it to a file.
While end-to-end testing is useful, it requires considerable time and effort, and is infrastructure that must be maintained.

Instead, let's address the root cause of the problem: the input and output to `printBook` are _implicit_, when they should be _explicit_.

Let's make the input explicit first, by designing an interface for HTTP requests:

```java
interface HttpClient {
    String get(String url);
}
```

We can then give an `HttpClient` as a parameter to `printBook`, which will use it instead of doing HTTP requests itself.
This makes the input explicit, and also makes the `printBook` code more focused on the task it's supposed to do rather than on the details of HTTP requests.

Our `printBook` function with an explicit input thus looks like this:

```java
void printBook(
    String bookId,
    HttpClient client
);
```

This process of making dependencies explicit and passing them as inputs is called **dependency injection**.

We can then test it with whatever HTTP responses we want, including exceptions, by creating a fake HTTP client for tests:

```java
var fakeClient = new HttpClient() {
    @Override
    public String get(String url) { ... }
};
```

Meanwhile, in production code we will implement HTTP requests in a `RealHttpClient` class so that we can call `printBook(id, new RealHttpClient(...))`.

We could make the output explicit in the same way, by creating a `ConsolePrinter` interface that we pass as an argument to `printBook`.
However, we can change the method to return the text instead, which is often simpler:

```java
String getBook(
    String bookId,
    HttpClient client
);
```

We can now test the result of `getBook`, and in production code feed it to `System.out.println`.

Adapting code by injecting dependencies and making outputs explicit enables us to test more code with "simple" tests rather than complex end-to-end tests.
While end-to-end tests would still be useful to ensure we pass the right dependencies and use the outputs in the right way, manual testing for end-to-end scenarios already provides
a reasonable amount of confidence. For instance, if the code is not printing to the console at all, a human will definitely notice it.

This kind of code change can be done recursively until only "glue code" between modules and low-level primitives remains untestable.
For instance, a "UDP client" class can take an "IP client" interface as a parameter, so that the UDP functionality is testable.
The implementation of the "IP client" interface can itself take a "Data client" interface as a parameter, so that the IP functionality is testable.
The implementations of the "Data client" interface, such as Ethernet or Wi-Fi, will likely need end-to-end testing since they do not themselves rely on other local software.

---
#### Exercise
It's your turn now! In [the in-lecture exercise project](exercises/lecture) you will find `JokeFetcher.java`, which is not easy to test in its current state.
Change it to make it testable, write tests for it, and change `App.java` to match the `JokeFetcher` changes and preserve the original program's functionality.
Start by writing an interface for an HTTP client, implement it by moving existing code around, and use it in `JokeFetcher`. Then add tests.
**Suggestions:**
+ +The changes necessary are similar to those we discussed above, including injecting an `HttpClient` dependency and making the function return a `String`. +We provide [an example `JokeFetcher`](exercises/solutions/lecture/JokeFetcher.java), [an example `App`](exercises/solutions/lecture/App.java), +and [tests](exercises/solutions/lecture/JokeFetcherTests.java). + +

+
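Before reaching for any framework, a hand-written fake is often enough. Here is a self-contained sketch following the `HttpClient`/`getBook` shapes discussed above; the `getBook` body and URL scheme are assumptions for illustration:

```java
class FakeClientDemo {
    interface HttpClient {
        String get(String url);
    }

    // Hypothetical getBook: fetches the book's text from an assumed URL scheme.
    static String getBook(String bookId, HttpClient client) {
        return client.get("https://books.example.org/" + bookId);
    }

    public static void main(String[] args) {
        // A hand-written fake: no network, fully deterministic.
        HttpClient fake = url -> "Once upon a time...";
        if (!getBook("42", fake).startsWith("Once upon")) throw new AssertionError();

        // A fake that simulates a dropped connection, to test the edge case.
        HttpClient failing = url -> { throw new RuntimeException("connection dropped"); };
        boolean threw = false;
        try { getBook("42", failing); } catch (RuntimeException e) { threw = true; }
        if (!threw) throw new AssertionError();
        System.out.println("fake-based tests pass");
    }
}
```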
----

If you need to write lots of different fake dependencies, you may find _mocking_ frameworks such as [Mockito](https://site.mockito.org/) for Java useful.
These frameworks enable you to write a fake `HttpClient`, for instance, like this:

```java
var client = mock(HttpClient.class);
when(client.get(anyString())).thenReturn("Hello");
// there are also methods to throw an exception, check that specific calls were made, etc.
```

There are other kinds of tests we have not talked about in this lecture, such as performance testing, accessibility testing, usability testing, and so on.
We will see some of them in future lectures.


## Summary

In this lecture, you learned:
- Automated testing, its basics, some good practices, and how to adapt code to make it testable
- Code coverage as a way to evaluate tests, including statement coverage, branch coverage, and path coverage
- When tests are useful, including testing after development, TDD, and regression tests

You can now check out the [exercises](exercises/)!

*Source: CS-305: Software engineering*

# Mobile Platforms

This lecture's purpose is to give you a high-level picture of what the universe of mobile applications and devices is like. You will read about:

* Differences between desktops and mobile devices w.r.t. applications, security, energy, and other related aspects
* Challenges and opportunities created by mobile platforms
* Brief specifics of the Android stack and how applications are structured
* A few ideas for offering users a good experience on their mobile
* The ecosystem that mobile apps plug into

## From desktops to mobiles

Roughly every decade, a new, lower-priced computer class forms, based on a new programming platform, network, and interface. This results in new types of usage, and often the establishment of a new industry. This is known as [Bell's Law](https://en.wikipedia.org/wiki/Bell%27s_law_of_computer_classes).
With every new computer class, the number of computers per person increases drastically. Today we have clouds of vast data centers, and perhaps an individual computer, like our laptop, that we use to be productive. On top of that come several computer devices per individual, like phones, wearables, and smart home items, which we use for entertainment, communication, quality of life, and so on.

It is in this context that mobile software development becomes super-important.

We said earlier that, no matter what job you will have, you will write code. We can add to that: you will likely write code for mobile devices. There are more than 15 billion mobile devices operating worldwide, and that number is only going up. As Gordon Bell said, this leads to new usage patterns.

We access the Internet more often from our mobile than our desktop or laptop. Most of the digital content we consume, we consume on mobiles. We spend hours a day on our mobile, and the vast majority of that time we spend in apps, not on websites.

A simple example of a major change in how we use computing and communication is social media. Most of the world's population uses it. It changes how we work. Even the professional workforce is increasingly dependent on mobiles, for this reason.

### Mobile vs. desktop: Applications

There are many differences between how we write applications for a mobile device vs. a desktop computer. On a desktop, applications can do pretty much whatever they want, whereas, on a mobile, each app is super-specialized. On the desktop, users explicitly start applications; on mobile, the difference between running or not is fluid: apps can be killed anytime, and they need to be ready to restart.

On a desktop, you typically have multiple applications active in the foreground, with multiple windows on-screen.
The mobile experience is different: a user's interaction with an app doesn't always begin in the same place; rather, the user's journey often begins non-deterministically. As a result, a mobile app has a more complex structure than a traditional desktop application, with multiple types of components and entry points.

The execution model on mobiles is more cooperative than on a desktop. For example, a social media app allows you to compose an email, and does so by reusing the email app. Another example is something like WhatsApp, which allows you to take pictures (and does so by asking the Photo app to do it). In essence, apps request services from other apps, and they build upon the functionality of others, which is fundamentally different from the desktop application paradigm.

### Mobile vs. desktop: Operating environment

One of the biggest differences is in the security model. Think of your parents' PC at home or in an Internet café: there are potentially multiple users that don't trust each other, each with specific file permissions; every application by default inherits all of a user's permissions, and all applications are trusted to run with the user's privileges alongside each other. The operating system prevents one application from overwriting others, but does not protect the I/O resources (e.g., files). One could fairly say that security is somewhat of an afterthought on the desktop.

A mobile OS has considerably stronger isolation. The assumption here is that users might naively install malicious apps, and the goal is to protect users' data (and privacy) even when they do stupid things. So, each mobile app is sandboxed separately.

When you install a mobile app, it will ask the device for necessary permissions, like access to contacts, camera, microphone, location information, SMS, WiFi, user accounts, body sensors, etc.

A mobile device is more constrained, e.g., you don't really get "root" access.
It provides strong hardware-based isolation and powerful authentication features (like face recognition). E.g., for the iPhone's Face ID, the face scan pattern is encrypted and sent to a secure hardware "enclave" in the CPU, to make sure that stored facial data is inaccessible to any party, including Apple.

Power management is a first-class citizen in a mobile OS. While PCs use high-power CPUs and GPUs with heatsinks, smartphones and tablets are typically not plugged in, so they cannot provide sustained high power. On a mobile, the biggest energy hog is typically the screen, whereas in a desktop it is the CPU and GPU.

Desktop OSes tend to be generic, whereas mobile OSes specialize for a given set of mobile devices, so they can have strict hardware requirements. This leads to less backward compatibility, and so the latest apps will not run on older versions of the OS, and new versions of a mobile OS won't run on older devices.

There is also orders-of-magnitude less storage on mobiles, and orders-of-magnitude less memory. Plus, you cannot easily expand or modify storage and memory.

Mobile networking tends to be more intermittent than in-wall connectivity. An Ethernet connection gives you Gbps, while a WiFi connection gives you Mbps.

On a desktop, the input typically comes from a keyboard and a mouse, whereas mobiles have small touch keyboards, voice commands, complex gestures, etc. Output is also limited on mobiles, so often they communicate with other devices to achieve their output.

We are witnessing today a convergence between desktop and mobile OSes, which will gradually make desktops disappear, and computing will become increasingly more embedded.

### Challenges and opportunities

This new world of mobile presents both opportunities and challenges.

Users are far more numerous, and more diverse. They have widely differing computer skills. Developers need to focus on ease of use, internationalization, and accessibility.
Platforms are more diverse (think phones, wearables, TVs, cars, e-book readers). Different vendors take different approaches to dealing with this diversity: Apple has the "walled garden" approach, while Android is more like the wild west: Android is open-source, so OEMs (original equipment manufacturers) can customize Android to their devices, and so naturally Android runs on tens of thousands of models of devices today.

Mobiles get interesting new kinds of inputs, from multi-touch screens, accelerometers, and GPS. But they also face serious limitations in terms of screen size, processor power, and battery capacity.

Mobile devices need to be (and can be) managed more tightly. Today you can remote-lock a device, you can locate the device, and with software like MDM (mobile device management) you can do everything on the device remotely. Some mobile carriers even block users from installing certain apps on their devices.

App stores or Play stores are digital storefronts for content consumed on the device. Today, the success of a mobile platform depends heavily on its app store, which becomes a central hub for content to be consumed on that particular mobile platform. The operator of the store controls the apps published in the store (e.g., disallowing sexually explicit content, violence, hate speech, or anything illegal), which means that the developer needs to be aware of the rules and laws in the places where the app will be used. Operators typically use automated tools and human reviewers to check apps for malware and terms-of-service violations.

## How does a mobile operating environment work?

A mobile device aims to achieve seemingly contradictory goals: On the one hand, it aims for greater integration than a desktop, to create a more cooperative ecosystem in which to enable apps to provide services to each other.
On the other hand, it aims for greater isolation: it wants to offer a stronger sandbox, to protect user data, and to restrict apps from interacting in overly complex ways.

Each mobile OS makes its own choices for how to do this. In this lecture, we chose to focus on Android, for three reasons:

* it is open-source, so it's easier to understand what it really does
* over 70% of mobiles use Android, there are more than 2.5 billion Android users and more than 3 billion active Android devices, so this is very real
* we will use it in the follow-on course [Software Development Project](https://dslab.epfl.ch/teaching/sweng/proj)

Android is based on Linux, and things like multi-threading and low-level memory management are all provided by the Linux kernel. There is a layer, called the hardware abstraction layer (HAL), that provides standard interfaces to specific hardware capabilities, such as the camera or the Bluetooth module. Whenever an application makes a call to access device hardware, Android loads a corresponding library module from the HAL for that hardware component.

Above the HAL, there are the Android runtime (ART) and the native C/C++ libraries.

The ART provides virtual machines for executing DEX (Dalvik executable) bytecode, which is specially designed to have a minimal memory footprint. Java code gets turned into DEX bytecode, which can run on the ART.

Android apps can be written in Kotlin, Java, or even C++. The Android SDK (software development kit) tools compile code + resource files into a so-called Android application package (APK), which is an archive used to install the app. When publishing to the Google Play app store, one generates instead an Android App Bundle (AAB), and then Google Play itself generates optimized APKs for the target device that is requesting installation of the app. This is a more practical approach than having the developer produce individual APKs for each device.
+ +Each Android app lives in its own security sandbox. In Linux terms, each app gets its own user ID. Permissions for all the files accessed by the app are set so that only the assigned user ID can access them. It is possible for apps to share data and access each other's files, in which case they get the same Linux user ID; the apps however must be from the same developer (i.e., signed with the same developer certificate). + +Each app runs in its own Linux process. By default, an app can access only the components that it needs to do its work and no more (this follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)). An app can access the device's location, camera, Bluetooth connection, etc. as long as the phone user has explicitly granted it these permissions. Each app has its own instance of the ART, with its own VM to execute DEX bytecode. In other words, apps don't run inside a common VM. + +In Android, the UI is privileged. Foreground and UI threads get higher priority in CPU scheduling than the other threads. Android uses process containers (a.k.a., [Linux cgroups](https://en.wikipedia.org/wiki/Cgroups)) to allocate a higher percentage of the CPU to them. When users switch between apps, Android keeps non-foreground apps (e.g., not visible to the user) in a cache (in Linux terms, the corresponding processes do not terminate) and, if the user returns to the app, the process is reused, which makes app switching faster. + +Many core system components (like the ART and HAL) require native libraries written in C/C++. Android provides Java APIs to expose the functionality of these native libraries to apps. This Java APIs framework is layered on top of the ART and native C/C++ libraries. Apps written in C/C++ can use the native libraries directly (this requires the Android native development kit, or NDK). 
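The permissions mentioned above are requested by the app in its manifest. As a minimal sketch (the package name is hypothetical; the permission names are standard Android ones, and sensitive permissions like these must additionally be granted by the user at runtime on modern Android versions):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Declares that the app wants camera and precise-location access. -->
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```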
Within the Java API framework, a number of Android features are provided through APIs written in Java that provide key building blocks for apps. For example, the View System is used for building the UI; the Resource Manager is used by apps for accessing localized strings or graphics; the Activity Manager handles the lifecycle of apps and provides a common navigation back stack (more on this below); and, as a final example, Content Providers enable apps to access data from other apps (e.g., a social media app accessing the Contacts app) or to share their own data.

On top of this entire stack are the apps. Android comes with core apps for email, SMS, calendaring, web browsing, contacts, etc., but these apps have no special status: they can be replaced with third-party apps. The point of shipping them with Android is to provide key capabilities to other apps out of the box (e.g., allow multiple apps to have the functionality of sending an SMS without having to write the code for it).

An Android app is a collection of components, with each component providing an entry point to the app. There are 4 main components:

* An _Activity_ provides an entry point with a UI, for interacting with the user. An activity represents a single screen with a user interface. For example, an email app has an activity to show a list of new emails, another activity to compose an email, another activity to read emails, and so on -- activities work together to form the email app, but each one is independent of the others. Unlike in desktop software, other apps can start any one of these activities if the email app allows it, e.g., the Camera app may start the compose-email activity in order to send a picture by email.
* A _Service_ is a general-purpose entry point without a UI.
This essentially keeps an app running in the background (e.g., play music in the background while the user is in a different app, or fetch data over the network without blocking user interaction with an activity). A service can also be started by another component.
* A _Broadcast Receiver_ enables Android to deliver events to apps out-of-band, e.g., an app may set a timer and ask Android to wake it up at some point in the future; this way, the app need not run non-stop until then. So broadcasts can be delivered even to apps that aren't currently running. Broadcast events can be initiated by the OS (e.g., the screen turned off, the battery is low, a picture was taken) or can be initiated by apps (e.g., some data has been downloaded and is available for other apps to use). Broadcast receivers have no UI, but they can create a status bar notification to alert the user.
* A _Content Provider_ manages data that is shared among apps. Such data can be local (e.g., file system, SQLite DB) or remote on some server. Through a content provider, apps can query / modify the data (e.g., the Contacts data is a content provider: any app with the proper permissions can get contact info and can write contact info).

Remember that any app can cause another app's component to start. For instance, if WhatsApp wants to take a photo with the camera, it can ask that an activity be started in the Camera app, without the WhatsApp developer having to write the code to take photos--when done, the photo is returned to WhatsApp, and to the user it appears as if the camera is actually a part of WhatsApp.

To start a component, Android starts the process for the target app (if not already running) and instantiates the relevant classes (this is because, e.g., the photo-taking activity runs in the Camera process, not WhatsApp's process). You see here an example of multiple entry points: there is no `main()` like in a desktop app.
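Instead of a `main()`, components are declared as entry points in the app's manifest. For instance, this sketch (the activity class name is hypothetical; `MAIN`/`LAUNCHER` are the standard markers) declares which activity the home-screen launcher should start:

```xml
<activity android:name=".InboxActivity">
    <intent-filter>
        <!-- Marks this activity as the app's entry point in the launcher. -->
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
```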
+ +Then, to activate a component in another app, you must create an _Intent_ object, which is essentially a message that activates either a specific component (explicit intent) or a type of component (implicit intent). For activities and services, the intent defines the action to perform (e.g., view or send something) and specifies the URI of the data to act on (e.g., an intent to dial a certain phone number). If the activity returns a result, it is returned in an Intent (e.g., let the user pick a contact and return a URI pointing to it). For broadcast receivers, the intent defines the announcement being broadcast (e.g., if the battery is low, it includes the BATTERY_LOW string), and a receiver can register for it by filtering on the string. Content providers are not activated by intents but rather by requests from what's called a Content Resolver, which handles all interaction with the content provider. + +## A mobile app: Structure and lifecycle + +The centerpiece of an Android app is the _activity_. Unlike the kinds of programs you've been writing so far, there is no `main()`; rather, the underlying OS initiates code in an _Activity_ by invoking specific callback methods. + +An activity provides the window in which the app draws its UI. One activity essentially implements one screen in an app, e.g., one activity for a Preferences screen, one for a Select Photo screen, etc. + +Apps contain multiple screens, each of which corresponds to an activity. One activity is specified as the main activity, i.e., the first screen to appear when you launch the app. Each activity can then start another activity, e.g., the main activity in an email app may provide the screen that shows your inbox, and then you have screens for opening and reading a message, one for writing, etc. + +As a user navigates through, out of, and back into an app, the activities in the app transition through different states. 
Transitioning from one state to another is handled by specific callbacks that must be implemented in the activity. Through these callbacks, the activity learns about changes and reacts to the user leaving and re-entering it: for example, a streaming video player will likely pause the video and terminate the network connection when the user switches to another app; when the user returns to the app, it will reconnect to the network and allow the user to resume the video from the same spot. + +To understand the lifecycle of an activity, please see [the Android documentation](https://developer.android.com/guide/components/activities/activity-lifecycle). + +Another important element is the _fragment_. Fragments provide a way to modularize an activity's UI and reuse modules across different activities. This is a productive and efficient way to respond to various screen sizes, or to whether your phone is in portrait or landscape mode, where the UI is composed differently but from the same modules. + +A fragment may show the list of emails, and another fragment may display an email thread. On a phone, you would display only one at a time, and switch to the next screen upon a tap. On a tablet, with the greater screen real estate, you have room for both. By modularizing the UI into fragments, it's easy to adapt the app to different layouts by rearranging fragments within the same view when the phone switches from portrait to landscape mode. + +To understand fragments, please see [the Android documentation](https://developer.android.com/guide/fragments). + +You don't have to use fragments within activities, but such modularization makes the app more flexible and also makes it easier to maintain over time. A fragment defines and manages its own layout, has its own lifecycle, and can handle its own input events. But it cannot live on its own: it needs to be hosted by an activity or another fragment. 
+ +Android beginners tend to put all their logic inside Activities and Fragments, ending up with ""views"" that do a lot more than just render the UI. Remember what we said about MVVM and how it maps to the way you build an Android app. In the exercise set, we will ask you to go through an MVVM exercise. Android provides a convenient ViewModel class to store and manage UI-related data. + +## User experience (UX) + +User experience design (or ""UX design"") is about putting together an intuitive, responsive, navigable, usable interface to the app. You want to look at the app from the users' perspective to derive what can give them an easy, logical, and positive experience. + +So you must ask who your audience is: old or young people? Intellectuals or blue-collar workers? Think back to the lecture on personas and user stories. + +The UI should enable the user to get around the app easily, and to quickly find what they're looking for. To achieve this, several elements and widgets have emerged from years of experience building mobile apps. The [hamburger icon](https://en.wikipedia.org/wiki/Hamburger_button) is used for drop-down menus with further details--it avoids clutter. Home buttons give users a shortcut to home base. Chat bubbles offer quick help through context-sensitive messages. + +Personalization of the UI allows adapting to what the user is interested in, keeping unrelated content away. For example, in the [EPFL Campus app](https://pocketcampus.org/epfl-en) you can select which features you see on the home screen, and so the home screens of two different users are likely to look different. + +To maximize readability, emphasize simplicity and clarity, and choose adequate font and image sizes. Avoid having too many elements on the screen, because that leads to confusion. Offer one necessary action per screen. 
Use [micro UX animations](https://uxplanet.org/ui-design-animations-microinteractions-50753e6f605c), which are little animations that appear when a specific action is performed or a particular item is hovered or touched; this can increase engagement and interactivity. For example, hovering over a thumbnail can show details about it, and hover zoom can magnify a specific part of an image. However, keep the animations minimal: avoid flashiness, and make them subtle and clear as to their intent. + +Keep in mind _thumb zones_. People use their mobile when they’re standing, walking, or riding a bus, so the app should allow them to hold the device, view its screen, and provide input in all these situations. So make it easy for the user to reach with their thumb the things they do most often. Leverage gesture controls to make it easy for a user to interact with apps--e.g., “holding” an item, “dragging” it to a container, and then “releasing”--but stick to gestures users of that device are used to, not exotic ones. + +Think of augmented reality and voice interaction; depending on the app, they can be excellent ways to interact with the device. + +## The mobile ecosystem + +A modern mobile app, no matter how good it is, can hardly survive without plugging into its ecosystem. + +Most mobile apps consist of a phone side and a cloud side. Apps interact with cloud services typically over REST APIs; REST is an architectural style that builds upon HTTP, typically with JSON payloads, to offer CRUD (create-read-update-delete) operations on objects addressed by URIs. You use HTTP methods to access these resources via URL-encoded parameters. + +Aside from the split architecture, apps also interact with the ecosystem through push notifications. These are automated messages sent to the user by the app's server while the app is in the background (i.e., not currently open). 
Each OS (e.g., Android, iOS) has its own push notification service with a specific API that apps can use, and each operates a push engine in the cloud. + +The app developer enables push notifications for their app, at which point the app can start receiving incoming notifications. When you open the app, unique IDs for both the app and the device are created by the OS and registered with the OS push notification service. The IDs are passed back to the app and also sent to the app publisher. The in-cloud service generates the message (either when a publisher produces it, or in response to some event, etc.), which can be targeted at a user or a group of users, and sends it to the corresponding mobile devices. On Android, an incoming notification can start an Activity. + +A controversial aspect of the ecosystem is the extent to which it tracks the device and the user of the device. We do not discuss this aspect in detail here; it is just something to be aware of. + +A key ingredient of ""plugging into"" the ecosystem is the network. Be aware that your users may experience different bandwidth limitations (e.g., 3G vs. 5G), and the Internet doesn't work as smoothly everywhere in the world as we're used to. In addition to bandwidth issues, there is also the risk of experiencing a noticeable disconnection from the Internet. For example, a highly interactive, graphics-heavy design will not be appropriate for apps that target Latin America or Africa, or users in rural areas, because today there are still many areas with spotty connectivity. + +Perhaps a solution can be offered by the newer concept of fog computing (as opposed to cloud computing), also called edge computing. It is about the many ""peripheral"" devices that connect to the periphery of a cloud, and many of these devices will generate lots of raw data (e.g., from sensors). 
Rather than forwarding this data to cloud-based servers, one could do as much processing as possible using computing units nearby, so that processed rather than raw data is forwarded to the cloud. As a result, bandwidth requirements are reduced. ""The Fog"" can support the Internet of Things (IoT), including phones, wearable health monitoring devices, connected vehicles, and augmented reality devices. IoT devices are often resource-constrained and have limited computational ability to perform, e.g., cryptographic computations, so a nearby fog node can provide security for IoT devices by performing these computations instead. + +## Summary + +In this lecture, you learned: + +* Several ways in which a desktop app differs from a mobile app +* How security, energy, and other requirements change when moving from a desktop to a mobile device +* How activities and fragments work in Android apps +* The basics of good UX +* How mobile apps can leverage their ecosystem + +You can now check out the [exercises](exercises/). +",CS-305: Software engineering +"# Evolution + +Imagine you are an architect asked to add a tower to a castle. +You know what towers look like, you know what castles look like, and you know how to build a tower in a castle. This will be easy! + +But then you get to the castle, and it looks like this: + +

+ +Sure, it's a castle, and it fulfills the same purpose most castles do, but... it's not exactly what you had in mind. +It's not built like a standard castle. +There's no obvious place to add a tower, and what kind of tower will this castle need anyway? +Where do you make space for a tower? How do you make sure you don't break half the castle while adding it? + +This lecture is all about evolving an existing codebase. + + +## Objectives + +After this lecture, you should be able to: +- Find your way in a _legacy codebase_ +- Apply common _refactorings_ to improve code +- Document and quantify _changes_ +- Establish solid foundations with _versioning_ + + +## What is legacy code, and why should we care? + +""Legacy code"" really means ""old code that we don't like"". +Legacy code may or may not have documentation, tests, and bugs. +If it has documentation and tests, they may or may not be complete enough; tests may even already be failing. + +One common reaction to legacy code is disgust: it's ugly, it's buggy, why should we even keep it? +If we rewrote the code from scratch, we wouldn't have to ask all of these questions about evolution! It certainly sounds enticing. + +In the short term, a rewrite feels good. There's no need to learn about old code; instead, you can use the latest technologies and write the entire application from scratch. + +However, legacy code works. It has many features, has been debugged and patched many times, and users rely on the way it works. +If you accidentally break something, or if you decide that some ""obscure"" feature is not necessary, you will anger a lot of your users, who may decide to jump ship to a competitor. + +One infamous rewrite story is [that of Netscape 5](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/). +In the '90s, Netscape Navigator was in tough competition with Microsoft's Internet Explorer. 
+While IE became the butt of jokes later, at the time Microsoft was heavily invested in the browser wars. +The Netscape developers decided that their existing Netscape 4 codebase was too old, too buggy, and too hard to evolve. +They decided to write Netscape 5 from scratch. +The result is that it took them three years to ship the next version of Netscape; in that time, Microsoft had evolved their existing IE codebase, far outpacing Netscape 4, and Netscape went bankrupt. + +Most rewrites, like Netscape's, fail. +A rewrite means losing experience and repeating many previous mistakes, and that is just to get back to the same point as the previous codebase. +There then needs to be time to add features that justify, to users, the cost of upgrading. Most rewrites run out of time or money and fail. + +It's not even clear what ""a bug"" is in legacy code, which is one reason rewrites are dangerous: some users depend on things that might be considered ""bugs"". +For instance, Microsoft Excel [treats 1900 as a leap year](https://learn.microsoft.com/en-us/office/troubleshoot/excel/wrongly-assumes-1900-is-leap-year) +even though it is not, because back when it was released it had to compete with a product named Lotus 1-2-3 that did have this bug. +Fixing the bug means many spreadsheets would stop working correctly, as dates would become off by one. Thus, even nowadays, Microsoft Excel still contains a decades-old ""bug"" in the name of compatibility. + +A better reaction to legacy code is to _take ownership of it_: if you are assigned to a legacy codebase, it is now your code, and you should treat it just like any other code you are responsible for. +If the code is ugly, it is your responsibility to fix it. + + +## How can we improve legacy code? + +External improvements to an existing codebase, such as adding new features, fixing bugs, or improving performance, frequently require internal improvements to the code first. 
+Some features may be difficult to implement in the code as it stands, but could be much easier if the code were improved first. +This may require changing design _tradeoffs_, addressing _technical debt_, and _refactoring_ the codebase. Let's see each of these in detail. + +### Tradeoffs + +Software engineers make tradeoffs all the time when writing software, such as choosing an implementation that is faster at the cost of using more memory, +or simpler to implement at the cost of being slower, or more reliable at the cost of more disk space. +As code ages, its context changes, and old tradeoffs may no longer make sense. + +For instance, Windows XP, released in 2001, groups background services into a small handful of processes. +If any background service crashes, it will cause all of the services in the same process to also crash. +However, because there are few processes, this minimizes resource use. +It would have been too much in 2001, on computers with as little as 64 MB of RAM, to dedicate one process, with its associated overhead, to each background service. + +But in 2015, when Windows 10 was released, computers typically had well over 2 GB of RAM. +Trading reliability for low resource use no longer made sense, so Windows 10 instead runs each background service in its own process. +The cost of memory is tiny on the computers Windows 10 is made for, and the benefits from not crashing entire groups of services at a time are well worth it. +The same tradeoff, revisited 15 years apart, yielded a different decision. + +### Technical debt + +The accumulated cost of all the ""cheap"" and ""quick"" fixes that progressively make a codebase worse is called _technical debt_. +Adding one piece of code that breaks modularity and hacks around the code's internals may be fine to meet a deadline, but after a few dozen such hacks, the code becomes hard to maintain. 
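As a tiny, hypothetical illustration of such a hack, imagine a discount rule copy-pasted under deadline pressure, then later extracted into one place; all names in this sketch are made up:

```java
class PriceCalculator {
    // Debt: the 10% member discount was copy-pasted here to meet a deadline...
    static int cartTotal(int[] prices, boolean isMember) {
        int total = 0;
        for (int p : prices) total += isMember ? p - p / 10 : p;
        return total;
    }

    // ...and here. The two copies can now silently diverge whenever the rule changes.
    static int labelPrice(int price, boolean isMember) {
        return isMember ? price - price / 10 : price;
    }

    // Paying the debt back: one shared rule that every caller reuses.
    static int discounted(int price, boolean isMember) {
        return isMember ? price - price / 10 : price;
    }

    static int cartTotalRefactored(int[] prices, boolean isMember) {
        int total = 0;
        for (int p : prices) total += discounted(p, isMember);
        return total;
    }
}
```

Each copy looks harmless on its own, but every future change to the discount rule now costs one edit per copy, plus the risk of missing one: that recurring cost is the interest on the debt.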
+ +The concept is similar to monetary debt: it can make sense to invest more money than you have, so you borrow money and slowly pay it back. +But if you don't regularly pay back at least the interest, your debt grows and grows, and so does the share of your budget you must spend on repaying your debt. +You eventually go bankrupt from the debt payments taking up your entire budget. + +With technical debt, a task that should take hours can take a week instead, because one now needs to update the code in the multiple places where it has been copy-pasted for ""hacks"", +fix seemingly unrelated code that in practice depends on the specific internals of the code being changed, write complex tests that must set up way more than they should need, and so on. +You may no longer be able to use the latest library that would solve your problem in an hour, because your codebase is too old and depends on old technology the library is not compatible with, +so instead you need weeks to reimplement that functionality yourself. +You may regularly need to manually reimplement security patches done to the platform you use because you use an old version that is no longer maintained. + +This is one reason why standards are useful: using a standard version of a component means you can easily service it. +If you instead have a custom component, maintenance becomes much more difficult, even if the component is nicer to work with at the beginning. + +### Refactoring + +Refactoring is the process of making _incremental_ and _internal_ improvements. +These are improvements designed to make code easier to maintain, but that do not directly affect end users. + +Refactoring is about starting from a well-known code problem, applying a well-known solution, and ending up with better code. +The well-known problems are sometimes called ""code smells"", because they're like a strange smell: not harmful on its own, but worrying, as it could lead to problems if left unchecked. 
+ +For instance, consider the following code: +```java +class Player { + int hitPoints; + String weaponName; + int weaponDamage; + boolean isWeaponRanged; +} +``` +Something doesn't smell right. Why does a `Player` have all of the attributes of its weapon? +As it is, the code works, but what if you need to handle weapons independently of players, say, to buy and sell weapons at shops? +Now you will need fake ""players"" that exist just for their weapons. You'll need to write code that hides these players from view. +Future code that deals with players may not handle those ""fake players"" correctly, introducing bugs. +Pay some of your technical debt and fix this with a refactoring: extract a class. +```java +class Player { + int hitPoints; + Weapon weapon; +} +class Weapon { ... } +``` +Much better. Now you can deal with weapons independently of players. + +Your `Weapon` class now looks something like this: +```java +class Weapon { + boolean isRanged; + void attack() { + if (isRanged) { ... } else { ... } + } +} +``` +This doesn't smell right either. Every method on `Weapon` will have to first check if it is ranged or not, and some developers might forget to do that. +Refactor it: use polymorphism. +```java +abstract class Weapon { + abstract void attack(); +} +class MeleeWeapon extends Weapon { ... } +class RangedWeapon extends Weapon { ... } +``` +Better. + +But then you look at the damage calculation... +```java +int damage() { + return level * attack - max(0, armor - 500) * + attack / 20 + min(weakness * level / 10, 400); +} +``` +What is this even doing? It's hard to tell what was intended and whether there's any bug in there, let alone to extend it. +Refactor it: extract variables. 
+```java +int damage() { + int base = level * attack; + int resistance = max(0, armor - 500) * attack / 20; + int bonus = min(weakness * level / 10, 400); + return base - resistance + bonus; +} +``` +This does the same thing, but now you can tell what it is: there's one component for damage that scales with the level, +one component for taking the resistance into account, and one component for bonus damage. + +You do not have to do these refactorings by hand; any IDE contains tools to perform refactorings such as extracting an expression into a variable or renaming a method. + +--- +#### Exercise +Take a look at the [gilded-rose/](exercises/lecture/gilded-rose) exercise folder. +The code is obviously quite messy. +First, try figuring out what the code is trying to do, and what problems it has. +Then, write down what refactorings you would make to improve the code. You don't have to actually do the refactorings, only to list them, though you can do them for more practice. + +
+<details> +<summary>Proposed solution (click to expand)</summary> + +The code tries to model the quality of items over time. But it has so many special cases and so much copy-pasted code that it's hard to tell. + +Some possible refactorings: +- Turn the `for (int i = 0; i < items.length; i++)` loop into a simpler `for (Item item : items)` loop +- Simplify `item.quality = item.quality - item.quality` into the equivalent but clearer `item.quality = 0` +- Extract `if (item.quality < 50) item.quality = item.quality + 1;` into a method `incrementQuality` on `Item`, which can thus encapsulate this cap at 50 +- Extract numbers such as `50` into named constants such as `MAX_QUALITY` +- Extract repeated checks such as those on the item names into methods on `Item` such as `isPerishable`, and maybe create subclasses of `Item` instead of checking names + +</details> +
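One of the proposed refactorings can be sketched on its own to see the payoff; this is a minimal, hypothetical version of `Item` (the exercise's real class has more fields):

```java
class Item {
    static final int MAX_QUALITY = 50; // extracted named constant

    int quality;

    Item(int quality) {
        this.quality = quality;
    }

    // Extracted method: the cap at MAX_QUALITY is now encapsulated here,
    // so callers can no longer forget the bounds check.
    void incrementQuality() {
        if (quality < MAX_QUALITY) {
            quality++;
        }
    }
}
```

After this refactoring, every place that previously repeated the `if (quality < 50)` check can call `incrementQuality()` instead, and the cap can be changed in a single location.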
+ + +## Where to start in a large codebase? + +Refactorings are useful for specific parts of code, but where to even start if you are given a codebase of millions of lines of code and told to fix a bug? +There may not be documentation, and if there is, it may not be accurate. + +A naïve strategy is to ""move fast and break things"", as [was once Facebook's motto](https://www.businessinsider.com/mark-zuckerberg-on-facebooks-new-motto-2014-5). +The advantage is that you move fast, but... the disadvantage is that you break things. +The latter tends to massively outweigh the former in any large codebase with many users. +Making changes you don't fully understand is a recipe for disaster in most cases. + +An optimistic strategy, often taken by beginners, is to understand _everything_ about the code. +Spend days and weeks reading every part of the codebase. Watch video tutorials about every library the code uses. +This is not useful either, because it takes far too long, and because you will in fact never finish, since others are likely making changes to the code while you learn. +Furthermore, by the time you have read half the codebase, you won't remember exactly what the first parts you looked at contained, and your time will have been wasted. + +Let's see three components of a more realistic strategy: learning as you go, using an IDE, and taking notes. + +### Learning as you go + +Think of how detectives such as [Sherlock Holmes](https://en.wikipedia.org/wiki/Sherlock_Holmes) or [Miss Marple](https://en.wikipedia.org/wiki/Miss_Marple) solve a case. +They need information about the victim, what happened, and any possible suspects, because that is how they will find out who did it. +But they do not start by asking every person around for their entire life story from birth to present. +They do not investigate the full history of every item found at the scene of the crime. 
+While the information they need is somewhere in there, getting it by enumerating all information would take too much time. + +Instead, detectives only learn what they need when they need it. If they find evidence that looks related, they ask about that evidence. +If somebody's behavior is suspect, they look into that person's general history, and if something looks related, they dig deeper for just that detail. + +This is what you should do as well in a large codebase. Learn as you go: only learn what you need when you need it. + +### Using an IDE + +You do not have to manually read through files to find out which class is used where, or which modules call which function. +IDEs have built-in features to do this for you, and those features get better if the language you're using is statically typed. +Want to find who uses a method? Right click on the method's name, and you should find some tool such as ""find all references"". +Do you realize the method is poorly named given how the callers use it? Refactor its name using your IDE, don't manually rename every use. + +One key feature of IDEs that will help you is the _debugger_. +Find the program's ""main"" function, the one called at the very beginning, and put a breakpoint on its first statement. +Run the program with the debugger, and you're ready to start your investigation by following along with the program's flow. +Want to know more about a function call? Step into it. Think that call is not relevant right now? Step over it instead. + +### Taking notes + +You cannot hope to remember all context about every part of a large codebase by heart. +Instead, take notes as you go, at first for yourself. You may later turn these notes into documentation. + +One formal way to take notes is to write _regression tests_. +You do not know what behavior the program should have, but you do know that it works and you do not want to break it. 
+Thus, write a test without assertions, run the test under the debugger, and look at the code's behavior. +Then, add assertions to the test for the current behavior. +This serves both as notes for yourself of what happens when the code is used in a specific way, and as an automated way to check if you broke something while modifying the code later. + +Another formal way to take notes is to write _facades_. +""Facade"" is a design pattern intended to simplify and modularize existing code by hiding complex parts behind simpler facades. +For instance, let's say you are working in a codebase that only provides a very generic ""draw"" method that takes a lot of arguments, but you only need to draw rectangles: +```java +var points = new Point[] { + new Point(0, 0), new Point(x, 0), + new Point(x, y), new Point(0, y) +}; +draw(points, true, false, Color.RED, null, new Logger(), ...); +``` +This is hard to read and maintain, so write a facade for it: +```java +drawRectangle(0, 0, x, y, Color.RED); +``` +Then implement the `drawRectangle` method in terms of the complex `draw` method. +The behavior hasn't changed, but the code is easier to read. +Now, you only need to look at the complex part of the code if you actually need to add functionality related to it. +Reading the code that needs to draw rectangles no longer requires knowledge of the complex drawing function. + +--- +#### Exercise +Take a look at the [pacman/](exercises/lecture/pacman) exercise folder. +It's a cool ""Pac-Man"" game written in Java, with a graphical user interface. +It's fun! Imagine you were asked to maintain it and add features. You've never read its code before, so where do you start? + +First, look at the code, and take some notes: which classes exist, and what do they do? +Then, use a debugger to inspect the code's flow, as described above. +If someone asked you to extend this game to add a new kind of ghost, with a different color and behavior, which parts of the code would you need to change? 
+Finally, what changes could you make to the code to make it easier to add more kinds of ghosts? + +
+<details> +<summary>Proposed solution (click to expand)</summary> + +To add a kind of ghost, you'd need to add a value to the `ghostType` enum, and a class extending `Ghost`. +You would then need to add parsing logic in `MapEditor` for your ghost, and to link the enum and class together in `PacBoard`. + +To make the addition of more ghosts easier, you could start by re-formatting the code to your desired standard, and changing names to be more uniform, such as the casing of `ghostType`. +One task would be to have a single object to represent ghosts, instead of having both `ghostType` and `Ghost`. +It would also probably make sense to split the parsing logic from `MapEditor`, since editing and parsing are not the same job. + +Remember, you might look at this Pac-Man game code thinking it's not as nice as some idealized code that could exist, but unlike the idealized code, this one does exist already, and it works. +Put energy into improving the code rather than complaining about it. + +</details> +
+ +--- + +Remember the rule typically used by scouts: leave the campground cleaner than you found it. +In your case, the campground is code. +Even small improvements can pay off with time, just like monetary investments. +If you improve a codebase by 1% every day, after 365 days, it will be around 38x better than you started. +But if you let it deteriorate by 1% instead, it will be only 0.03x as good. + + +## How should we document changes? + +You've made some changes to legacy code, such as refactorings and bug fixes. Now how do you document this? +The situation you want to avoid is to provide no documentation, lose all knowledge you gained while making these changes, and need to figure it all out again. This could happen to yourself or to someone else. + +Let's see three kinds of documentation you may want to write: for yourself, for code reviewers, and for maintainers. + +### Documenting for yourself + +The best way to check and improve your understanding of a legacy codebase is to teach others about it, which you can do by writing comments and documentation. +This is a kind of ""refactoring"": you improve the code's maintainability by writing the comments and documentation that good code should have. + +For instance, let's say you find the following line in a method somewhere without any explanation: +```java +if (i > 127) i = 127; +``` +After spending some time on it, such as commenting it out to see what happens, you realize that this value is eventually sent to a server which refuses values above 127. +You can document this fact by adding a comment, such as `// The server refuses values above 127`. You now understand the code better. +Then you find the following method: +```java +int indexOfSpace(String text) { ... } +``` +You think you understand what this does, but when running the code, you realize it not only finds spaces but also tabs. 
+After some investigation, it turns out this was a bug that is now a specific behavior on which clients depend, so you must keep it. +You can thus add some documentation: `Also finds tabs. Clients depend on this buggy behavior`. +You now understand the code better, and you won't be bitten by this issue again. + +### Documenting for code reviewers + +You submit a pull request to a legacy codebase. Your changes touch code that your colleagues aren't quite familiar with either. +How do you save your colleagues some time? You don't want them to have to understand exactly what is going on before being able to review your change, +yet if you only give them code, this is what will happen. + +First, your pull request should have a _description_, in which you explain what you are changing and why. +This description can be as long as necessary, and can include remarks such as whether this is a refactoring-only change or a change in behavior, +or why you had to change some code that intuitively looks like it should not need changes. + +Second, the commits themselves can help reviewing if you split your work correctly, or if you rewrite the history once you are done with the work. +For instance, your change may involve a refactoring and a bug fix, because the refactoring makes the bug fix cleaner. +If you submit the change as a single commit, reviewers will need time and energy to read and understand the entire change at once. +If a reviewer is interrupted in the middle of understanding that commit, they will have to start from scratch after the interruption. + +Instead of one big commit, you can submit a pull request consisting of one commit per logical task in the request. +For instance, you can have one commit for a refactoring, and one for a bug fix. This is easier to review, because they can be reviewed independently. 
+This is particularly important for large changes: the time spent reviewing a commit is not linear in the length of the commit, but closer to exponential, +because we humans have limited brain space and usually do not have large chunks of uninterrupted time whenever we'd like. +Instead of spending an hour reviewing 300 modified lines of code, it's easier to spend 10 times 3 minutes reviewing commits of 30 lines at a time. +This also lessens the effects of being interrupted during a review. + +### Documenting for maintainers + +We've talked about documentation for individual bits of code, but future maintainers need more than that to understand a codebase. +Design decisions must be documented too: what did you choose and why? +Even if you plan on maintaining a project yourself for a long time, this saves some work: your future colleagues could each take 5 minutes of your time asking +you the same question about some design decision taken long ago, or you could spend 10 minutes once documenting it in writing. + +At the level of commits, this can be done in a commit message, so that future maintainers can use tools such as [git blame](https://git-scm.com/docs/git-blame) +to find the commit that last changed a line of code and understand why it is that way. + +At the level of modules or entire projects, this can be done with _Architectural Decision Records_. +As their name implies, these are all about recording decisions made regarding the architecture of a project: the context, the decision, and its consequences. +The goal of ADRs is for future maintainers to know not only what choices were made but also why, so that maintainers can make an informed decision on whether to make changes. +For instance, knowing that a specific library for user interfaces was chosen because of its excellent accessibility features informs maintainers that even if they do not like +the library's API, they should pay particular attention to accessibility in any potential replacement. 
Perhaps the choice was made at a time when alternatives had poor accessibility,
and maintainers can revisit that choice if, in their time, there are alternatives that also have great accessibility features.

The context includes user requirements, deadlines, compatibility with previous systems, or any other piece of information that is useful to understand the decision.
For instance, for the lecture notes of a course, the context could include "We must provide lecture notes, as student feedback tells us they are very useful"
and "We want to allow contributions from students, so that they can easily help if they find typos or mistakes".

The decision includes the alternatives that were considered, the arguments for and against each of them, and the reasons for the final choice.
For instance, still in the same context, the alternatives might be "PDF files", "documents on Google Drive", and "documents on a Git repository".
The arguments could then include "PDF files are convenient to read on any device, but they make it hard to contribute".
The final choice might then be "We chose documents on a Git repository because of the ease of collaboration and review, and because GitHub provides a nice preview of Markdown files online".

The consequences are the list of tasks and changes that must be done in the short-to-medium term.
This is necessary to apply the decision, even if it may not be useful in the long term.
For instance, one person might be tasked with converting existing lecture notes to Markdown, and another with creating an organization and a repository on GitHub.

It is important to keep ADRs _close to the code_, such as in the same repository, or in the same organization on a platform like GitHub.
If ADRs are in some unrelated location that only current maintainers know about, they will be of no use to future maintainers.


## How can we quantify changes?
+ +Making changes is not only about describing them qualitatively, but also about telling people who use the software whether the changes will affect them or not. +For instance, if you publish a release that removes a function from your library, people who used that function cannot use this new release immediately. +They need to change their code to no longer use the function you removed, which might be easy if you provided a replacement, or might require large changes if you did not. +And if the people using your library cannot or do not want to update their software, for instance because they have lost the source code, they now have to rewrite their software. + +Let's talk _compatibility_, and specifically three different axes: theory vs practice, forward vs backward, and source vs binary. + +### Theory vs practice + +In theory, any change could be an ""incompatible"" change. +Someone could have copied your code or binary somewhere, and start their program by checking that yours is still the exact same one. Any change would make this check fail. +Someone could depend on the exact precision of some computation whose precision is not documented, and then [fail](https://twitter.com/stephentyrone/status/1425815576795099138) +when the computation is made more precise. + +In practice, we will ignore the theoretical feasibility of detecting changes and choose to be ""reasonable"" instead. +What ""reasonable"" is can depend on the context, such as Microsoft Windows having to provide compatibility modes for all kinds of old software which did questionable things. +Microsoft must do this because one of the key features they provide is compatibility, and customers might move to other operating systems otherwise. +Most engineers do not have such strict compatibility requirements. + +### Forward vs backward + +A software release is _forward compatible_ if clients for the next release can also use it without needing changes. +This is also known as ""upward compatibility"". 
+That is, if a client works on version N+1, forward compatibility means it should also work on version N. +Forward compatibility means that you cannot ever add new features or make changes that break behavior, +since a client using those new features could not work on the previous version that did not have these features. +It is rare in practice to offer forward compatibility, since the only changes allowed are performance improvements and bug fixes. + +A software release is _backward-compatible_ if clients for the previous release can also use it without needing changes. +That is, if a client works on version N-1, backward compatibility means it should also work on version N. +Backward compatibility is the most common form of compatibility and corresponds to the intuitive notion of ""don’t break things"". +If something works, it should continue working. For instance, if you upgrade your operating systems, your applications should still work. + +Backward compatibility typically comes with a set of ""supported"" scenarios, such as a public API. +It is important to define what is supported when defining compatibility, since otherwise some clients could misunderstand what is and is not supported, +and their code could break even though you intended to maintain backward compatibility. +For instance, old operating systems did not have memory protection: any program could read and write any memory, including the operating system's. +This was not something programs should have been doing, but they could. When updating to a newer OS with memory protection, this no longer worked, +but it was not considered breaking backward compatibility since it was never a feature in the first place, only a limitation of the OS. 
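
One practical way to keep a change backward compatible is to add new functionality alongside the old instead of changing existing signatures. Here is a minimal Java sketch of this idea; the class and method names are invented for illustration:

```java
// Hypothetical library class evolving without breaking existing clients.
class Greeter {
    // Version 1 shipped this method; existing clients call it.
    String greet(String name) {
        return greet(name, "Hello");
    }

    // Version 2 adds an overload instead of adding a parameter to the
    // existing method, which would have broken source and binary compatibility.
    String greet(String name, String salutation) {
        return salutation + ", " + name + "!";
    }
}

public class Main {
    public static void main(String[] args) {
        Greeter g = new Greeter();
        // Old call sites keep compiling and behaving as before...
        System.out.println(g.greet("Ada"));
        // ...while new clients can opt into the new feature.
        System.out.println(g.greet("Ada", "Hi"));
    }
}
```

Existing binaries still find the original `greet(String)` method and existing source still compiles, while new clients can call the two-argument overload.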
+ +One extreme example of providing backward compatibility is Microsoft's [App Assure](https://www.microsoft.com/en-us/fasttrack/microsoft-365/app-assure): +if a company has a program that used to work and no longer works after upgrading Windows, Microsoft will fix it, for free. +This kind of guarantee is what allowed Microsoft to dominate in the corporate world; no one wants to have to frequently rewrite their programs, +no matter how much ""better"" or ""nicer"" the new technologies are. If something works, it works. + +### Source vs binary + +Source compatibility is about whether your customers' source code still compiles against a different version of your code. +Binary compatibility is about whether your customers’ binaries still link with a different version of your binary. +These are orthogonal to behavioral compatibility; code may still compile and link against your code even though the behavior at run-time has changed. + +Binary compatibility can be defined in terms of ""ABI"", ""Application Binary Interface"", just like ""API"" for source code. +The ABI defines how components call each other, such as method signatures: method names, return types, parameter types, and so on. +The exact compatibility requirements differ due to the ABI of various languages and platforms. +For instance, parameter names are not a part of Java's ABI, and thus can be changed without breaking binary compatibility. +In fact, parameter names are not a part of Java's API either, and can thus also be changed without breaking source compatibility. + +An interesting example of preserving source but not binary compatibility is C#'s optional parameters. +A definition such as `void Run(int a, int b = 0)` means the parameter `b` is optional with a default value of `0`. +However, this is purely source-based; writing `Run(a)` is translated by the compiler as if one had written `Run(a, 0)`. 
This means that evolving the method from `void Run(int a)` to `void Run(int a, int b = 0)` is compatible at the source level,
because the compiler will implicitly add the new parameter to all existing calls, but not at the binary level, because the method signature changed, so existing binaries will not find the one they expect.

An example of the opposite is Java's generic type parameters, due to type erasure.
Changing a class from `Widget<T>` to `Widget<T, U>` is incompatible at the source level,
since the second type parameter must now be added to all existing uses by hand.
It is however compatible at the binary level,
because generic parameters in Java are erased during compilation.
If the second generic parameter is not otherwise used, binaries will not even know it exists.

---
#### Exercise
Which of these preserves backward compatibility?
- Add a new class
- Make a return type _less_ specific (e.g., from `String` to `Object`)
- Make a return type _more_ specific (e.g., from `Object` to `String`)
- Add a new parameter to a function
- Make a parameter type _less_ specific
- Make a parameter type _more_ specific

**Proposed solution:**

- Adding a new class is binary compatible, since no previous version could have referred to it.
  It is generally considered to be source compatible, except for the annoying scenario in which someone used wildcard imports (e.g., `import org.example.*`)
  for two modules in the same file, and your new class has the same name as another class in another module, at which point the compiler will complain of an ambiguity.
- Making a return type less specific is not backward compatible.
  The signature changes, so binary compatibility is already broken,
  and calling code must be modified to be able to deal with more kinds of values, so source compatibility is also broken.
- Making a return type more specific is source compatible, since code that dealt with the return type can definitely deal with a method that happens to always return a more specific type.
  However, since the signature changes, it is not binary compatible.
- Adding a parameter is neither binary nor source compatible, since the signature changes and code must now be modified to pass an additional argument whenever the function is called.
- Making a parameter type less specific is source compatible, since a function that accepts a general type of parameter can be used by always giving a more specific type.
  However, since the signature changes, it is not binary compatible.
- Making a parameter type more specific is not backward compatible.
  The signature changes, so binary compatibility is already broken,
  and calling code must be modified to always pass a more specific type for arguments, so source compatibility is also broken.
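
To make the return-type cases concrete, here is a small Java sketch with hypothetical names. The caller below was written against an older version of `Item` whose `describe` method returned `Object`; it still compiles unchanged after the return type is narrowed to `String`, while a binary compiled against the old signature would fail to link:

```java
class Item {
    // The older version declared: Object describe()
    // Narrowing the return type to String is source compatible, since every
    // String is an Object, but it changes the method descriptor, so it is
    // not binary compatible.
    String describe() {
        return "an item";
    }
}

public class Main {
    public static void main(String[] args) {
        Item item = new Item();
        // This call site was written against the Object-returning version
        // and compiles unchanged against the new one.
        Object description = item.describe();
        System.out.println(description);
    }
}
```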

---

What compatibility guarantees should you provide, then?
This depends on who your customers are, and asking them is a good first step.
Avoid over-promising; the effort required to maintain very strict compatibility may be more than the benefits you get from the one or two specific customers who need this much compatibility.
Most of your customers are likely to be "reasonable".

Backward compatibility is the main guarantee people expect in practice, even if they do not say so explicitly.
Breaking things that work is viewed poorly.
However, turning a scenario that previously had a "failure" result, such as throwing an exception, into one with a "success" result is typically OK.
Customers typically do not depend on things _not_ working.


## How can we establish solid foundations?

You are asked to run an old Python script that a coworker wrote ages ago.
The script begins with `import simplejson`, and then proceeds to use the `simplejson` library.
You download the library, run the script... and you get a `NameError: name 'scanstring' is not defined`.

Unfortunately, because the script did not specify what _version_ of the library it expected, you now have to figure it out by trial and error.
For crashes such as missing functions, this can be done relatively quickly by going through versions in a binary search.
However, it is also possible that your script will silently give a wrong result with some versions of the library.
For instance, perhaps the script depends on a bug fix made at a specific time,
and running the script with a version of the library older than that will give a result that is incorrect but not obviously so.

Versions are specific, tested, and named releases.
For instance, "Windows 11" is a version, and so is "simplejson 3.17.6".
+Versions can be more or less specific; for instance, ""Windows 11"" is a general product name with a set of features, and some more minor features were added in updates such as ""Windows 11 22H2"". + +Typical components of a version include a major and a minor version number, sometimes followed by a patch number and a build number; +a name or sometimes a codename; a date of release; and possibly even other information. + +In typical usage, changing the major version number is for big changes and new features, +changing the minor version number is for small changes and fixes, and changing the rest such as the patch version number is for small fixes that may not be noticeable to most users, +as well as security patches. + +Versioning schemes can be more formal, such as [Semantic Versioning](https://semver.org/), +a commonly used format in which versions have three main components: `Major.Minor.Patch`. +Incrementing the major version number is for changes that break compatibility, the minor version number is for changes that add features while remaining compatible, +and the patch number is for compatible changes that do not add features. +As already stated, remember that the definition of ""compatible"" changes is not objective. +Some people may consider a change to break compatibility even if others think it is compatible. + +Let's see three ways in which you will use versions: publishing versioned releases, _deprecating_ public APIs if truly necessary, and consuming versions of your dependencies. + +### Versioning your releases + +If you allowed customers to download your source code and compile it whenever they want, +it would be difficult to track who is using what, and any bug report would start with a long and arduous process of figuring out exactly which version of the code is in use. +Instead, if a customer states they are using version 5.4.1 of your product and encounter a specific bug, you can immediately know which code this corresponds to. 
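
To illustrate how `Major.Minor.Patch` versions are ordered, here is a simplified Java sketch; real Semantic Versioning also defines precedence rules for pre-release tags and build metadata, which this comparison ignores:

```java
public class Main {
    // Compares two "Major.Minor.Patch" strings component by component.
    // Simplified sketch: assumes exactly three numeric components.
    public static int compareVersions(String a, String b) {
        String[] as = a.split("\\.");
        String[] bs = b.split("\\.");
        for (int i = 0; i < 3; i++) {
            int cmp = Integer.compare(Integer.parseInt(as[i]), Integer.parseInt(bs[i]));
            if (cmp != 0) return cmp;
        }
        return 0;
    }

    public static void main(String[] args) {
        // Numeric comparison matters: as plain text, "1.10.0" would sort before "1.9.3".
        System.out.println(compareVersions("1.10.0", "1.9.3") > 0);
        // A major version bump outranks any minor or patch number.
        System.out.println(compareVersions("2.0.0", "1.99.99") > 0);
    }
}
```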
+ +Providing specific versions to customers means providing specific guarantees, such as ""version X of our product is compatible with versions Y and Z of the operating system"", +or ""version X of our product will be supported for 10 years with security patches"". + +You do not have to maintain a single version at a time; products routinely have multiple versions under active support, +such as [Java SE](https://www.oracle.com/java/technologies/java-se-support-roadmap.html). + +In practice, versions are typically different branches in a repository. +If a change is made to the ""main"" branch, you can then decide whether it should be ported to some of the other branches. +Security fixes are a good example of changes that should be ported to all versions that are still supported. + +### Deprecating public APIs + +Sometimes you realize your codebase contains some very bad mistakes that lead to problems, +and you'd like to correct them in a way that technically breaks compatibility but still leads to a reasonable experience for your customers. +That is what _deprecation_ is for. + +By declaring that a part of your public surface is deprecated, you are telling customers that they should stop using it, and that you may even remove it in a future version. +Deprecation should be reserved for cases that are truly problematic, not just annoying. +For instance, if the guarantees provided by a specific method force your entire codebase to use a suboptimal design that makes everything else slower, it may be worth removing the method. +Another good example is methods that accidentally make it very easy to introduce bugs or security vulnerabilities due to subtleties in their semantics. + +For instance, Java's `Thread::checkAccess` method was deprecated in Java 17, +because it depends on the Java Security Manager, which very few people use in practice and which constrains the evolution of the Java platform, +as [JEP 411 states](https://openjdk.org/jeps/411). 
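
In Java, deprecation is expressed with the `@Deprecated` annotation, optionally stating since when and whether removal is planned, together with a `@deprecated` Javadoc tag explaining what to use instead. A minimal sketch with invented names:

```java
class TemperatureSensor {
    /**
     * @deprecated Callers routinely confused Fahrenheit with Celsius,
     * causing bugs; use {@link #readCelsius()} instead.
     */
    @Deprecated(since = "2.0", forRemoval = true)
    double readFahrenheit() {
        return readCelsius() * 9.0 / 5.0 + 32.0;
    }

    double readCelsius() {
        // Invented fixed reading to keep the sketch self-contained.
        return 21.0;
    }
}

public class Main {
    public static void main(String[] args) {
        TemperatureSensor sensor = new TemperatureSensor();
        // Deprecated methods still work; the compiler merely warns at the call site.
        System.out.println(sensor.readFahrenheit());
    }
}
```

The `since` and `forRemoval` elements were added to `@Deprecated` in Java 9; on older versions, the bare `@Deprecated` annotation works the same way minus the metadata.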

Here is an example of a less reasonable deprecation from Python:
```
>>> from collections import Iterable
DeprecationWarning: Using or importing the ABCs from 'collections'
instead of from 'collections.abc' is deprecated since Python 3.3,
and in 3.10 it will stop working
```
Sure, having classes in the "wrong" module is not great, but the cost of maintaining backward compatibility is low.
Breaking all code that expects the Abstract Base Classes to be in the "wrong" module is likely more trouble than it's worth.

Deprecating in practice means thinking about whether the cost is worth the risk, and if it is, using your language's way of deprecating,
such as `@Deprecated(...)` in Java or `[Obsolete(...)]` in C#.

### Consuming versions

Using specific versions of your dependencies allows you to have a "known good" environment that you can use as a solid base to work from.
You can update dependencies as needed, using version numbers as a hint for what kind of changes to expect.

This does not mean you should 100% trust all dependencies to follow compatibility guidelines such as semantic versioning.
Even those who try to follow such guidelines can make mistakes, and updating a dependency from 1.3.4 to 1.3.5 could break your code due to such a mistake.
But at least you know that your code worked with 1.3.4 and you can go back to it if needed.
The worst-case scenario, which is unfortunately common with old code, is when you cannot build a codebase anymore
because it does not work with the latest versions of its dependencies and you do not know which versions it expects.
You then have to spend lots of time figuring out which versions work and do not work, and writing them down so future you does not have to do it all over again.

In practice, in order to manage dependencies, software engineers use package managers, such as Gradle for Java, which fetch dependencies given their names and versions:
```
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.1'
```
This makes it easy to get the right dependency given its name and version, without having to manually find it online.
It's also easy to update dependencies; package managers can even tell you whether a newer version is available.
You should however be careful with "wildcard" versions:
```
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.+'
```
Such a version can cause your code to break, because the build will silently use a newer version when one is available, which could contain a change that your code does not handle.
Worse, you will then have to spend time figuring out which version you were previously using, since it is not written down anywhere.

For big dependencies such as operating systems, one way to easily save the entire environment as one big "version" is to use a virtual machine or container.
Once it is built, you know it works, and you can distribute your software on that virtual machine or container.
This is particularly useful for software that needs to be exactly preserved for long periods of time, such as scientific artifacts.


## Summary

In this lecture, you learned:
- Evolving legacy code: goals, refactorings, and documentation
- Dealing with large codebases: learning as you go, improving the code incrementally
- Quantifying and describing changes: compatibility and versioning

You can now check out the [exercises](exercises/)!

CS-305: Software engineering