values = ...;

assertThat(values,
    either(empty())
        .or(contains(1, 2, 3)));
```

If this assertion fails, Hamcrest's exception message states "Expected: (an empty collection or iterable containing `[<1>, <2>, <3>]`) but: was `<[42]>`".

Sometimes we need to test that a piece of code _fails_ in some circumstances, such as validating arguments properly and throwing an exception if an argument has an invalid value.
This is what `assertThrows` is for:

```java
var ex = assertThrows(
    SomeException.class,
    () -> someOperation(42)
);

// ... test 'ex' ...
```

The first argument is the type of exception we expect; the second is a function that should throw that type of exception.
If the function does not throw an exception, or throws an exception of another type, `assertThrows` will throw an exception to indicate the test failed.
If the function does throw an exception of the right type, `assertThrows` returns that exception so that we can test it further if needed, such as asserting some fact about its message.

---
#### Exercise
It's your turn now! Open [the in-lecture exercise project](exercises/lecture) and test `Functions.java`.
Start by testing valid values for `fibonacci`, then test that it rejects invalid values.
For `split` and `shuffle`, remember that Hamcrest has many matchers and has documentation.

**Example solution (click to expand)**

You could test `fibonacci` using the `is` matcher we discussed earlier for numbers such as 1 and 10, and test that it throws an exception with numbers below `0` using `assertThrows`.

To test `split`, you could use Hamcrest's `contains` matcher, and for the shuffling function, you could use `arrayContainingInAnyOrder`.

We provide some [examples](exercises/solutions/lecture/FunctionsTests.java).

---

**Should you test many things in one method, or have many small test methods?**
Think of what the test output will look like if you combine many tests in one method.
If the test method fails, you will only get one exception message, about the first failure in the method, and will not know whether the rest of the method would pass.
Having big test methods also means the fraction of passing tests is less representative of the overall code correctness.
In the extreme, if you wrote all assertions in a single method, a single bug in your code would lead to 0% of tests passing.
Thus, you should prefer small test methods that each test one "logical" concept, which may need one or multiple assertions.
This does not mean one should copy-paste large blocks of code between tests; instead, share code using features such as JUnit's `@BeforeAll`, `@AfterAll`, `@BeforeEach`, and `@AfterEach` annotations.
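For example, setup shared by several tests can go in a `@BeforeEach` method, which JUnit runs before every test method. A sketch, assuming a hypothetical `Database` class under test:

```java
import org.junit.jupiter.api.*;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

class DatabaseTests {
    private Database db; // 'Database' is a hypothetical class under test

    @BeforeEach
    void setUp() {
        // Runs before each test method, so every test starts from a fresh state
        db = new Database();
        db.insert("alice");
    }

    @AfterEach
    void tearDown() {
        // Runs after each test method, e.g., to release resources
        db.close();
    }

    @Test void insertedValueIsPresent() {
        assertThat(db.contains("alice"), is(true));
    }

    @Test void absentValueIsNotPresent() {
        assertThat(db.contains("bob"), is(false));
    }
}
```

Each test gets its own fresh `Database`, so tests cannot affect each other through shared state.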

**How can you test private methods?** You **don't**.
If you did, the tests would have to be rewritten every time the implementation changes.
Think back to the SQLite example: the code would be impossible to change if any change in implementation details required modifying even a fraction of the 90 million lines of tests.

**What standards should you have for test code?**
The same as for the rest of the code.
Test code should be in the same version control repository as other code, and should be reviewed just like other code when making changes.
This also means tests should have proper names: not `test1` or `testFeatureWorks`, but specific names that give information in an overview of tests, such as `nameCanIncludeThaiCharacters`.
Avoid names that use vague descriptions such as "correctly", "works", or "valid".

## What metric can one use to evaluate tests?

What makes a good test?
When reviewing a code change, how does one know whether the existing tests are enough, or whether there should be more or fewer tests?
When reviewing a test, how does one know if it is useful?

There are many ways to evaluate tests; we will focus here on the most common one, _coverage_.
Test coverage is defined as the fraction of code executed by tests compared to the total amount of code.
Without tests, it is 0%. With tests that execute each part of the code at least once, it is 100%.
But what is a "part of the code"? What should be the exact metric for coverage?

One naïve way to do it is _line_ coverage. Consider this example:

```java
int getFee(Package pkg) {
    if (pkg == null) throw ...;
    int fee = 10;
    if (pkg.isHeavy()) fee += 10;
    if (pkg.isInternational()) fee *= 2;
    return fee;
}
```

A single test with a non-null package that is both heavy and international will cover every line.
This may sound great, since the coverage is 100% and easy to obtain, but it is not.
If the `throw` were on a different line instead of being on the same line as the `if`, line coverage would no longer be 100%.
It is not a good idea to define a coverage metric that depends on code formatting.

Instead, the simplest metric for test coverage is _statement_ coverage.
In our example, the `throw` statement is not covered but all others are, and this does not change based on code formatting.
Still, reaching almost 100% statement coverage with a single test for the code above seems wrong.
There are three `if` statements, indicating the code performs different actions based on a condition, yet we ignored the implicit `else` blocks of those ifs.

A more advanced form of coverage is _branch_ coverage: the fraction of branch choices that are covered.
For each branch, such as an `if` statement, 100% branch coverage requires covering both choices.
In the code above, branch coverage for our single example test is 50%: we have covered exactly half of the choices.

Reaching 100% can be done with two additional tests: one with a null package, and one with a package that is neither heavy nor international.
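To make this concrete, here is a self-contained sketch of those three tests as plain assertions; the `Pkg` class is a made-up stand-in for the lecture's `Package`, and the null check's exception type is assumed to be `IllegalArgumentException`:

```java
// Minimal stand-in for the lecture's Package type, for illustration only
class Pkg {
    final boolean heavy, international;
    Pkg(boolean heavy, boolean international) {
        this.heavy = heavy;
        this.international = international;
    }
    boolean isHeavy() { return heavy; }
    boolean isInternational() { return international; }
}

public class BranchCoverageDemo {
    static int getFee(Pkg pkg) {
        if (pkg == null) throw new IllegalArgumentException("pkg must not be null");
        int fee = 10;
        if (pkg.isHeavy()) fee += 10;
        if (pkg.isInternational()) fee *= 2;
        return fee;
    }

    public static void main(String[] args) {
        // Test 1: heavy and international covers the "true" side of all three ifs
        if (getFee(new Pkg(true, true)) != 40) throw new AssertionError();
        // Test 2: neither heavy nor international covers the implicit "else" sides
        if (getFee(new Pkg(false, false)) != 10) throw new AssertionError();
        // Test 3: a null package covers the throwing branch
        boolean threw = false;
        try { getFee(null); } catch (IllegalArgumentException e) { threw = true; }
        if (!threw) throw new AssertionError();
        System.out.println("all three branch-coverage tests pass");
    }
}
```

Together, these three tests exercise both sides of every branch, which is exactly what 100% branch coverage requires.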

But let us take a step back for a moment and think about what our example code can do:

*(figure: the possible execution paths through `getFee`)*

There are five paths through the code, one of which fails.
Yet, with branch coverage, we could declare victory after only three tests, leaving two paths unexplored.
This is where path coverage comes in. Path coverage is the most advanced form of coverage, counting the fraction of paths through the code that are executed.
Our three tests cover 60% of paths, i.e., 3 out of 5. We can reach 100% by adding tests for the two uncovered paths: a package that is heavy but not international, and one that is international but not heavy.

Path coverage sounds very nice in theory.
But in practice, it is often infeasible, as the following example makes obvious:

```java
while (true) {
    var input = getUserInput();
    if (input.length() <= 10) break;
    tellUser("No more than 10 chars");
}
```

The maximum path coverage obtainable for this code is _zero_.
That's because there is an infinite number of paths: the loop could execute once, or twice, or thrice, and so on.
Since one can only write a finite number of tests, path coverage is stuck at 0%.

Even without infinite loops, path coverage is hard to obtain in practice.
With just 5 independent `if` statements that do not return early or throw, one must write 2^5 = 32 tests.
If 1/10th of the lines of code are `if` statements, a 5-million-line program has more paths than there are atoms in the universe.
And 5 million lines is well below what some programs have in practice, such as browsers.

There is thus a tradeoff in coverage between feasibility and confidence.
Statement coverage is typically easy to obtain but does not give that much confidence, whereas path coverage can be impossible to obtain in practice but gives a lot of confidence.
Branch coverage is a middle ground.

It is important to note that coverage is not everything.
We could cover 100% of the paths in our "get fee" function above with 5 tests, but if those 5 tests do not actually check the value returned by the function, they are not useful.
Coverage is a metric that should help you decide whether additional tests would be useful, but it does not replace human review.

---
#### Exercise
Run your tests from the previous exercise with coverage.
You can do so either from the command line or from your favorite IDE, which should have a "run tests with coverage" command next to "run tests".
Note that using the command line will run the [JaCoCo](https://www.jacoco.org/jacoco/) tool, which is a common way to get code coverage in Java.
If you use an IDE, you may use the IDE's own code coverage tool, which could have minor differences in coverage compared to JaCoCo in some cases.
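From the command line, this typically looks like the following (a sketch; it assumes a Gradle project with the JaCoCo plugin applied, which is a common setup):

```sh
# Run the tests and generate a coverage report
./gradlew test jacocoTestReport
# The HTML report usually ends up under build/reports/jacoco/
```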

## When to test?

Up until now we have assumed tests are written after development, before the code is released.
This is convenient, since the code being tested already exists.
But it has the risk of duplicating any mistakes found in the code: if an engineer did not think of an edge case while writing the code,
they are unlikely to think about it while writing the tests immediately afterwards.
It's also too late to fix the design: if a test case reveals that the code does not work because its design needs fundamental alterations,
this will likely have to be done quickly under pressure due to a deadline, leading to a suboptimal design.

If we simplify a product lifecycle to its development and its release, there are three times at which we could test:

*(figure: testing before development, after development, or after release)*

The middle one is the one we have seen already. The two others may seem odd at first glance, but they have good reasons to exist.

Testing before development is commonly known as **test-driven development**, or _TDD_ for short, because the tests "drive" the development, specifically the design of the code.
In TDD, one first writes tests, then the code.
After writing the code, one can run the tests and fix any bugs.
This forces programmers to think before coding, instead of writing the first thing that comes to mind.
It provides instant feedback while writing the code, which can be very gratifying: write some of the code, run the tests, and some tests now pass!
This gives a kind of progress indication. It's also not too late to fix the design, since the design does not exist yet.

The main downside of TDD is that it requires a higher time investment, and may even lead to missed deadlines.
This is because the code under test must be written regardless of how many tests are written:
if too much time is spent writing tests, there won't be enough time left to write the code.
When testing after development, this is not a problem, because one can stop writing tests at any time since the code already exists,
at the cost of fewer tests and thus less confidence in the code.
Another downside of TDD is that the design must be known upfront, which is fine when developing a module according to customer requirements but not when prototyping, such as for research code.
There is no point in writing a comprehensive test suite for a program if that program's very purpose will change the next day after some thinking.

Let us now walk through a TDD example step by step.
You are a software engineer developing an application for a bank.
Your first task is to implement money withdrawal from an account.
The bank tells you that "users can withdraw money from their bank account".
This leaves you with a question, which you ask the bank: "can a bank account have a balance below zero?".
The bank answers "no", that is not possible.

You start by writing a test:

```java
@Test void canWithdrawNothing() {
    var account = new Account(100);
    assertThat(account.withdraw(0), is(0));
}
```

The `new Account` constructor and the `withdraw` method do not exist yet, so you create "skeleton" code that is only enough to make the tests _compile_, not pass:

```java
class Account {
    Account(int balance) { }
    int withdraw(int amount) { throw new UnsupportedOperationException("TODO"); }
}
```

You can now add another test for the "balance below zero" question you had:

```java
@Test void noInitWithBalanceBelow0() {
    assertThrows(IllegalArgumentException.class, () -> new Account(-1));
}
```

This test does not require more methods in `Account`, so you continue with another test:

```java
@Test void canWithdrawLessThanBalance() {
    var account = new Account(100);
    assertThat(account.withdraw(10), is(10));
    assertThat(account.balance(), is(90));
}
```

This time you need to add a `balance` method to `Account`, with the same body as `withdraw`. Again, the point is to make the tests compile, not pass yet.

You then add one final test for partial withdrawals:

```java
@Test void partialWithdrawIfLowBalance() {
    var account = new Account(10);
    assertThat(account.withdraw(20), is(10));
    assertThat(account.balance(), is(0));
}
```

Now you can run the tests... and see them all fail! This is normal, since you did not actually implement anything yet.
You can now implement `Account` and run the tests every time you make a change until they all pass.
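One possible implementation that makes these four tests pass looks like this (a sketch; your own may well differ):

```java
class Account {
    private int balance;

    Account(int balance) {
        // The bank said balances cannot be below zero
        if (balance < 0) throw new IllegalArgumentException("balance cannot be negative");
        this.balance = balance;
    }

    int balance() {
        return balance;
    }

    /** Withdraws up to 'amount', returning how much was actually withdrawn. */
    int withdraw(int amount) {
        int withdrawn = Math.min(amount, balance);
        balance -= withdrawn;
        return withdrawn;
    }
}
```

Note that `withdraw` returns the amount actually withdrawn, so a withdrawal larger than the balance behaves as the partial-withdrawal test expects.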

Finally, you go back to your customer, the bank, and ask what is next.
They give you another requirement they had forgotten about: the bank can block accounts, and withdrawing from a blocked account has no effect.
You can now translate this requirement into tests, adding code as needed to make the tests compile, then implement the code.
Once you finish, you go back to asking for requirements, and so on until your application meets all the requirements.
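Following the TDD cycle, this new requirement could first become a test like the following sketch; the `block` method is hypothetical and does not exist yet, so you would add a skeleton for it to make the test compile:

```java
@Test void withdrawFromBlockedAccountHasNoEffect() {
    var account = new Account(100);
    account.block(); // hypothetical new method, added as a skeleton first
    assertThat(account.withdraw(10), is(0));
    assertThat(account.balance(), is(100));
}
```

Only once this test compiles and fails would you implement the blocking logic itself.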

----
#### Exercise
It's your turn now! In [the in-lecture exercise project](exercises/lecture) you will find `PeopleCounter.java`, which is documented but not implemented.
Write tests first, then implement the code and fix your code if it doesn't pass the tests, in a TDD fashion.
First, think of what tests to write, then write them, then implement the code.

**Example tests (click to expand)**

You could have five tests: the counter initializes to zero, the "increment" method increments the counter,
the "reset" method sets the counter to zero, the "increment" method does not increment beyond the maximum,
and the maximum cannot be below zero.

We provide [sample tests](exercises/solutions/lecture/PeopleCounterTests.java) and [a reference implementation](exercises/solutions/lecture/PeopleCounter.java).

----

Testing after deployment is commonly known as **regression testing**. The goal is to ensure old bugs do not come back.

When confronted with a bug, the idea is to first write a failing test that reproduces the bug, then fix the bug, then run the test again to show that the bug is fixed.
It is crucial to run the test before fixing the bug to ensure it actually fails.
Otherwise, the test might not actually reproduce the bug, and will "pass" after the bug fix only because it was already passing before, providing no useful information.

Recall the SQLite example: all of those 90 million lines of code show that a very long list of possible bugs will not appear again in any future release.
This does not mean there are no bugs left, but that many if not all common bugs have been removed, and that the remaining ones are most likely unusual edge cases that nobody has encountered yet.


## How can one test entire modules?

Up until now we have seen tests for pure functions, which have no dependencies on other code.
Testing them is useful to gain confidence in their correctness, but not all code is structured as pure functions.

Consider the following function:

```java
/** Downloads the book with the given ID
 * and prints it to the console. */
void printBook(String bookId);
```

How can we test this? First off, the function returns `void`, i.e., nothing, so what can we even test?
The documentation also mentions downloading data, but from where does this function download it?

We could test this function by passing a book ID we know to be valid and checking the output.
However, that book could one day be removed, or have its contents updated, invalidating our test.

Furthermore, tests that depend on the environment, such as the book repository this function uses, cannot easily exercise edge cases.
How should we test what happens if the book content is malformed? Or if the Internet connection drops after downloading the table of contents but before downloading the first chapter?

One could design _end-to-end tests_ for this function: run the function in a custom environment, such as a virtual machine whose network requests are intercepted,
and parse its output from the console, or perhaps redirect it to a file.
While end-to-end testing is useful, it requires considerable time and effort, and it is infrastructure that must be maintained.

Instead, let's address the root cause of the problem: the input and output of `printBook` are _implicit_, when they should be _explicit_.

Let's make the input explicit first, by designing an interface for HTTP requests:

```java
interface HttpClient {
    String get(String url);
}
```

We can then give an `HttpClient` as a parameter to `printBook`, which will use it instead of doing HTTP requests itself.
This makes the input explicit, and also makes the `printBook` code more focused on the task it's supposed to do rather than on the details of HTTP requests.

Our `printBook` function with an explicit input thus looks like this:

```java
void printBook(
    String bookId,
    HttpClient client
);
```

This process of making dependencies explicit and passing them as inputs is called **dependency injection**.

We can then test it with whatever HTTP responses we want, including exceptions, by creating a fake HTTP client for tests:

```java
var fakeClient = new HttpClient() {
    @Override
    public String get(String url) { ... }
};
```

Meanwhile, in production code we will implement HTTP requests in a `RealHttpClient` class so that we can call `printBook(id, new RealHttpClient(...))`.

We could make the output explicit in the same way, by creating a `ConsolePrinter` interface that we pass as an argument to `printBook`.
However, we can instead change the method to return the text, which is often simpler:

```java
String getBook(
    String bookId,
    HttpClient client
);
```

We can now test the result of `getBook`, and in production code feed it to `System.out.println`.
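As a concrete illustration, here is a self-contained sketch of such a test; the body of `getBook` and its URL scheme are made up for the example, since the real implementation is not shown in the lecture:

```java
interface HttpClient {
    String get(String url);
}

public class GetBookDemo {
    // Toy implementation of getBook, for illustration only
    static String getBook(String bookId, HttpClient client) {
        // Hypothetical URL scheme for the book repository
        return client.get("https://books.example.com/" + bookId);
    }

    public static void main(String[] args) {
        // HttpClient has a single method, so a lambda can serve as the fake
        HttpClient fakeClient = url -> "Chapter 1: Call me Ishmael...";

        // The test fully controls the "downloaded" content; no network needed
        String text = getBook("moby-dick", fakeClient);
        if (!text.startsWith("Chapter 1")) throw new AssertionError("unexpected book text");
        System.out.println("fake-client test passed");
    }
}
```

The same lambda trick makes it easy to simulate edge cases, such as a client that throws an exception to mimic a dropped connection.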

Adapting code by injecting dependencies and making outputs explicit enables us to test more code with "simple" tests rather than complex end-to-end tests.
While end-to-end tests would still be useful to ensure we pass the right dependencies and use the outputs in the right way, manual testing of end-to-end scenarios already provides
a reasonable amount of confidence. For instance, if the code is not printing to the console at all, a human will definitely notice it.

This kind of change can be applied recursively until only "glue code" between modules and low-level primitives remains untestable.
For instance, a "UDP client" class can take an "IP client" interface as a parameter, so that the UDP functionality is testable.
The implementation of the "IP client" interface can itself take a "Data client" interface as a parameter, so that the IP functionality is testable.
The implementations of the "Data client" interface, such as Ethernet or Wi-Fi, will likely need end-to-end testing, since they do not themselves rely on other local software.

---
#### Exercise
It's your turn now! In [the in-lecture exercise project](exercises/lecture) you will find `JokeFetcher.java`, which is not easy to test in its current state.
Change it to make it testable, write tests for it, and change `App.java` to match the `JokeFetcher` changes and preserve the original program's functionality.
Start by writing an interface for an HTTP client, implement it by moving existing code around, and use it in `JokeFetcher`. Then add tests.

**Suggestions (click to expand)**

The changes necessary are similar to those we discussed above, including injecting an `HttpClient` dependency and making the function return a `String`.
We provide [an example `JokeFetcher`](exercises/solutions/lecture/JokeFetcher.java), [an example `App`](exercises/solutions/lecture/App.java),
and [tests](exercises/solutions/lecture/JokeFetcherTests.java).

----

If you need to write lots of different fake dependencies, you may find _mocking_ frameworks such as [Mockito](https://site.mockito.org/) for Java useful.
These frameworks enable you to write a fake `HttpClient`, for instance, like this:

```java
var client = mock(HttpClient.class);
when(client.get(anyString())).thenReturn("Hello");
// there are also methods to throw an exception, check that specific calls were made, etc.
```

There are other kinds of tests we have not talked about in this lecture, such as performance testing, accessibility testing, usability testing, and so on.
We will see some of them in future lectures.


## Summary

In this lecture, you learned:
- Automated testing, its basics, some good practices, and how to adapt code to make it testable
- Code coverage as a way to evaluate tests, including statement coverage, branch coverage, and path coverage
- When tests are useful, including testing after development, TDD, and regression tests

You can now check out the [exercises](exercises/)!

*CS-305: Software engineering*
+"# Mobile Platforms
+
+This lecture's purpose is to give you a high-level picture of what the universe of mobile applications and devices is like. You will read about:
+
+* Differences between desktops and mobile devices w.r.t. applications, security, energy, and other related aspects
+* Challenges and opportunities created by mobile platforms
+* Brief specifics of the Android stack and how applications are structured
+* A few ideas for offering users a good experience on their mobile
+* The ecosystem that mobile apps plug into
+
+## From desktops to mobiles
+
+Roughly every decade, a new, lower priced computer class forms, based on a new programming platform, network, and interface. This results in new types of usage, and often the establishment of a new industry. This is known as [Bell's Law](https://en.wikipedia.org/wiki/Bell%27s_law_of_computer_classes).
+
+With every new computer class, the number of computers per person increases drastically. Today we have clouds of vast data centers, and perhaps an individual computer, like our laptop, that we use to be productive. On top of that come several computer devices per individual, like phones, wearables, and smart home items, which we use for entertainment, communication, quality of life, and so on.
+
+It is in this context that mobile software development becomes super-important.
+
+We said earlier that, no matter what job you will have, you will write code. We can add to that: you will likely write code for mobile devices. There are more than 15 billion mobile devices operating worldwide, and that number is only going up. As Gordon Bell said, this leads to new usage patterns.
+
+We access the Internet more often from our mobile than our desktop or laptop. Most of the digital content we consume we do so on mobiles. We spend hours a day on our mobile, and the vast majority of that time we spend in apps, not on websites.
+
+A simple example of a major change in how we use computing and communication is social media. Most of the world's population uses it. It changes how we work. Even the professional workforce is increasingly dependent on mobiles, for this reason.
+
### Mobile vs. desktop: Applications

There are many differences between how we write applications for a mobile device vs. a desktop computer. On a desktop, applications can do pretty much whatever they want, whereas, on a mobile, each app is super-specialized. On the desktop, users explicitly start applications; on mobile, the difference between running or not is fluid: apps can be killed at any time, and they need to be ready to restart.

On a desktop, you typically have multiple applications active in the foreground, with multiple windows on-screen. The mobile experience is different: a user's interaction with an app doesn't always begin in the same place; rather, the user's journey often begins non-deterministically. As a result, a mobile app has a more complex structure than a traditional desktop application, with multiple types of components and entry points.

The execution model on mobiles is more cooperative than on a desktop. For example, a social media app allows you to compose an email, and does so by reusing the email app. Another example is something like WhatsApp, which allows you to take pictures (and does so by asking the Photos app to do it). In essence, apps request services from other apps, and they build upon the functionality of others, which is fundamentally different from the desktop application paradigm.

### Mobile vs. desktop: Operating environment

One of the biggest differences is in the security model. Think of your parents' PC at home or in an Internet café: there are potentially multiple users that don't trust each other, each with specific file permissions; every application by default inherits all of a user's permissions, and all applications are trusted to run with the user's privileges alongside each other. The operating system prevents one application from overwriting others, but does not protect the I/O resources (e.g., files). One could fairly say that security is somewhat of an afterthought on the desktop.

A mobile OS has considerably stronger isolation. The assumption here is that users might naively install malicious apps, and the goal is to protect users' data (and privacy) even when they do stupid things. So, each mobile app is sandboxed separately.

When you install a mobile app, it will ask the device for the permissions it needs, like access to contacts, camera, microphone, location information, SMS, WiFi, user accounts, body sensors, etc.
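On Android, for instance, an app declares the permissions it needs in its manifest; a sketch (the package name is made up):

```xml
<!-- AndroidManifest.xml: the app declares upfront which permissions it needs -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```

For sensitive permissions such as these, modern Android additionally prompts the user at runtime before granting them.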

A mobile device is more constrained, e.g., you don't really get "root" access. It provides strong hardware-based isolation and powerful authentication features (like face recognition). E.g., for the iPhone's FaceID, the face scan pattern is encrypted and sent to a secure hardware "enclave" in the CPU, to make sure that stored facial data is inaccessible to any party, including Apple.

Power management is a first-class citizen in a mobile OS. While PCs use high-power CPUs and GPUs with heatsinks, smartphones and tablets are typically not plugged in, so they cannot provide sustained high power. On a mobile, the biggest energy hog is typically the screen, whereas on a desktop it is the CPU and GPU.

Desktop OSes tend to be generic, whereas mobile OSes specialize for a given set of mobile devices, so they can have strict hardware requirements. This leads to less backward compatibility: the latest apps will not run on older versions of the OS, and new versions of a mobile OS won't run on older devices.

There is also orders-of-magnitude less storage on mobiles, and orders-of-magnitude less memory. Plus, you cannot easily expand or modify storage and memory.

Mobile networking tends to be more intermittent than in-wall connectivity. An Ethernet connection gives you Gbps, while a WiFi connection gives you Mbps.

On a desktop, input typically comes from a keyboard and a mouse, whereas mobiles have small touch keyboards, voice commands, complex gestures, etc. Output is also limited on mobiles, so they often communicate with other devices to achieve their output.

We are witnessing today a convergence between desktop and mobile OSes, which will gradually make desktops disappear, and computing will become increasingly embedded.


### Challenges and opportunities

This new world of mobile presents both opportunities and challenges.

Users are far more numerous, and more diverse. They have widely differing computer skills. Developers need to focus on ease of use, internationalization, and accessibility.

Platforms are more diverse (think phones, wearables, TVs, cars, e-book readers). Different vendors take different approaches to dealing with this diversity: Apple has the "walled garden" approach, while Android is more like the wild west: Android is open-source, so OEMs (original equipment manufacturers) can customize Android for their devices, and consequently Android runs on tens of thousands of device models today.

Mobiles get interesting new kinds of inputs, from multi-touch screens, accelerometers, and GPS. But they also face serious limitations in terms of screen size, processor power, and battery capacity.

Mobile devices need to be (and can be) managed more tightly. Today you can remote-lock a device, you can locate the device, and with software like MDM (mobile device management) you can do everything on the device remotely. Some mobile carriers even block users from installing certain apps on their devices.

App stores (or Play stores) are digital storefronts for content consumed on the device. Today, the success of a mobile platform depends heavily on its app store, which becomes a central hub for content to be consumed on that particular mobile platform. The operator of the store controls the apps published in the store (e.g., disallowing sexually explicit content, violence, hate speech, and anything illegal), which means that the developer needs to be aware of the rules and laws in the places where the app will be used. Operators typically use automated tools and human reviewers to check apps for malware and terms-of-service violations.

+## How does a mobile operating environment work?
+
+A mobile device aims to achieve seemingly contradictory goals: On the one hand, it aims for greater integration than a desktop, to create a more cooperative ecosystem in which to enable apps to provide services to each other. On the other hand, it aims for greater isolation, it wants to offer a stronger sandbox, to protect user data, and to restrict apps from interacting in more complex ways
+
+Each mobile OS makes its own choices for how to do this. In this lecture, we chose to focus on Android, for three reasons:
+
+* it is open-source, so it's easier to understand what it really does
+* over 70% of mobiles use Android, there are more than 2.5 billion Android users and more than 3 billion active Android devices, so this is very real
+* we will use it in the follow-on course [Software Development Project](https://dslab.epfl.ch/teaching/sweng/proj)
+
+Android is based on Linux, and things like multi-threading and low-level memory management are all provided by the Linux kernel. There is a layer, called the hardware abstraction layer (HAL), that provides standard interfaces to specific hardware capabilities, such as the camera or the Bluetooth module. Whenever an application makes a call to access device hardware, Android loads a corresponding library module from the HAL for that hardware component.
+
+Above the HAL, there is the Android runtime (ART) and the Native C/C++ libraries.
+
+The ART provides virtual machines for executing DEX (Dalvik executable) bytecode, which is specially designed to have a minimal memory footprint. Java code gets turned into DEX bytecode, which can run on the ART.
+
+Android apps can be written in Kotlin, Java, or even C++. The Android SDK (software development kit) tools compile code + resource files into a so-called Android application package (APK), which is an archive used to install the app. When publishing to the Google Play app store, one generates instead an Android app bundle (ABB), and then Google Play itself generates optimized APKs for the target device that is requesting installation of the app. This is a more practical approach than having the developer produce individual APKs for each device.
+
+Each Android app lives in its own security sandbox. In Linux terms, each app gets its own user ID. Permissions for all the files accessed by the app are set so that only the assigned user ID can access them. It is possible for apps to share data and access each other's files, in which case they get the same Linux user ID; the apps however must be from the same developer (i.e., signed with the same developer certificate).
+
+Each app runs in its own Linux process. By default, an app can access only the components that it needs to do its work and no more (this follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)). An app can access the device's location, camera, Bluetooth connection, etc. as long as the phone user has explicitly granted it these permissions. Each app has its own instance of the ART, with its own VM to execute DEX bytecode. In other words, apps don't run inside a common VM.
+
+In Android, the UI is privileged. Foreground and UI threads get higher priority in CPU scheduling than other threads. Android uses process containers (a.k.a. [Linux cgroups](https://en.wikipedia.org/wiki/Cgroups)) to allocate a higher percentage of the CPU to them. When users switch between apps, Android keeps non-foreground apps (i.e., not visible to the user) in a cache (in Linux terms, the corresponding processes do not terminate) and, if the user returns to the app, the process is reused, which makes app switching faster.
+
+Many core system components (like the ART and HAL) require native libraries written in C/C++. Android provides Java APIs to expose the functionality of these native libraries to apps. This Java APIs framework is layered on top of the ART and native C/C++ libraries. Apps written in C/C++ can use the native libraries directly (this requires the Android native development kit, or NDK).
+
+Within the Java API framework, a number of Android features are provided through Java APIs that serve as key building blocks for apps. For example, the View System is used for building the UI; the Resource Manager is used by apps for accessing localized strings or graphics; the Activity Manager handles the lifecycle of apps and provides a common navigation back stack (more on this below); and, as a final example, Content Providers enable apps to access data from other apps (e.g., a social media app accessing the Contacts app) or to share their own data.
+
+On top of this entire stack are the apps. Android comes with core apps for email, SMS, calendaring, web browsing, contacts, etc., but these apps have no special status: they can be replaced with third-party apps. The point of shipping them with Android is to provide key capabilities to other apps out of the box (e.g., allow multiple apps to have the functionality of sending an SMS without having to write the code for it).
+
+An Android app is a collection of components, with each component providing an entry point to the app. There are 4 main components:
+
+* An _Activity_ provides an entry point with a UI, for interacting with the user. An activity represents a single screen with a user interface. For example, an email app has an activity to show a list of new emails, another activity to compose an email, another activity to read emails, and so on -- activities work together to form the email app, but each one is independent of the others. Unlike in desktop software, other apps can start any one of these activities if the email app allows it, e.g., the Camera app may start the compose-email activity in order to send a picture by email.
+* A _Service_ is a general-purpose entry point without a UI. This essentially keeps an app running in the background (e.g., play music in the background while the user is in a different app, or fetch data over the network without blocking user interaction with an activity). A service can also be started by another component.
+* A _Broadcast Receiver_ enables Android to deliver events to apps out-of-band, e.g., an app may set a timer and ask Android to wake it up at some point in the future; this way, the app need not run non-stop until then. So broadcasts can be delivered even to apps that aren't currently running. Broadcast events can be initiated by the OS (e.g., the screen turned off, battery is low, a picture was taken) or by apps (e.g., some data has been downloaded and is available for other apps to use). Broadcast receivers have no UI, but they can create a status bar notification to alert the user.
+* A _Content Provider_ manages data that is shared among apps. Such data can be local (e.g., file system, SQLite DB) or remote on some server. Through a content provider, apps can query / modify the data (e.g., the Contacts data is a content provider: any app with the proper permissions can get contact info and can write contact info).
+
+Remember that any app can cause another app’s component to start. For instance, if WhatsApp wants to take a photo with the camera, it can ask that an activity be started in the Camera app, without the WhatsApp developer having to write the code to take photos--when done, the photo is returned to WhatsApp, and to the user it appears as if the camera is actually a part of WhatsApp.
+
+To start a component, Android starts the process for the target app (if not already running) and instantiates the relevant classes (this is because, e.g., the photo-taking activity runs in the Camera process, not WhatsApp's process). You see here an example of multiple entry points: there is no `main()` like in a desktop app.
+
+Then, to activate a component in another app, you must create an _Intent_ object, which is essentially a message that activates either a specific component (explicit intent) or a type of component (implicit intent). For activities and services, the intent defines the action to perform (e.g., view or send something) and specifies the URI of the data to act on (e.g., this is an intent to dial a certain phone number). If the activity returns a result, it is returned in an Intent (e.g., let the user pick a contact and return a URI pointing to it). For broadcast receivers, the intent defines the announcement being broadcast (e.g., if battery is low it includes the BATTERY_LOW string), and a receiver can register for it by filtering on the string. Content providers are not activated by intents but rather when targeted by a request from what's called a Content Resolver, which handles all interaction with the content provider.
+
+## A mobile app: Structure and lifecycle
+
+The centerpiece of an Android app is the _activity_. Unlike the kinds of programs you've been writing so far, there is no `main()`, but rather the underlying OS initiates code in an _Activity_ by invoking specific callback methods.
+
+An activity provides the window in which the app draws its UI. One activity essentially implements one screen in an app, e.g., 1 activity for a Preferences screen, 1 activity for a Select Photo screen, etc.
+
+Apps contain multiple screens, each of which corresponds to an activity. One activity is specified as the main activity, i.e., the first screen to appear when you launch the app. Each activity can then start another activity, e.g., the main activity in an email app may provide the screen that shows your inbox, then you have screens for opening and reading a message, one for writing, etc.
+
+As a user navigates through, out of, and back into an app, the activities in the app transition through different states. Transitioning from one state to another is handled by specific callbacks that must be implemented in the activity. The activity learns about changes and reacts when the user leaves and re-enters it: for example, a streaming video player will likely pause the video and terminate the network connection when the user switches to another app; when the user returns to the app, it will reconnect to the network and allow the user to resume the video from the same spot.
+
+To understand the lifecycle of an activity, please see [the Android documentation](https://developer.android.com/guide/components/activities/activity-lifecycle).
+
+Another important element is the _fragment_. Fragments provide a way to modularize an activity's UI and reuse modules across different activities. This is a productive and efficient way to respond to various screen sizes, or to whether the phone is in portrait or landscape mode, where the UI is composed differently but from the same modules.
+
+A fragment may show the list of emails, and another fragment may display an email thread. On a phone, you would display only one at a time, and upon tap switch to the next activity. On a tablet, with the greater screen real estate, you have room for both. By modularizing the UI into fragments, it's easy to adapt the app to the different layouts by rearranging fragments within the same view when the phone switches from portrait to landscape mode.
+
+To understand fragments, please see [the Android documentation](https://developer.android.com/guide/fragments).
+
+You don't have to use fragments within activities, but such modularization makes the app more flexible and also makes it easier to maintain over time. A fragment defines and manages its own layout, has its own lifecycle, and can handle its own input events. But it cannot live on its own, it needs to be hosted by an activity or another fragment.
+
+Android beginners tend to put all their logic inside Activities and Fragments, ending up with "views" that do a lot more than just render the UI. Remember what we said about MVVM and how it maps to how you build an Android app. In the exercise set, we will ask you to go through an MVVM exercise. Android provides a convenient ViewModel class to store and manage UI-related data.
+
+## User experience (UX)
+
+User experience design (or "UX design") is about putting together an intuitive, responsive, navigable, usable interface to the app. You want to look at the app from the users' perspective to derive what can give them an easy, logical, and positive experience.
+
+So you must ask who your audience is: Old or young people? Intellectuals or blue-collar workers? Think back to the lecture on personas and user stories.
+
+The UI should enable the user to get around the app easily and to find quickly what they're looking for. To achieve this, several elements and widgets have emerged from years of experience building mobile apps. The [hamburger icon](https://en.wikipedia.org/wiki/Hamburger_button) is used for drop-down menus with further details--it avoids clutter. Home buttons give users a shortcut to home base. Chat bubbles offer quick help through context-sensitive messages.
+
+Personalization of the UI allows adapting to what the user is interested in, keeping the unrelated content away. For example, in the [EPFL Campus app](https://pocketcampus.org/epfl-en) you can select which features you see on the home screen, and so the home screen of two different users is likely to look different.
+
+In order to maximize readability, emphasize simplicity and clarity, choose adequate font size and image size. Avoid having too many elements on the screen, because that leads to confusion. Offer one necessary action per screen. Use [micro UX animations](https://uxplanet.org/ui-design-animations-microinteractions-50753e6f605c), which are little animations that appear when a specific action is performed or a particular item is hovered or touched; this can increase engagement and interactivity. E.g., hovering over a thumbnail shows details about it. E.g., hover zoom for maximizing the view of a specific part of something. Keep however the animations minimal, avoid flashiness, make them subtle and clear as to their intent.
+
+Keep in mind _thumb zones_. People use their mobile when they’re standing, walking, riding a bus, so the app should allow them to hold the device and view its screen and provide input in all these situations. So make it easy for the user to reach with their thumb the stuff they do most often. Leverage gesture controls to make it easy for a user to interact with apps--e.g., “holding” an item, “dragging” it to a container, and then “releasing”--but beware to do what users of that device are used to doing, not exotic stuff.
+
+Think of augmented reality and voice interaction, depending on the app, they can be excellent ways to interact with the device.
+
+## The mobile ecosystem
+
+A modern mobile app, no matter how good it is, can hardly survive without plugging into its ecosystem.
+
+Most mobile apps consist of the phone side of the app and the cloud side. Apps interact with cloud services typically over REST APIs, an architectural style that builds upon HTTP and JSON to offer CRUD (create-read-update-delete) operations on objects that are addressed by a URI. You use HTTP methods to access these resources via URL-encoded parameters.
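As a sketch of how CRUD maps onto HTTP methods, here is how a client could build such requests with Java's standard `java.net.http` API. The endpoint URL and JSON body are made up for illustration; no request is actually sent here.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical REST resource: a collection of "notes" addressed by URI.
class NotesApi {
    static final String BASE = "https://api.example.com/notes"; // made-up endpoint

    // Create -> POST to the collection
    static HttpRequest create(String json) {
        return HttpRequest.newBuilder(URI.create(BASE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    // Read -> GET on a specific object's URI
    static HttpRequest read(int id) {
        return HttpRequest.newBuilder(URI.create(BASE + "/" + id))
                .GET()
                .build();
    }

    // Delete -> DELETE on a specific object's URI
    static HttpRequest delete(int id) {
        return HttpRequest.newBuilder(URI.create(BASE + "/" + id))
                .DELETE()
                .build();
    }
}
```

An app would then pass one of these requests to an `HttpClient` and parse the JSON response.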
+
+Aside from the split architecture, apps also interact with the ecosystem through push notifications. These are automated messages sent to the user by the server of an app that's working in the background (i.e., not currently open). Each OS (e.g., Android, iOS) has its own push notification service with a specific API that apps can use, and they each operate a push engine in the cloud.
+
+The app developer enables their app with push notifications, at which point the app can start receiving incoming notifications. When you open the app, unique IDs for both the app and the device are created by the OS and registered with the OS push notification service. IDs are passed back to the app and also sent to the app publisher. The in-cloud service generates the message (either when a publisher produces it, or in response to some event, etc.) which can be targeted at users or a group of users and gets sent to the corresponding mobile. On Android, an incoming notification can create an Activity.
+
+A controversial aspect of the ecosystem is the extent to which it tracks the device and the user of the device. We do not discuss this aspect here in detail; it is just something to be aware of.
+
+A key ingredient of "plugging into" the ecosystem is the network. Be aware that your users may experience different bandwidth limitations (e.g., 3G vs. 5G), and the Internet doesn't work as smoothly everywhere in the world as we're used to. In addition to bandwidth issues, there is also the risk of experiencing a noticeable disconnection from the Internet. E.g., a highly interactive, graphics-heavy app will not be appropriate for markets such as Latin America or Africa, or for users in rural areas, because today there are still many areas with spotty connectivity.
+
+Perhaps a solution can be offered by the new concept of fog computing (as opposed to cloud computing), also called edge computing. It is about the many "peripheral" devices that connect to the periphery of a cloud, many of which generate lots of raw data (e.g., from sensors). Rather than forward this data to cloud-based servers, one could do as much processing as possible using computing units nearby, so that processed rather than raw data is forwarded to the cloud. As a result, bandwidth requirements are reduced. "The Fog" can support the Internet of Things (IoT), including phones, wearable health monitoring devices, connected vehicles, and augmented reality devices. IoT devices are often resource-constrained and have limited computational abilities to perform, e.g., cryptography computations, so a fog node nearby can provide security for IoT devices by performing these cryptographic computations instead.
+
+## Summary
+
+In this lecture, you learned:
+
+* Several ways in which a desktop app differs from a mobile app
+* How security, energy, and other requirements change when moving from a desktop to a mobile
+* How activities and fragments work in Android apps
+* The basics of good UX
+* How mobile apps can leverage their ecosystem
+
+You can now check out the [exercises](exercises/).
+",CS-305: Software engineering
+"# Evolution
+
+Imagine you are an architect asked to add a tower to a castle.
+You know what towers look like, you know what castles look like, and you know how to build a tower in a castle. This will be easy!
+
+But then you get to the castle, and it looks like this:
+
+![A castle that looks nothing like a standard castle]()
+
+Sure, it's a castle, and it fulfills the same purpose most castles do, but... it's not exactly what you had in mind.
+It's not built like a standard castle.
+There's no obvious place to add a tower, and what kind of tower will this castle need anyway?
+Where do you make space for a tower? How do you make sure you don't break half the castle while adding it?
+
+This lecture is all about evolving an existing codebase.
+
+
+## Objectives
+
+After this lecture, you should be able to:
+- Find your way in a _legacy codebase_
+- Apply common _refactorings_ to improve code
+- Document and quantify _changes_
+- Establish solid foundations with _versioning_
+
+
+## What is legacy code, and why should we care?
+
+""Legacy code"" really means ""old code that we don't like"".
+Legacy code may or may not have documentation, tests, and bugs.
+If it has documentation and tests, they may or may not be complete enough; tests may even already be failing.
+
+One common reaction to legacy code is disgust: it's ugly, it's buggy, why should we even keep it?
+If we rewrote the code from scratch we wouldn't have to ask all of these questions about evolution! It certainly sounds enticing.
+
+In the short term, a rewrite feels good. There's no need to learn about old code, instead you can use the latest technologies and write the entire application from scratch.
+
+However, legacy code works. It has many features, has been debugged and patched many times, and users rely on the way it works.
+If you accidentally break something, or if you decide that some "obscure" feature is not necessary, you will anger a lot of your users, who may decide to jump ship to a competitor.
+
+One infamous rewrite story is [that of Netscape 5](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/).
+In the '90s, Netscape Navigator was in tough competition with Microsoft's Internet Explorer.
+While IE became the butt of jokes later, at the time Microsoft was heavily invested in the browser wars.
+The Netscape developers decided that their existing Netscape 4 codebase was too old, too buggy, and too hard to evolve.
+They decided to write Netscape 5 from scratch.
+The result is that it took them three years to ship the next version of Netscape; in that time, Microsoft had evolved their existing IE codebase, far outpacing Netscape 4, and Netscape went bankrupt.
+
+Most rewrites, like Netscape, fail.
+A rewrite means a loss of experience, a repeat of many previous mistakes, and that's just to get to the same point as the previous codebase.
+Then one still needs time to add features that justify, to users, the cost of upgrading. Most rewrites run out of time or money and fail.
+
+It's not even clear what "a bug" is in legacy code, which is one reason rewrites are dangerous: some users depend on things that might be considered "bugs".
+For instance, Microsoft Excel [treats 1900 as a leap year](https://learn.microsoft.com/en-us/office/troubleshoot/excel/wrongly-assumes-1900-is-leap-year)
+even though it is not, because back when it was released it had to compete with a product named Lotus 1-2-3 that did have this bug.
+Fixing the bug means many spreadsheets would stop working correctly, as dates would become off by one. Thus, even nowadays, Microsoft Excel still contains a decades-old "bug" in the name of compatibility.
+
+A better reaction to legacy code is to _take ownership of it_: if you are assigned to a legacy codebase, it is now your code, and you should treat it just like any other code you are responsible for.
+If the code is ugly, it is your responsibility to fix it.
+
+
+## How can we improve legacy code?
+
+External improvements to an existing codebase, such as adding new features, fixing bugs, or improving performance, frequently require internal improvements to the code first.
+Some features may be difficult to implement in the code as it exists, but could be much easier if the code was improved first.
+This may require changing design _tradeoffs_, addressing _technical debt_, and _refactoring_ the codebase. Let's see each of these in detail.
+
+### Tradeoffs
+
+Software engineers make tradeoffs all the time when writing software, such as choosing an implementation that is faster at the cost of using more memory,
+or simpler to implement at the cost of being slower, or more reliable at the cost of more disk space.
+As code ages, its context changes, and old tradeoffs may no longer make sense.
+
+For instance, Windows XP, released in 2001, groups background services into a small handful of processes.
+If any background service crashes, it will cause all of the services in the same process to also crash.
+However, because there are few processes, this minimizes resource use.
+It would have been too much in 2001, on computers with as little as 64 MB of RAM, to dedicate one process and the associated overheads per background service.
+
+But in 2015, when Windows 10 was released, computers typically had well over 2 GB of RAM.
+Trading reliability for low resource use no longer made sense, so Windows 10 instead runs each background service in its own process.
+The cost of memory is tiny on the computers Windows 10 is made for, and the benefits from not crashing entire groups of services at a time are well worth it.
+The same choice made 15 years apart yielded a different decision.
+
+### Technical debt
+
+The accumulated cost of all the "cheap" and "quick" fixes that progressively make a codebase worse is called _technical debt_.
+Adding one piece of code that breaks modularity and hacks around the code's internals may be fine to meet a deadline, but after a few dozen such hacks, the code becomes hard to maintain.
+
+The concept is similar to monetary debt: it can make sense to invest more money than you have, so you borrow money and slowly pay it back.
+But if you don't regularly pay back at least the interest, your debt grows and grows, and so does the share of your budget you must spend on repaying your debt.
+You eventually go bankrupt from the debt payments taking up your entire budget.
+
+With technical debt, a task that should take hours can take a week instead, because one now needs to update the code in the multiple places where it has been copy-pasted for "hacks",
+fix some unrelated code that in practice depends on the specific internals of the code to change, write complex tests that must set up way more than they should need, and so on.
+You may no longer be able to use the latest library that would solve your problem in an hour, because your codebase is too old and depends on old technology the library is not compatible with,
+so instead you need weeks to reimplement that functionality yourself.
+You may regularly need to manually reimplement security patches done to the platform you use because you use an old version that is no longer maintained.
+
+This is one reason why standards are useful: using a standard version of a component means you can easily service it.
+If you instead have a custom component, maintenance becomes much more difficult, even if the component is nicer to work with at the beginning.
+
+### Refactoring
+
+Refactoring is the process of making _incremental_ and _internal_ improvements.
+These are improvements designed to make code easier to maintain, but that do not directly affect end users.
+
+Refactoring is about starting from a well-known code problem, applying a well-known solution, and ending up with better code.
+The well-known problems are sometimes called "code smells", because they're like a strange smell: not harmful on its own, but worrying, and could lead to problems if left unchecked.
+
+For instance, consider the following code:
+```java
+class Player {
+ int hitPoints;
+ String weaponName;
+ int weaponDamage;
+ boolean isWeaponRanged;
+}
+```
+Something doesn't smell right. Why does a `Player` have all of the attributes of its weapon?
+As it is, the code works, but what if you need to handle weapons independently of players, say, to buy and sell weapons at shops?
+Now you will need fake "players" that exist just for their weapons. You'll need to write code that hides these players from view.
+Future code that deals with players may not handle those "fake players" correctly, introducing bugs.
+Pay some of your technical debt and fix this with a refactoring: extract a class.
+```java
+class Player {
+ int hitPoints;
+ Weapon weapon;
+}
+class Weapon { ... }
+```
+Much better. Now you can deal with weapons independently of players.
+
+Your `Weapon` class now looks something like this:
+```java
+class Weapon {
+ boolean isRanged;
+ void attack() {
+ if (isRanged) { ... } else { ... }
+ }
+}
+```
+This doesn't smell right either. Every method on `Weapon` will have to first check if it is ranged or not, and some developers might forget to do that.
+Refactor it: use polymorphism.
+```java
+abstract class Weapon {
+ abstract void attack();
+}
+class MeleeWeapon extends Weapon { ... }
+class RangedWeapon extends Weapon { ... }
+```
+Better.
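A minimal runnable sketch of the refactored design, with made-up damage values for illustration, shows why this is better: each subclass carries its own attack logic, so callers never need an `if (isRanged)` check.

```java
// Sketch: polymorphism replaces the ranged/melee flag check.
// The damage numbers are invented for this example.
abstract class Weapon {
    abstract int attack();
}

class MeleeWeapon extends Weapon {
    @Override int attack() { return 5; } // close-quarters swing
}

class RangedWeapon extends Weapon {
    @Override int attack() { return 3; } // projectile shot
}
```

A caller can now hold any `Weapon` and call `attack()`; the JVM dispatches to the right subclass, and nobody can forget the flag check because it no longer exists.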
+
+But then you look at the damage calculation...
+```java
+int damage() {
+ return level * attack - max(0, armor - 500) *
+ attack / 20 + min(weakness * level / 10, 400);
+}
+```
+What is this even doing? It's hard to tell what was intended and whether there's any bug in there, let alone to extend it.
+Refactor it: extract variables.
+```java
+int damage() {
+ int base = level * attack;
+ int resistance = max(0, armor - 500) * attack / 20;
+ int bonus = min(weakness * level / 10, 400);
+ return base - resistance + bonus;
+}
+```
+This does the same thing, but now you can tell what it is: there's one component for damage that scales with the level,
+one component for taking the resistance into account, and one component for bonus damage.
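One way to gain confidence in such a refactoring is to check that the old and new versions agree on a few inputs. A small sketch, with the fields turned into parameters and made-up stats:

```java
// Both versions of the damage formula, side by side, to check the
// refactoring preserved behavior. The stats used below are invented.
class DamageCheck {
    static int original(int level, int attack, int armor, int weakness) {
        return level * attack - Math.max(0, armor - 500) *
            attack / 20 + Math.min(weakness * level / 10, 400);
    }

    static int refactored(int level, int attack, int armor, int weakness) {
        int base = level * attack;
        int resistance = Math.max(0, armor - 500) * attack / 20;
        int bonus = Math.min(weakness * level / 10, 400);
        return base - resistance + bonus;
    }
}
```

For example, `original(10, 50, 600, 30)` and `refactored(10, 50, 600, 30)` both yield 280.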
+
+You do not have to do these refactorings by hand; any IDE contains tools to perform refactorings such as extracting an expression into a variable or renaming a method.
+
+---
+#### Exercise
+Take a look at the [gilded-rose/](exercises/lecture/gilded-rose) exercise folder.
+The code is obviously quite messy.
+First, try figuring out what the code is trying to do, and what problems it has.
+Then, write down what refactorings you would make to improve the code. You don't have to actually do the refactorings, only to list them, though you can do them for more practice.
+
+
+Proposed solution (click to expand)
+
+
+The code tries to model the quality of items over time. But it has so many special cases and so much copy-pasted code that it's hard to tell.
+
+Some possible refactorings:
+- Turn the `for (int i = 0; i < items.length; i++)` loop into a simpler `for (Item item : items)` loop
+- Simplify `item.quality = item.quality - item.quality` into the equivalent but clearer `item.quality = 0`
+- Extract `if (item.quality < 50) item.quality = item.quality + 1;` into a method `incrementQuality` on `Item`, which can thus encapsulate this cap at 50
+- Extract numbers such as `50` into named constants such as `MAX_QUALITY`
+- Extract repeated checks such as those on the item names into methods on `Item` such as `isPerishable`, and maybe create subclasses of `Item` instead of checking names
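As a sketch of the third and fourth bullets, assuming an `Item` class with a public `quality` field as in the exercise:

```java
// Sketch: the magic number 50 becomes a named constant, and the capped
// increment is encapsulated in a method instead of being copy-pasted.
class Item {
    static final int MAX_QUALITY = 50;
    int quality;

    Item(int quality) { this.quality = quality; }

    void incrementQuality() {
        if (quality < MAX_QUALITY) {
            quality = quality + 1;
        }
    }
}
```

Every copy-pasted `if (item.quality < 50) item.quality = item.quality + 1;` in the original can then become a single call to `item.incrementQuality()`.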
+
+
+
+
+
+## Where to start in a large codebase?
+
+Refactorings are useful for specific parts of code, but where to even start if you are given a codebase of millions of lines of code and told to fix a bug?
+There may not be documentation, and if there is it may not be accurate.
+
+A naïve strategy is to "move fast and break things", as [was once Facebook's motto](https://www.businessinsider.com/mark-zuckerberg-on-facebooks-new-motto-2014-5).
+The advantage is that you move fast, but... the disadvantage is that you break things.
+The latter tends to massively outweigh the former in any large codebase with many users.
+Making changes you don't fully understand is a recipe for disaster in most cases.
+
+An optimistic strategy, often taken by beginners, is to understand _everything_ about the code.
+Spend days and weeks reading every part of the codebase. Watch video tutorials about every library the code uses.
+This is not useful either, because it takes far too long, and because it will in fact never finish since others are likely making changes to the code while you learn.
+Furthermore, by the time you have read half the codebase, you won't remember what exactly the first part you looked at was, and your time will have been wasted.
+
+Let's see three components of a more realistic strategy: learning as you go, using an IDE, and taking notes.
+
+### Learning as you go
+
+Think of how detectives such as [Sherlock Holmes](https://en.wikipedia.org/wiki/Sherlock_Holmes) or [Miss Marple](https://en.wikipedia.org/wiki/Miss_Marple) solve a case.
+They need information about the victim, what happened, and any possible suspects, because that is how they will find out who did it.
+But they do not start by asking every person around for their entire life story from birth to present.
+They do not investigate the full history of every item found at the scene of the crime.
+While the information they need is somewhere in there, getting it by enumerating all information would take too much time.
+
+Instead, detectives only learn what they need when they need it. If they find evidence that looks related, they ask about that evidence.
+If somebody's behavior is suspect, they look into that person's general history, and if something looks related, they dig deeper for just that detail.
+
+This is what you should do as well in a large codebase. Learn as you go: only learn what you need when you need it.
+
+### Using an IDE
+
+You do not have to manually read through files to find out which class is used where, or which modules call which function.
+IDEs have built-in features to do this for you, and those features get better if the language you're using is statically typed.
+Want to find who uses a method? Right click on the method's name, and you should find some tool such as "find all references".
+Do you realize the method is poorly named given how the callers use it? Refactor its name using your IDE, don't manually rename every use.
+
+One key feature of IDEs that will help you is the _debugger_.
+Find the program's "main" function, the one called at the very beginning, and put a breakpoint on its first statement.
+Run the program with the debugger, and you're ready to start your investigation by following along with the program's flow.
+Want to know more about a function call? Step into it. Think that call is not relevant right now? Step over it instead.
+
+### Taking notes
+
+You cannot hope to remember all context about every part of a large codebase by heart.
+Instead, take notes as you go, at first for yourself. You may later turn these notes into documentation.
+
+One formal way to take notes is to write _regression tests_.
+You do not know what behavior the program should have, but you do know that it works and you do not want to break it.
+Thus, write a test without assertions, run the test under the debugger, and look at the code's behavior.
+Then, add assertions to the test for the current behavior.
+This serves both as notes for yourself of what happens when the code is used in a specific way, and as an automated way to check if you broke something while modifying the code later.
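For instance, suppose the codebase contains a rounding helper whose intended behavior nobody remembers (`legacyRound` here is a made-up stand-in for real legacy code). After watching it run in the debugger, you pin down what it currently does:

```java
// The legacy code under investigation (hypothetical).
class LegacyMath {
    static int legacyRound(double x) {
        return (int) (x + 0.5); // quirk: truncates toward zero after adding 0.5
    }
}

// Regression test: the assertions record observed behavior, not intended behavior.
class LegacyMathRegressionTest {
    static void run() {
        check(LegacyMath.legacyRound(2.4) == 2);
        check(LegacyMath.legacyRound(2.5) == 3);
        check(LegacyMath.legacyRound(-1.4) == 0); // surprising, but what it does today
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("regression: observed behavior changed");
    }
}
```

If a later change makes this test fail, you know you altered behavior that some user may depend on, and you can decide deliberately whether that is acceptable.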
+
+Another formal way to take notes is to write _facades_.
+""Facade"" is a design pattern intended to simplify and modularize existing code by hiding complex parts behind simpler facades.
+For instance, let's say you are working in a codebase that only provides a very generic ""draw"" method that takes a lot of arguments, but you only need to draw rectangles:
+```java
+var points = new Point[] {
+ new Point(0, 0), new Point(x, 0),
+ new Point(x, y), new Point(0, y)
+};
+draw(points, true, false, Color.RED, null, new Logger(), ...);
+```
+This is hard to read and maintain, so write a facade for it:
+```java
+drawRectangle(0, 0, x, y, Color.RED);
+```
+Then implement the `drawRectangle` method in terms of the complex `draw` method.
+The behavior hasn't changed, but the code is easier to read.
+Now, you only need to look at the complex part of the code if you actually need to add functionality related to it.
+Reading the code that needs to draw rectangles no longer requires knowledge of the complex drawing function.
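+
+As a sketch, the facade could look as follows; the `draw` method here is a simplified stand-in for the codebase's real method, whose full parameter list was elided above:
+
+```java
+import java.awt.Color;
+import java.awt.Point;
+
+public class DrawingFacade {
+    // Simplified stand-in for the legacy, very generic draw method.
+    static void draw(Point[] points, boolean fill, boolean outline, Color color) {
+        System.out.println("drawing " + points.length + " points in " + color);
+    }
+
+    // Helper computing the rectangle's corners from two opposite corners.
+    static Point[] rectanglePoints(int x0, int y0, int x1, int y1) {
+        return new Point[] {
+            new Point(x0, y0), new Point(x1, y0),
+            new Point(x1, y1), new Point(x0, y1)
+        };
+    }
+
+    // The facade: callers state their intent, the facade fills in the rest.
+    static void drawRectangle(int x0, int y0, int x1, int y1, Color color) {
+        draw(rectanglePoints(x0, y0, x1, y1), true, false, color);
+    }
+
+    public static void main(String[] args) {
+        drawRectangle(0, 0, 10, 5, Color.RED);
+    }
+}
+```
+
+Callers now depend only on `drawRectangle`; the noisy arguments of `draw` are decided in exactly one place.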
+
+---
+#### Exercise
+Take a look at the [pacman/](exercises/lecture/pacman) exercise folder.
+It's a cool "Pac-Man" game written in Java, with a graphical user interface.
+It's fun! Imagine you were asked to maintain it and add features. You've never read its code before, so where should you start?
+
+First, look at the code, and take some notes: which classes exist, and what do they do?
+Then, use a debugger to inspect the code's flow, as described above.
+If someone asked you to extend this game to add a new kind of ghost, with a different color and behavior, which parts of the code would you need to change?
+Finally, what changes could you make to the code to make it easier to add more kinds of ghosts?
+
+
+Proposed solution (click to expand)
+
+
+To add a kind of ghost, you'd need to add a value to the `ghostType` enum, and a class extending `Ghost`.
+You would then need to add parsing logic in `MapEditor` for your ghost, and to link the enum and class together in `PacBoard`.
+
+To make the addition of more ghosts easier, you could start by re-formatting the code to your desired standard, and changing names to be more uniform, such as the casing of `ghostType`.
+One improvement would be to have a single abstraction representing ghosts, instead of having both `ghostType` and `Ghost`.
+It would also probably make sense to split the parsing logic from `MapEditor`, since editing and parsing are not the same job.
+
+Remember: you might look at this Pac-Man game's code thinking it's not as nice as some idealized code that could exist, but unlike the idealized code, this one already exists, and it works.
+Put energy into improving the code rather than complaining about it.
+
+
+
+
+---
+
+Remember the rule typically used by scouts: leave the campground cleaner than you found it.
+In your case, the campground is code.
+Even small improvements can pay off with time, just like monetary investments.
+If you improve a codebase by 1% every day, after 365 days it will be around 38x better than when you started.
+But if you let it deteriorate by 1% instead, it will be only 0.03x as good.
+
+
+## How should we document changes?
+
+You've made some changes to legacy code, such as refactorings and bug fixes. Now, how do you document this?
+The situation you want to avoid is providing no documentation, losing all the knowledge you gained while making these changes, and needing to figure it all out again, whether for yourself or for someone else.
+
+Let's see three kinds of documentation you may want to write: for yourself, for code reviewers, and for maintainers.
+
+### Documenting for yourself
+
+The best way to check and improve your understanding of a legacy codebase is to teach others about it, which you can do by writing comments and documentation.
+This is a kind of "refactoring": you improve the code's maintainability by writing the comments and documentation that good code should have.
+
+For instance, let's say you find the following line in a method somewhere without any explanation:
+```java
+if (i > 127) i = 127;
+```
+After spending some time on it, such as commenting it out to see what happens, you realize that this value is eventually sent to a server which refuses values above 127.
+You can document this fact by adding a comment, such as `// The server refuses values above 127`. You now understand the code better.
+Then you find the following method:
+```java
+int indexOfSpace(String text) { ... }
+```
+You think you understand what this does, but when running the code, you realize it not only finds spaces but also tabs.
+After some investigation, it turns out this was a bug that is now a specific behavior on which clients depend, so you must keep it.
+You can thus add some documentation: `Also finds tabs. Clients depend on this buggy behavior`.
+You now understand the code better, and you won't be bitten by this issue again.
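+
+Putting these notes directly into the code, the documented method might look like the following sketch (the implementation shown is a plausible stand-in, not the actual code):
+
+```java
+public class TextUtils {
+    /**
+     * Returns the index of the first space in {@code text}, or -1 if there is none.
+     * NOTE: also matches tabs. This was originally a bug, but clients now
+     * depend on it, so the behavior must be preserved.
+     */
+    static int indexOfSpace(String text) {
+        for (int i = 0; i < text.length(); i++) {
+            char c = text.charAt(i);
+            if (c == ' ' || c == '\t') return i; // tabs too, see note above
+        }
+        return -1;
+    }
+}
+```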
+
+### Documenting for code reviewers
+
+You submit a pull request to a legacy codebase. Your changes touch code that your colleagues aren't quite familiar with either.
+How do you save your colleagues some time? You don't want them to have to understand exactly what is going on before being able to review your change,
+yet if you only give them code, this is what will happen.
+
+First, your pull request should have a _description_, in which you explain what you are changing and why.
+This description can be as long as necessary, and can include remarks such as whether this is a refactoring-only change or a change in behavior,
+or why you had to change some code that intuitively looks like it should not need changes.
+
+Second, the commits themselves can help reviewing if you split your work correctly, or if you rewrite the history once you are done with the work.
+For instance, your change may involve a refactoring and a bug fix, because the refactoring makes the bug fix cleaner.
+If you submit the change as a single commit, reviewers will need time and energy to read and understand the entire change at once.
+If a reviewer is interrupted in the middle of understanding that commit, they will have to start from scratch after the interruption.
+
+Instead of one big commit, you can submit a pull request consisting of one commit per logical task in the request.
+For instance, you can have one commit for a refactoring, and one for a bug fix. This is easier to review, because they can be reviewed independently.
+This is particularly important for large changes: the time spent reviewing a commit is not linear in the length of the commit, but closer to exponential,
+because we humans have limited brain space and usually do not have large chunks of uninterrupted time whenever we'd like.
+Instead of spending an hour reviewing 300 modified lines of code, it's easier to spend 10 times 3 minutes reviewing commits of 30 lines at a time.
+This also lessens the effects of being interrupted during a review.
+
+### Documenting for maintainers
+
+We've talked about documentation for individual bits of code, but future maintainers need more than that to understand a codebase.
+Design decisions must be documented too: what did you choose and why?
+Even if you plan on maintaining a project yourself for a long time, this saves some work: your future colleagues could each take 5 minutes of your time asking
+you the same question about some design decision taken long ago, or you could spend 10 minutes once documenting it in writing.
+
+At the level of commits, this can be done in a commit message, so that future maintainers can use tools such as [git blame](https://git-scm.com/docs/git-blame)
+to find the commit that last changed a line of code and understand why it is that way.
+
+At the level of modules or entire projects, this can be done with _Architectural Decision Records_.
+As their name implies, these are all about recording decisions made regarding the architecture of a project: the context, the decision, and its consequences.
+The goal of ADRs is for future maintainers to know not only what choices were made but also why, so that maintainers can make an informed decision on whether to make changes.
+For instance, knowing that a specific library for user interfaces was chosen because of its excellent accessibility features informs maintainers that even if they do not like
+the library's API, they should pay particular attention to accessibility in any potential replacement. Perhaps the choice was made at a time when alternatives had poor accessibility,
+and the maintainers can change that choice because in their time there are alternatives that also have great accessibility features.
+
+The context includes user requirements, deadlines, compatibility with previous systems, or any other piece of information that is useful to know to understand the decision.
+For instance, in the context of lecture notes for a course, the context could include "We must provide lecture notes, as student feedback tells us they are very useful"
+and "We want to allow contributions from students, so that they can easily help if they find typos or mistakes".
+
+The decision includes the alternatives that were considered, the arguments for and against each of them, and the reasons for the final choice.
+For instance, still in the same context, the alternatives might be "PDF files", "documents on Google Drive", and "documents on a Git repository".
+The arguments could then include "PDF files are convenient to read on any device, but they make it hard to contribute".
+The final choice might then be "We chose documents on a Git repository because of the ease of collaboration and review, and because GitHub provides a nice preview of Markdown files online".
+
+The consequences are the list of tasks and changes that must be done in the short-to-medium term.
+This is necessary to apply the decision, even if it may not be useful in the long term.
+For instance, one person might be tasked with converting existing lecture notes to Markdown, and another might be tasked to create an organization and a repository on GitHub.
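+
+Putting the three parts together, an ADR for the lecture-notes example could look like this (a sketch; teams use varying templates):
+
+```
+# ADR: Publish lecture notes as Markdown in a Git repository
+
+## Context
+We must provide lecture notes, as student feedback tells us they are very
+useful, and we want to allow contributions from students.
+
+## Decision
+Alternatives considered: PDF files, documents on Google Drive, documents
+on a Git repository. PDF files are convenient to read on any device, but
+they make it hard to contribute. We chose documents on a Git repository
+because of the ease of collaboration and review, and because GitHub
+provides a nice preview of Markdown files online.
+
+## Consequences
+- Convert the existing lecture notes to Markdown
+- Create an organization and a repository on GitHub
+```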
+
+It is important to keep ADRs _close to the code_, such as in the same repository, or in the same organization on a platform like GitHub.
+If ADRs are in some unrelated location that only current maintainers know about, they will be of no use to future maintainers.
+
+
+## How can we quantify changes?
+
+Making changes is not only about describing them qualitatively, but also about telling people who use the software whether the changes will affect them or not.
+For instance, if you publish a release that removes a function from your library, people who used that function cannot use this new release immediately.
+They need to change their code to no longer use the function you removed, which might be easy if you provided a replacement, or might require large changes if you did not.
+And if the people using your library cannot or do not want to update their software, for instance because they have lost the source code, they now have to rewrite their software.
+
+Let's talk _compatibility_, and specifically three different axes: theory vs practice, forward vs backward, and source vs binary.
+
+### Theory vs practice
+
+In theory, any change could be an "incompatible" change.
+Someone could have copied your code or binary somewhere, and start their program by checking that yours is still the exact same one. Any change would make this check fail.
+Someone could depend on the exact precision of some computation whose precision is not documented, and then [fail](https://twitter.com/stephentyrone/status/1425815576795099138)
+when the computation is made more precise.
+
+In practice, we will ignore the theoretical feasibility of detecting changes and choose to be "reasonable" instead.
+What "reasonable" means depends on the context: Microsoft Windows, for instance, has to provide compatibility modes for all kinds of old software that did questionable things.
+Microsoft must do this because one of the key features they provide is compatibility, and customers might move to other operating systems otherwise.
+Most engineers do not have such strict compatibility requirements.
+
+### Forward vs backward
+
+A software release is _forward compatible_ if clients written for the next release can also use it without needing changes.
+This is also known as "upward compatibility".
+That is, if a client works on version N+1, forward compatibility means it should also work on version N.
+Forward compatibility means that you cannot ever add new features or make changes that break behavior,
+since a client using those new features could not work on the previous version that did not have these features.
+It is rare in practice to offer forward compatibility, since the only changes allowed are performance improvements and bug fixes.
+
+A software release is _backward compatible_ if clients written for the previous release can also use it without needing changes.
+That is, if a client works on version N-1, backward compatibility means it should also work on version N.
+Backward compatibility is the most common form of compatibility and corresponds to the intuitive notion of "don't break things".
+If something works, it should continue working. For instance, if you upgrade your operating system, your applications should still work.
+
+Backward compatibility typically comes with a set of "supported" scenarios, such as a public API.
+It is important to define what is supported, since otherwise some clients could misunderstand the guarantee,
+and their code could break even though you intended to maintain backward compatibility.
+For instance, old operating systems did not have memory protection: any program could read and write any memory, including the operating system's.
+This was not something programs should have been doing, but they could. When updating to a newer OS with memory protection, this no longer worked,
+but it was not considered breaking backward compatibility since it was never a feature in the first place, only a limitation of the OS.
+
+One extreme example of providing backward compatibility is Microsoft's [App Assure](https://www.microsoft.com/en-us/fasttrack/microsoft-365/app-assure):
+if a company has a program that used to work and no longer works after upgrading Windows, Microsoft will fix it, for free.
+This kind of guarantee is what allowed Microsoft to dominate in the corporate world; no one wants to have to frequently rewrite their programs,
+no matter how much "better" or "nicer" the new technologies are. If something works, it works.
+
+### Source vs binary
+
+Source compatibility is about whether your customers' source code still compiles against a different version of your code.
+Binary compatibility is about whether your customers' binaries still link with a different version of your binary.
+These are orthogonal to behavioral compatibility; code may still compile and link against your code even though the behavior at run-time has changed.
+
+Binary compatibility can be defined in terms of the "ABI", or "Application Binary Interface", just like the "API" for source code.
+The ABI defines how components call each other, such as method signatures: method names, return types, parameter types, and so on.
+The exact compatibility requirements differ due to the ABI of various languages and platforms.
+For instance, parameter names are not a part of Java's ABI, and thus can be changed without breaking binary compatibility.
+In fact, parameter names are not a part of Java's API either, and can thus also be changed without breaking source compatibility.
+
+An interesting example of preserving source but not binary compatibility is C#'s optional parameters.
+A definition such as `void Run(int a, int b = 0)` means the parameter `b` is optional with a default value of `0`.
+However, this is purely source-based; writing `Run(a)` is translated by the compiler as if one had written `Run(a, 0)`.
+This means that evolving the method from `void Run(int a)` to `void Run(int a, int b = 0)` is compatible at the source level,
+because the compiler will implicitly add the new parameter to all existing calls, but not at the binary level, because the method signature changed so existing binaries will not find the one they expect.
+
+An example of the opposite is Java's generic type parameters, due to type erasure.
+Changing a class from `Widget<A>` to `Widget<A, B>` is incompatible at the source level,
+since the second type parameter must now be added to all existing uses by hand.
+It is however compatible at the binary level,
+because generic parameters in Java are erased during compilation.
+If the second generic parameter is not otherwise used, binaries will not even know it exists.
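+
+Type erasure can even be observed directly: at run time, two differently parameterized instances share the same class object. A small sketch:
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+
+public class ErasureDemo {
+    public static void main(String[] args) {
+        List<String> strings = new ArrayList<>();
+        List<Integer> numbers = new ArrayList<>();
+        // The type parameters existed only at compile time; after erasure,
+        // both objects have the exact same runtime class.
+        System.out.println(strings.getClass() == numbers.getClass()); // prints "true"
+    }
+}
+```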
+
+---
+#### Exercise
+Which of these preserves backward compatibility?
+- Add a new class
+- Make a return type _less_ specific (e.g., from `String` to `Object`)
+- Make a return type _more_ specific (e.g., from `Object` to `String`)
+- Add a new parameter to a function
+- Make a parameter type _less_ specific
+- Make a parameter type _more_ specific
+
+Proposed solution (click to expand)
+
+
+- Adding a new class is binary compatible, since no previous version could have referred to it.
+ It is generally considered to be source compatible, except for the annoying scenario in which someone used wildcard imports (e.g., `import org.example.*`)
+ for two modules in the same file, and your new class has the same name as another class in another module, at which point the compiler will complain of an ambiguity.
+- Making a return type less specific is not backward compatible.
+ The signature changes, so binary compatibility is already broken,
+ and calling code must be modified to be able to deal with more kinds of values, so source compatibility is also broken.
+- Making a return type more specific is source compatible, since code that dealt with the return type can definitely deal with a method that happens to always return a more specific type.
+ However, since the signature changes, it is not binary compatible.
+- Adding a parameter is neither binary nor source compatible, since the signature changes and code must now be modified to pass an additional argument whenever the function is called.
+- Making a parameter type less specific is source compatible, since a function that accepts a general type of parameter can be used by always giving a more specific type.
+ However, since the signature changes, it is not binary compatible.
+- Making a parameter type more specific is not backward compatible.
+ The signature changes, so binary compatibility is already broken,
+ and calling code must be modified to always pass a more specific type for arguments, so source compatibility is also broken.
+
+
+
+---
+
+What compatibility guarantees should you provide, then?
+This depends on who your customers are, and asking them is a good first step.
+Avoid over-promising; the effort required to maintain very strict compatibility may be more than the benefits you get from the one or two specific customers who need this much compatibility.
+Most of your customers are likely to be "reasonable".
+
+Backward compatibility is the main guarantee people expect in practice, even if they do not say so explicitly.
+Breaking things that work is viewed poorly.
+However, making a scenario that previously had a "failure" result, such as throwing an exception, return a "success" result instead is typically OK.
+Customers typically do not depend on things _not_ working.
+
+
+## How can we establish solid foundations?
+
+You are asked to run an old Python script that a coworker wrote ages ago.
+The script begins with `import simplejson`, and then proceeds to use the `simplejson` library.
+You download the library, run the script... and you get a `NameError: name 'scanstring' is not defined`.
+
+Unfortunately, because the script did not specify what _version_ of the library it expected, you now have to figure it out by trial and error.
+For crashes such as missing functions, this can be done relatively quickly by going through versions in a binary search.
+However, it is also possible that your script will silently give a wrong result with some versions of the library.
+For instance, perhaps the script depends on a bug fix made at a specific time,
+and running the script with a version of the library older than that will give a result that is incorrect but not obviously so.
+
+Versions are specific, tested, and named releases.
+For instance, ""Windows 11"" is a version, and so is ""simplejson 3.17.6"".
+Versions can be more or less specific; for instance, ""Windows 11"" is a general product name with a set of features, and some more minor features were added in updates such as ""Windows 11 22H2"".
+
+Typical components of a version include a major and a minor version number, sometimes followed by a patch number and a build number;
+a name or sometimes a codename; a date of release; and possibly even other information.
+
+In typical usage, changing the major version number is for big changes and new features,
+changing the minor version number is for small changes and fixes, and changing the rest such as the patch version number is for small fixes that may not be noticeable to most users,
+as well as security patches.
+
+Versioning schemes can be more formal, such as [Semantic Versioning](https://semver.org/),
+a commonly used format in which versions have three main components: `Major.Minor.Patch`.
+Incrementing the major version number is for changes that break compatibility, the minor version number is for changes that add features while remaining compatible,
+and the patch number is for compatible changes that do not add features.
+As already stated, remember that the definition of "compatible" changes is not objective.
+Some people may consider a change to break compatibility even if others think it is compatible.
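+
+As a sketch of the scheme, here is how the next version number could be derived from the kind of change (illustrative only; real projects use their release tooling for this):
+
+```java
+public class SemVer {
+    // Given a "Major.Minor.Patch" version and the kind of change,
+    // compute the next version following semantic versioning.
+    static String next(String version, String change) {
+        String[] p = version.split("\\.");
+        int major = Integer.parseInt(p[0]);
+        int minor = Integer.parseInt(p[1]);
+        int patch = Integer.parseInt(p[2]);
+        switch (change) {
+            case "breaking": return (major + 1) + ".0.0";             // breaks compatibility
+            case "feature":  return major + "." + (minor + 1) + ".0"; // compatible addition
+            default:         return major + "." + minor + "." + (patch + 1); // compatible fix
+        }
+    }
+
+    public static void main(String[] args) {
+        System.out.println(next("1.3.4", "fix"));      // 1.3.5
+        System.out.println(next("1.3.4", "feature"));  // 1.4.0
+        System.out.println(next("1.3.4", "breaking")); // 2.0.0
+    }
+}
+```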
+
+Let's see three ways in which you will use versions: publishing versioned releases, _deprecating_ public APIs if truly necessary, and consuming versions of your dependencies.
+
+### Versioning your releases
+
+If you allowed customers to download your source code and compile it whenever they want,
+it would be difficult to track who is using what, and any bug report would start with a long and arduous process of figuring out exactly which version of the code is in use.
+Instead, if a customer states they are using version 5.4.1 of your product and encounter a specific bug, you can immediately know which code this corresponds to.
+
+Providing specific versions to customers means providing specific guarantees, such as "version X of our product is compatible with versions Y and Z of the operating system",
+or "version X of our product will be supported for 10 years with security patches".
+
+You do not have to maintain a single version at a time; products routinely have multiple versions under active support,
+such as [Java SE](https://www.oracle.com/java/technologies/java-se-support-roadmap.html).
+
+In practice, versions are typically different branches in a repository.
+If a change is made to the "main" branch, you can then decide whether it should be ported to some of the other branches.
+Security fixes are a good example of changes that should be ported to all versions that are still supported.
+
+### Deprecating public APIs
+
+Sometimes you realize your codebase contains some very bad mistakes that lead to problems,
+and you'd like to correct them in a way that technically breaks compatibility but still leads to a reasonable experience for your customers.
+That is what _deprecation_ is for.
+
+By declaring that a part of your public surface is deprecated, you are telling customers that they should stop using it, and that you may even remove it in a future version.
+Deprecation should be reserved for cases that are truly problematic, not just annoying.
+For instance, if the guarantees provided by a specific method force your entire codebase to use a suboptimal design that makes everything else slower, it may be worth removing the method.
+Another good example is methods that accidentally make it very easy to introduce bugs or security vulnerabilities due to subtleties in their semantics.
+
+For instance, Java's `Thread::checkAccess` method was deprecated in Java 17,
+because it depends on the Java Security Manager, which very few people use in practice and which constrains the evolution of the Java platform,
+as [JEP 411 states](https://openjdk.org/jeps/411).
+
+Here is an example of a less reasonable deprecation from Python:
+```
+>>> from collections import Iterable
+DeprecationWarning: Using or importing the ABCs from 'collections'
+instead of from 'collections.abc' is deprecated since Python 3.3,
+and in 3.10 it will stop working
+```
+Sure, having classes in the "wrong" module is not great, but the cost of maintaining backward compatibility is low.
+Breaking all code that expects the "Abstract Base Classes" to be in the "wrong" module is likely more trouble than it's worth.
+
+Deprecating in practice means thinking about whether the cost is worth the risk, and if it is, using your language's way of deprecating,
+such as `@Deprecated(...)` in Java or `[Obsolete(...)]` in C#.
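+
+In Java, this might look as follows; the method names here are hypothetical, and the `since` and `forRemoval` elements of `@Deprecated` exist since Java 9:
+
+```java
+public class Drawing {
+    /**
+     * @deprecated Use {@link #drawRectangle} instead; this method's generic
+     * argument list makes it too easy to pass arguments in the wrong order.
+     */
+    @Deprecated(since = "5.0", forRemoval = true)
+    public void draw(int[] points) { /* ... */ }
+
+    public void drawRectangle(int x0, int y0, int x1, int y1) { /* ... */ }
+}
+```
+
+Compilers warn at every call site of `draw`, and `forRemoval = true` signals that the method will disappear in a future version, not merely that a better alternative exists.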
+
+### Consuming versions
+
+Using specific versions of your dependencies gives you a "known good" environment that you can use as a solid base to work from.
+You can update dependencies as needed, using version numbers as a hint for what kind of changes to expect.
+
+This does not mean you should 100% trust all dependencies to follow compatibility guidelines such as semantic versioning.
+Even those who try to follow such guidelines can make mistakes, and updating a dependency from 1.3.4 to 1.3.5 could break your code due to such a mistake.
+But at least you know that your code worked with 1.3.4 and you can go back to it if needed.
+The worst-case scenario, which is unfortunately common with old code, is when you cannot build a codebase anymore
+because it does not work with the latest versions of its dependencies and you do not know which versions it expects.
+You then have to spend lots of time figuring out which versions work and do not work, and writing them down so future you does not have to do it all over again.
+
+In practice, software engineers manage dependencies with package managers, such as Gradle for Java, which fetch dependencies given their names and versions:
+```
+testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.1'
+```
+This makes it easy to get the right dependency given its name and version, without having to manually find it online.
+It's also easy to update dependencies; package managers can even tell you whether there is any newer version available.
+You should however be careful with "wildcard" versions:
+```
+testImplementation 'org.junit.jupiter:junit-jupiter-api:5.+'
+```
+Not only can such a wildcard cause your build to break, because it will silently use a newer version when one becomes available, possibly containing a bug or an incompatible change,
+but you will also have to spend time figuring out which version you were previously using, since it is not written down anywhere.
+
+For big dependencies such as operating systems, one way to easily save the entire environment as one big "version" is to use a virtual machine or container.
+Once it is built, you know it works, and you can distribute your software on that virtual machine or container.
+This is particularly useful for software that needs to be exactly preserved for long periods of time, such as scientific artifacts.
+
+
+## Summary
+
+In this lecture, you learned:
+- Evolving legacy code: goals, refactorings, and documentation
+- Dealing with large codebases: learning as you go, improving the code incrementally
+- Quantifying and describing changes: compatibility and versioning
+
+You can now check out the [exercises](exercises/)!